Saturday, December 7, 2019

Business Report on the Ethical Dilemma of Driving

Question: What is a business report on the ethical dilemma of driving? Answer:

Introduction

The company is in a rush to be first to market with a new product: self-driving electric cars which, having undergone and passed various tests and simulations, are deemed by management to be ready for the market. However, according to the AI consultant, there is still work to be done from a technical and safety point of view to ensure the cars are fully ready for real-life situations. This will entail creating simulations of accident scenarios in which life-and-death decisions have to be made by the car. This report gives insights into the situation, discussing the underlying ethical principles the company, and especially the AI consultant, is faced with, as well as the various possible scenarios; a recommendation is made at the end of the report on what can be done, given the ethical dilemma, so that there is a win-win outcome for all stakeholders.

Ethical Dilemma Facing the AI (Artificial Intelligence) Consultant

The AI consultant is in a unique situation with regard to ethics; indeed, he faces a huge dilemma. On the one hand, the consultant wants to do his job, adhere to the industry's ethical principles, and ensure the self-driving cars are safe for users and other stakeholders, including pedestrians and other road users. While the tests have been successful and the self-driving cars have been learning, the consultant knows they are machines and that real-world conditions are very different from a test situation. On the other hand, the self-driving cars are a business, not a practice exercise; it is therefore important that the company gets the cars to market before competitors do (Lin, 2013).
The ethical dilemma, therefore, is: what if something goes terribly wrong and the self-driving cars are unable to make the right decisions in emergencies or life-and-death situations? Consider an accident situation where a choice must be made: an oncoming bus full of children is about to crash into the self-driving car, which is carrying a single elderly passenger; if the car swerves to avoid the collision, it could fall off a cliff, injuring or killing its passenger and destroying itself. This is a huge ethical and moral dilemma. There is also the possibility that the situation is unrealistic; the cars have been tested, and such a doomsday scenario may never happen. So if the consultant passes the cars as fit for road use, he will be hoping nothing drastic happens, and if nothing does, he will be vindicated. But if there is an accident involving injuries, damage, or even fatalities, he will always feel guilty and may have to take full responsibility.

Consequences for Both the AI Consultant and the Car Customers

The consequences of the ethical dilemma could be immense. If nothing happens, there is no problem; what you don't know will never hurt you. However, in the event of an accident, for instance if a self-driving car malfunctions or another driver forces it into a decision that causes injury or death, there could be a legal tussle, especially for the AI consultant and the company; the consequences can be very profound, and the consultant could be charged with professional negligence. The customers would also shoulder legal liability if their self-driving cars malfunctioned after they had been assured the cars were fully tested, even for emergency situations.
They would probably pass the liability on to the manufacturer, which would itself suffer local and national ramifications for endangering lives or causing a fatality. While there are no laws banning self-driving cars, based on the principle of no crime without a law, this is where ethics comes in (The Ethics of Autonomous Cars, 2013).

The AI Consultant's Duty

The field of AI has ethical guidelines that the consultant will have to adhere to. These include:

- Robots must never be designed solely for the purpose of harming or killing human beings.
- The responsible agents are humans, not robots; robots are tools humans design to perform certain functions.
- Robots must be designed in a manner in which their security and safety are assured.
- Being artifacts, robots should never be designed to exploit vulnerabilities in users by evoking dependency or emotional responses; further, a robot should always be distinguishable from a human.
- It should always be possible to find the person legally responsible for a robot (Norman, 2011).

The AI consultant is charged with ensuring the self-driving cars are safe for road use, and this includes creating crash simulations in which life-and-death decisions must be made. The consultant is therefore duty-bound to ensure the cars are tested on, and learn from, situations in which accidents could occur and how to respond. As the EPSRC (Engineering and Physical Sciences Research Council) principles state, a human being must be responsible for a robot, on the understanding that robots are tools. The consultant must therefore ensure the life-and-death simulations are carried out to assure safety (Nyholm & Smids, 2016).
Ethical Response to the Dilemma from the Point of View of a Caring Person

The moral, caring person will first respond to the dilemma by stating that, as the principles require, all procedures must be followed to the letter; so if the AI consultant wants simulations of life-and-death situations created and tested on the cars before they are released, then this must be done, whatever the cost (Simon, 2017). The principles say that humans must be responsible for robots; therefore, the company and the AI consultant must remain responsible for the self-driving cars, including any accidents that may occur, and must ensure the cars remain safe for users, owners, and other members of the public. Just as humans give their children moral lessons, this should be done for robots too (Rini, 2017). Building robots and self-driving cars is like parenting: the robots, just like children, must be given moral lessons and a moral compass.

Relevant ACS Codes

The ACS Code of Professional Conduct is a set of ethical codes that every professional member or affiliate of the ACS (Australian Computer Society) must adhere to.
The codes are supplemented by additional guidelines; the important codes are listed below:

- The public interest is to be placed above business, personal, or sectional interests.
- Enhance the quality of life of those your work affects.
- Be honest in the presentation of skills, services, knowledge, and products.
- Work diligently and competently for all stakeholders, staff, and colleagues.
- Enhance your own professional development.
- Enhance the integrity of the ACS and the respect members have for each other (Australian Computer Society, 2014).

For the self-driving cars case, codes 1 and 3 apply. Code 1 states that every professional must place the public interest above personal, business, or sectional interests; by passing the vehicles as safe without simulating and testing the cars' responses in life-and-death decision situations, the consultant would be breaking code 1. Code 3 has to do with honesty; passing the cars as safe while knowing they have not been fully tested shows dishonesty and chicanery unbecoming of a professional.

Ethical Grounds for Defending the AI Consultant in Passing the Electric Vehicles

Despite all the testing and simulation, the self-driving cars can never achieve perfect safety; this is because other dynamics are involved, such as other drivers, pedestrians, and weather phenomena, which can result in unexpected performance. Even if the car were tested for many situations in which life-and-death decisions must be made, how should it behave when death or injury is inevitable? Self-driving is not synonymous with fully autonomous driving, and expecting it to be would push the cars beyond their limits (Berman, 2015). A common thought experiment: a group of eight children suddenly crosses the road, and the self-driving car is sure to plough through them; the only alternative is to hit an elderly beggar on the opposite kerb. What should the self-driving car do?
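To make the thought experiment concrete, here is a minimal, purely hypothetical sketch of how a crash-scenario simulation might score candidate manoeuvres and fall back to a human override when every option causes harm. The `Maneuver` type, the injury scores, and the `REQUEST_HUMAN_OVERRIDE` signal are all illustrative assumptions, not any real vehicle's API:

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    """A candidate action with its predicted harm (hypothetical scoring)."""
    name: str
    predicted_injuries: int

def choose_maneuver(options):
    """Pick a manoeuvre predicted to cause no injuries.

    If every option is predicted to cause harm, signal that a remote
    human operator should take control instead, rather than letting
    the machine make a life-and-death choice on its own.
    """
    safe = [m for m in options if m.predicted_injuries == 0]
    if safe:
        return min(safe, key=lambda m: m.predicted_injuries).name
    return "REQUEST_HUMAN_OVERRIDE"

# Simulated dilemma: both choices cause harm, so defer to a human.
scenario = [
    Maneuver("continue_straight", predicted_injuries=8),
    Maneuver("swerve_to_kerb", predicted_injuries=1),
]
print(choose_maneuver(scenario))  # REQUEST_HUMAN_OVERRIDE
```

The point of the sketch is the fallback branch: when no harm-free option exists, the decision is escalated to a responsible human, consistent with the principle that humans, not robots, are the responsible agents.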
Whatever it chooses, the car will eventually be involved in, or cause, accidents when they are unavoidable (MIT Technology Review, 2015). Further, accidents cannot be predicted, which is why they are called accidents, and some scenarios have a next-to-remote chance of occurring in real life, even for human drivers (Lin, 2017); responses to crashes can be updated as they happen (if they happen), so the consultant would be fully justified in passing the cars as safe for real-life use.

Recommendation to Overcome the Dilemma

It is not possible to predict and simulate every accident situation; there are just too many variables. The best approach would therefore be to keep adjusting car behaviour and developing the algorithms based on real-life situations. The other solution is a video-game approach: as the self-driving cars drive around, in the event of a decision dilemma a car can ring technical support, where human beings can take control using the on-board cameras, override the autopilot, and make the decision, such as emergency braking or choosing a manoeuvre.

References

Australian Computer Society. (2014, April). ACS Code of Professional Conduct. Retrieved May 05, 2017, from Australian Computer Society: https://www.acs.org.au/content/dam/acs/acs-documents/ACS%20Code-of-Professional-Conduct_v2.1.pdf

Berman, B. (2015, Oct 21). Drivers Push Tesla's Autopilot Beyond Its Abilities. Retrieved May 05, 2017, from MIT Technology Review: https://www.technologyreview.com/s/542651/drivers-push-teslas-autopilot-beyond-its-abilities/

Lin, P. (2013). The ethical dilemma of self-driving cars. Retrieved May 05, 2017, from TED-Ed: https://ed.ted.com/lessons/the-ethical-dilemma-of-self-driving-cars-patrick-lin

Lin, P. (2017, Apr 03). Robot Cars and Fake Ethical Dilemmas. Retrieved May 05, 2017, from Forbes: https://www.forbes.com/sites/patricklin/2017/04/03/robot-cars-and-fake-ethical-dilemmas/#13a2660413a2

MIT Technology Review. (2015, Oct 22). Why Self-Driving Cars Must Be Programmed to Kill. Retrieved May 05, 2017, from MIT Technology Review: https://www.technologyreview.com/s/542626/why-self-driving-cars-must-be-programmed-to-kill/

Norman, J. (2011). Ethical Principles for Designers, Builders and Users of Robots. Retrieved Apr 05, 2017, from History of Information: https://www.historyofinformation.com/expanded.php?id=4109

Nyholm, S., & Smids, J. (2016). The Ethics of Accident-Algorithms for Self-Driving Cars: An Applied Trolley Problem? Ethical Theory and Moral Practice, 1275-1289.

Rini, R. (2017, Feb 22). Raising good robots. Retrieved May 04, 2017, from Aeon: https://aeon.co/essays/creating-robots-capable-of-moral-reasoning-is-like-parenting

Simon, M. (2017, March 13). To Make Us All Safer, Robocars Will Sometimes Have to Kill. Retrieved May 05, 2017, from Wired: https://www.wired.com/2017/03/make-us-safer-robocars-will-sometimes-kill/

The Ethics of Autonomous Cars. (2013, October 08). Retrieved May 05, 2017, from The Atlantic: https://www.theatlantic.com/technology/archive/2013/10/the-ethics-of-autonomous-cars/280360/
