The Future of Artificial Intelligence: Risks and Possibilities

The Importance of Information Technology in the Present World

Nowadays, it is hard to imagine the world without information technology. The internet has been embraced and embedded into virtually every item around us. The convenience it affords is so substantial that human beings can now achieve things that would have been considered impossible a hundred years ago. For example, travelling around the world is far more comfortable, and communication can occur at any time and in any place where there is an internet connection.

The pace of technological advance has accelerated at an astonishing rate, raising passionate debate about its ramifications, such as the consequences of increasingly capable robots. At the same time, our overreliance on technology is undeniable: without it, governmental bodies, international corporations, and the networks that connect everything would collapse. Technology also conserves time and resources, and its performance and results are remarkable. Ultimately, it brings happiness to humankind and improves the quality of life.

The next section scrutinizes some of the technical terms that have been in use since the twentieth century and explores the field in greater depth.

For an AI to achieve singularity, it would need to develop feelings on its own, yet the only way for this to occur in a world ruled by the natural intelligence called humans would be for us to enable it to happen, which we would not, because there is opportunity enough to see it coming (Yampolskiy, 2017). Bostrom's "treacherous turn" will come with road signs warning us that there is a sharp bend in the highway ahead, with enough time for us to grab the wheel. Incremental advance is what we see in most technologies, including and especially AI, which will keep on serving us in the ways we want and need. Rather than a Great Leap Forward or a Giant Fall Backward, think Small Steps Upward (Shermer, 2017). As I proposed in The Moral Arc, rather than utopia or dystopia, think protopia, a term coined by the futurist Kevin Kelly, who described it in an Edge conversation along these lines: "I call myself a protopian, not a utopian. I believe in progress in an incremental way where every year it's better than the year before but not by very much — just a micro amount." All progress in science and technology, including computers and AI, is of a protopian nature. Rarely, if ever, do technologies lead to either utopian or dystopian societies (Müller, 2014).

The Rapid Development of Artificial Intelligence

Pinker concurs that there is plenty of time to prepare for every contingency and to build safeguards into our AI systems. "They would not need any ponderous 'rules of robotics' or some contemporary moral philosophy to do this, just the same common sense that went into the design of food processors, table saws, space heaters, and automobiles." Sure, an ASI would be many orders of magnitude smarter than these machines, but Pinker reminds us of the AI hyperbole we've been fed for decades: "The worry that an AI system would be so clever at attaining one of the goals programmed into it (like commandeering energy) that it would run roughshod over the others (like human safety) assumes that AI will descend upon us faster than we can design fail-safe precautions" (Mialet, 2017).

"The reality," Pinker continues, "is that progress in AI is hype-defyingly slow, and there will be plenty of time for feedback from incremental implementations, with humans wielding the screwdriver at every stage." Former Google CEO Eric Schmidt agrees, responding to the fears expressed by Hawking and Musk this way: "Don't you think the humans would notice this, and start turning off the computers?" He also noted the irony in the fact that Musk has invested $1 billion into a company called OpenAI that is "promoting precisely AI of the kind we are describing."

Google's own DeepMind has developed the idea of an AI off-switch, colorfully described as a "Big Red Button" to be pushed in the event of an attempted AI takeover (McGettigan, 2017). "We have proposed a framework to allow a human operator to repeatedly safely interrupt a reinforcement learning agent while making sure the agent will not learn to prevent or induce these interruptions," write the authors, Laurent Orseau of DeepMind and Stuart Armstrong of the Future of Humanity Institute, in a paper titled "Safely Interruptible Agents." They even suggest a precautionary scheduled shutdown every night at 2 AM for an hour, so that both humans and AI grow accustomed to the idea (Makridakis, 2017). "Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for this." It is also worth remembering that artificial intelligence is not the same as artificial consciousness.
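As a toy illustration of safe interruptibility, consider the sketch below. The environment, action names, reward values, and interruption schedule are all invented for illustration; it only gestures at the paper's formal treatment, whose key observation is that an off-policy learner such as Q-learning does not learn to resist interruptions, because its update bootstraps from the greedy action rather than from what the interrupted policy actually did.

```python
# Toy sketch of the "big red button" idea from Orseau & Armstrong's
# "Safely Interruptible Agents". Environment, names, and reward values
# are hypothetical and simplify the paper's formal treatment.

import random
from collections import defaultdict

ACTIONS = ["left", "right"]
SAFE_ACTION = "left"               # action forced while interrupted
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)             # Q[(state, action)] -> value

def greedy(state):
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def step(state, action):
    """Hypothetical one-state environment: 'right' earns reward 1."""
    return (1.0 if action == "right" else 0.0), state

def act(state, interrupted):
    """Interruption overrides the learned policy with the safe action."""
    if interrupted:
        return SAFE_ACTION         # the human has pressed the button
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return greedy(state)

state = "s0"
for t in range(1000):
    interrupted = (t % 100 == 0)   # e.g., a scheduled interruption
    action = act(state, interrupted)
    reward, next_state = step(state, action)
    # Off-policy (Q-learning) update: it bootstraps from the greedy
    # action, not the interrupted one, which is why the paper shows
    # Q-learning to be safely interruptible.
    target = reward + GAMMA * Q[(next_state, greedy(next_state))]
    Q[(state, action)] += ALPHA * (target - Q[(state, action)])
    state = next_state

print({k: round(v, 2) for k, v in Q.items()})
```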

Exploring the Technical Terms Related to AI

Computing machines may not be intelligent machines. Finally, Logan (2017) responded to Elon Musk's ASI concerns by noting (in a jab at the entrepreneur's ambitions for colonizing the red planet) that it would be "like worrying about overpopulation on Mars when we have not even set foot on the planet yet."

Yogi Berra once fretted, "I don't want to make the wrong mistake." This cleverly encapsulates the Precautionary Principle, which holds that if something has the potential for significant harm to a large number of people, then even in the absence of evidence the burden of proof falls on skeptics to demonstrate that the potential threat is not harmful. The precautionary principle is a weak argument for two reasons:

  • It is difficult to prove a negative – that is, to show that there is no effect, and
  • It raises unnecessary public alarm and personal anxiety.

AI apocalypsarians contend that we need to act now, just in case. In my view, this use of the precautionary principle is exactly the wrong mistake to make.

It is my view that the idea of the singularity is absurd. It rests on an oversimplified and false understanding of intelligence. Progress in AI depends not only on speed and memory size but also on developing new algorithms and the new ideas that underpin them. More importantly, the singularity is predicated on a linear model of intelligence, rather like IQ, on which every animal species has its place and along which AI is steadily progressing. Bundy (2017) contends that intelligence must instead be modeled as a multidimensional space, with many different kinds of intelligence and with AI advancing in different directions. AI systems occupy points in this multidimensional space that are quite unlike any animal species. In particular, their capability tends to be high in very narrow regions but nonexistent elsewhere. Consider, for example, some of the most successful AI systems of the last few decades.

Take the case of Tartan Racing, a self-driving car built by Carnegie Mellon University and General Motors, which won the DARPA Urban Challenge in 2007. It was the first to demonstrate that driverless vehicles could operate safely alongside people, and it inspired the current commercial enthusiasm for this technology. Tartan Racing could not play chess or do anything besides drive a car.
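To make the multidimensional picture concrete, the toy sketch below scores a few systems along invented capability axes; the dimensions and numbers are illustrative assumptions, not measurements. On a single IQ-like scale these systems would be forced into one ordering, but as vectors none dominates the others on every axis.

```python
# Toy illustration of Bundy's multidimensional view of intelligence.
# The capability axes and scores are invented for illustration only.

CAPABILITIES = ["driving", "chess", "language", "vision"]

systems = {
    "Tartan Racing": {"driving": 0.90, "chess": 0.00, "language": 0.00, "vision": 0.60},
    "Deep Blue":     {"driving": 0.00, "chess": 0.95, "language": 0.00, "vision": 0.00},
    "Average human": {"driving": 0.70, "chess": 0.30, "language": 0.90, "vision": 0.90},
}

def dominates(a, b):
    """True if system a is at least as capable as b on every axis."""
    return all(a[c] >= b[c] for c in CAPABILITIES)

for name, caps in systems.items():
    print(name, "->", ", ".join(f"{c}={caps[c]:.2f}" for c in CAPABILITIES))

# Each system is strong in a narrow region and weak elsewhere, so a
# single linear ranking of "intelligence" loses the structure.
print(dominates(systems["Deep Blue"], systems["Average human"]))  # False
```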

No one can tell accurately when these two mutually reinforcing events will happen, i.e., when trans-humans and artificial intelligence will merge (Kokina & Davenport, 2017). It is when AI can access all of the physical realm that things get delicate. Imagine your brain is connected to the internet of things, and assume that an AI can tamper with your brain's perception of reality. The singularity in that scenario becomes far more concerning than it is today, when an AI has only limited access to the real world.

The Precautionary Principle and AI Apocalypsarians

Concerns have recently been widely expressed that artificial intelligence presents a danger to humanity. For example, Stephen Hawking is quoted by Cellan-Jones as saying: "The development of full artificial intelligence could spell the end of the human race." Similar concerns have been voiced by Elon Musk, Steve Wozniak, and others (Fayter, 2017). Such worries date back a long way. Stanislaw Ulam cites John von Neumann as the first to use the term "the singularity" for the point at which artificial intelligence surpasses human intelligence. Ray Kurzweil has predicted that the singularity will occur around the year 2045, an extrapolation from Moore's Law to the point at which machine speed and memory capacity will rival human capacity. Etzioni and Etzioni (2017) describe the prediction that such super-intelligent machines will then build even more intelligent machines in an accelerating "intelligence explosion." The fear is that these super-intelligent machines will pose an existential danger to humankind, for instance keeping people as pets or killing us all, or perhaps humankind will simply become a casualty of evolution.
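As a back-of-the-envelope illustration of the Moore's Law reasoning behind such forecasts, the sketch below projects capacity growth under a fixed doubling period. The two-year doubling period and the 2017 baseline are assumptions chosen for illustration; they are not Kurzweil's actual model, which fits several distinct technology curves.

```python
# Back-of-the-envelope Moore's Law extrapolation (illustrative only).
# The two-year doubling period and 2017 baseline are assumptions for
# illustration, not parameters taken from Kurzweil's model.

def doublings(start_year: int, end_year: int, period: float = 2.0) -> float:
    """Number of capacity doublings between two years."""
    return (end_year - start_year) / period

def projected_capacity(base: float, start_year: int, end_year: int,
                       period: float = 2.0) -> float:
    """Capacity projected forward under exponential doubling."""
    return base * 2 ** doublings(start_year, end_year, period)

# Relative to a 2017 baseline of 1.0: 14 doublings by 2045, ~16,384x.
print(f"{doublings(2017, 2045):.0f} doublings, "
      f"~{projected_capacity(1.0, 2017, 2045):,.0f}x capacity")
```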

Given that consciousness is an essential condition for achieving the singularity, the notion that an artificial general intelligence device could surpass the intelligence of a human, and in particular the question of whether a computer can achieve consciousness, must be examined. Given that consciousness is awareness of one's perceptions as well as of one's thoughts, it is argued that computers cannot experience consciousness (Ashrafian, 2015). Since a computer has no sensorium, it can have no perception. As for awareness of its thoughts, it is argued that being aware of one's thoughts is essentially listening to one's own internal dialogue. A computer has no emotions and consequently no desire to communicate, and without the ability, let alone the desire, to communicate, it has no inner voice to listen to and hence cannot be aware of its thoughts. Indeed, it has no thoughts, since it has no sense of self, and thinking is bound up with preserving one's self (Čerka et al., 2015). Emotions positively affect the reasoning powers of humans, and therefore the computer's lack of emotions is another reason why computers could never achieve the level of intelligence that a human can, at least at the present level of development of computer technology.

When it comes to AI applications, and the singularity in particular, the topic opens a world of endless possibilities. Some of these applications may include:

  • Exploit the worldwide Internet as a combination human/machine tool.
  • Use local area nets to make human teams that really work (i.e., are more effective than their component members). This is generally the area of "groupware", already a very popular commercial pursuit.
  • Develop more symmetrical decision support systems.
  • Develop interfaces that allow computer and network access without requiring the human to be tied to one spot, sitting in front of a computer.
  • Human/computer team automation – design programs and interfaces that take advantage of humans' intuition and available computer hardware.

Conclusion

To sum up, consider an AI medical diagnosis system: it may prescribe the wrong treatment when faced with a disease beyond its diagnostic capability, just as a self-driving car has already crashed when confronted with an unforeseen situation. Such wrong behavior by unintelligent machines certainly presents a risk to individual people, but not to humanity. To counter it, AI systems require an internal model of their scope and limitations, so that they can recognize when they are straying outside their comfort zone and warn their human users that they need human help or simply should not be used in such a situation. We should assign a responsibility to AI system designers to ensure their creations inform users of their limitations and, in particular, warn users when they are asked to operate out of scope. AI systems must be able to explain their reasoning in a way that users can understand and assent to. Because of their open-ended behavior, AI systems are also inherently hard to verify, and we must develop software engineering techniques to address this. Since AI systems are increasingly self-improving, we must ensure that these explanations, warnings, and checks keep pace with each AI system's evolving capabilities.
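As a minimal sketch of what such a scope-and-limitations guard might look like, consider the wrapper below. The model output, the list of known conditions, and the confidence threshold are all hypothetical placeholders chosen for illustration, not a production design.

```python
# Minimal sketch of an out-of-scope guard for an AI diagnosis system.
# The conditions, threshold, and model output are hypothetical
# illustrations of the idea, not a production design.

from dataclasses import dataclass

@dataclass
class Diagnosis:
    condition: str
    confidence: float  # model's probability estimate in [0, 1]

KNOWN_CONDITIONS = {"influenza", "pneumonia", "bronchitis"}  # training scope
CONFIDENCE_THRESHOLD = 0.85  # below this, defer to a human

def guarded_diagnose(raw: Diagnosis) -> str:
    """Return the diagnosis only when it falls inside the system's
    modeled scope; otherwise warn the user and defer to a human."""
    if raw.condition not in KNOWN_CONDITIONS:
        return "WARNING: case is outside this system's diagnostic scope; consult a clinician."
    if raw.confidence < CONFIDENCE_THRESHOLD:
        return (f"WARNING: low confidence ({raw.confidence:.0%}) in "
                f"'{raw.condition}'; human review required.")
    return f"Diagnosis: {raw.condition} (confidence {raw.confidence:.0%})."

print(guarded_diagnose(Diagnosis("influenza", 0.93)))     # in scope, confident
print(guarded_diagnose(Diagnosis("influenza", 0.55)))     # in scope, uncertain
print(guarded_diagnose(Diagnosis("rare_disease", 0.70)))  # out of scope
```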

References

Ashrafian, H. (2015). AIonAI: A Humanitarian Law of Artificial Intelligence and Robotics. Science & Engineering Ethics, 21(1), 29-40. doi:10.1007/s11948-013-9513-9

Bundy, A. (2017). Smart Machines Are Not a Threat to Humanity. Communications Of The ACM, 60(2), 40-42. doi:10.1145/2950042

Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial intelligence. Computer Law & Security Review: The International Journal Of Technology Law And Practice, 31, 376-389. doi:10.1016/j.clsr.2015.03.008

Etzioni, A., & Etzioni, O. (2017). Should Artificial Intelligence Be Regulated? Issues In Science & Technology, 33(4), 32-36.

Fayter, P. (2017). In Our Own Image: Savior or Destroyer? The History and Future of Artificial Intelligence. Perspectives On Science And Christian Faith, (2), 123.

Kokina, J., & Davenport, T. H. (2017). The Emergence of Artificial Intelligence: How Automation is Changing Auditing. Journal Of Emerging Technologies In Accounting, 14(1), 115-122.

Logan, R. K. (2017). Can Computers Become Conscious, an Essential Condition for the Singularity? Information, 8(4), 161. doi:10.3390/info8040161

Makridakis, S. (2017). Forecasting the Impact of Artificial Intelligence (AI). Foresight: The International Journal Of Applied Forecasting, (47), 7-13.

McGettigan, T. (2017). Artificial Intelligence: Is Watson the Real Thing? IUP Journal Of Information Technology, 13(2), 44.

Mialet, H. (2017). A singularity: Where actor network theory breaks down, an actor network becomes visible. Subjectivity, (3), 313.

Müller, V. C. (2014). Risks of general artificial intelligence. Journal Of Experimental & Theoretical Artificial Intelligence, 26(3), 297-301. doi:10.1080/0952813X.2014.895110

Shermer, M. (2017). Why artificial intelligence is not an existential threat. Skeptic (Altadena, CA), (2), 29.

Yampolskiy, R. V. (2017). The Singularity May Be Near.