Artificial intelligence is everywhere now: Siri uses it, self-driving cars use it, and it is progressing rapidly. Science fiction often portrays AI as robots with human-like characteristics, but in reality artificial intelligence encompasses everything from the search algorithms Google uses to autonomous weapons.
The artificial intelligence of today is known as narrow AI, because it is designed to perform a single narrow task, such as facial recognition, driving a car, or performing internet searches. The long-term goal of many researchers, however, is to create AGI, also called strong AI. Narrow AI can outperform humans at whatever specific task it was built for, like solving equations or playing chess, but AGI would aim to outperform humans at nearly every cognitive task.
Why is so much emphasis placed on AI safety research?
In the near term, the goal of keeping AI's impact on society beneficial motivates research in many areas, from law and economics to technical topics such as validity, verification, security, and control. It may be little more than a nuisance if your laptop crashes or gets hacked, but the stakes rise sharply once you consider how AI already runs parts of everyday life: artificial intelligence systems help control your car, guide airplanes, automate trading systems, and manage your neighborhood power grid. Another pressing safety issue is preventing an arms race in lethal autonomous weapons.
The long-term picture is less clearly defined, but an important question that needs to be asked time and again is this: what happens if the quest for strong AI succeeds and an AI system comes to outperform humans at every cognitive task?
Let’s look at the possibilities. A sufficiently smart AI system could redesign itself for constant self-improvement, triggering a kind of intelligence explosion that leaves human intelligence far behind. By inventing revolutionary technologies, such a superintelligence might help eradicate disease, war, and poverty, which is why the creation of strong AI could be the most important event in human history. Many experts, however, have expressed genuine concern: unless we learn to align an AI system's goals with our own before it becomes superintelligent, the outcome may not be one we want.
Whether strong AI will ever be achieved remains an open question, and some people insist that no matter how AI progresses, it is guaranteed to be beneficial for everybody; that optimism, though, is exactly what safety research should not take for granted.