Losing one’s job to a robot is no fun, but the solution is not to cling to jobs as they are. It is to change the way we think about work and to understand what we can do differently, so that machines do not end up taking all of it.
We shape our technologies at the moment of conception, but from that point forward, they shape us. We humans designed the telephone, but from then on, the telephone influenced how we communicated, conducted business, and conceived of the world. We also invented the automobile, but then rebuilt our cities around automotive travel.
Artificial intelligence adds another twist. After we launch technologies related to AI and machine learning, they not only shape us, but they also begin to shape themselves. We give them an initial goal, and then give them all the data they need to figure out how to accomplish it. From that point forward, we humans no longer fully understand how an AI program may be processing information or modifying its tactics.
AI is not conscious enough to tell us. It simply tries everything and keeps whatever serves the initial goal, regardless of other consequences. AI will even try things that we humans would consider illegal or socially inappropriate.
On some social media platforms, for example, algorithms designed to increase traffic to a particular website might do so by showing users pictures of their ex-lovers having fun. People do not want to see such images. Yet through trial and error, the algorithms have discovered that showing us pictures of our exes increases our engagement: we tend to click on those pictures to see what our exes are up to, and we are even more likely to do so if we are jealous that they have found a new partner. The algorithms do not know why this works, and they do not care whether suggesting such photos has consequences for the welfare of people’s marriages and families. They are only trying to maximize whichever metric we have instructed them to pursue.
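This trial-and-error pursuit of a single metric can be sketched with a toy epsilon-greedy "bandit". The image categories and click rates below are invented for illustration; the point is that the program ends up favouring the option that maximizes clicks without any notion of why it works or what harm it causes.

```python
import random

# Invented click rates for three kinds of images. The algorithm never sees
# these numbers; it only observes whether a shown image gets clicked.
TRUE_CLICK_RATE = {"landscape": 0.05, "puppy": 0.10, "ex_partner": 0.30}

def choose(counts, rewards, epsilon=0.1):
    """Occasionally explore a random option; otherwise exploit the best so far."""
    if random.random() < epsilon:
        return random.choice(list(counts))
    return max(counts, key=lambda k: rewards[k] / counts[k] if counts[k] else 0.0)

def run(steps=10_000, seed=42):
    random.seed(seed)
    counts = {k: 0 for k in TRUE_CLICK_RATE}
    rewards = {k: 0 for k in TRUE_CLICK_RATE}
    for _ in range(steps):
        arm = choose(counts, rewards)
        clicked = random.random() < TRUE_CLICK_RATE[arm]
        counts[arm] += 1
        rewards[arm] += int(clicked)
    return counts

if __name__ == "__main__":
    # After enough trials, "ex_partner" images dominate the feed,
    # purely because they maximize the engagement metric.
    print(run())
```

Nothing in the loop encodes "show people their exes"; that behaviour emerges solely from the reward signal, which is the article's point about optimizers pursuing a metric regardless of side effects.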
The same is true of the jobs we do to earn a living. Artificial intelligence treats everything as a computational challenge. As long as it can compute better than you, it will not take pity on you and hide that fact just to keep you from losing your job.
That is why the initial commands we supply are so vital. Whatever values we embed, be they efficiency, growth, security, or compliance, are the ones the machine will pursue relentlessly, to the exclusion of everything else.
Why not outline the no-go areas?
Through commands? To be honest, no one is in a position to set such boundaries. The things we want our robots to do, such as driving in traffic, translating languages, or collaborating with humans, are so complex that it is impossible to devise explicit instructions covering every possible situation. Instead, computer scientists feed the algorithms reams and reams of data and let them recognize patterns and draw conclusions themselves. This means no one is really taking precautionary measures to protect humans from the side effects of artificial intelligence.
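The contrast between explicit rules and learned patterns can be made concrete with a tiny example. The labelled points below are invented; the program contains no hand-written "if ... then ..." rules at all. It simply copies the label of the nearest known example, a 1-nearest-neighbour classifier in miniature.

```python
# Invented training examples: (features, label) pairs. We never tell the
# program what "spam" means; it infers the pattern from the data alone.
EXAMPLES = [
    ((1.0, 1.0), "spam"),
    ((1.2, 0.8), "spam"),
    ((4.0, 4.2), "ham"),
    ((3.8, 4.0), "ham"),
]

def classify(point, examples=EXAMPLES):
    """Label a new point by copying the closest labelled example."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    _, label = min(examples, key=lambda ex: dist(point, ex[0]))
    return label

print(classify((1.1, 0.9)))  # spam
print(classify((4.1, 3.9)))  # ham
```

Scaled up from four examples to billions, this is the design trade-off the paragraph describes: the rules are never written down anywhere, so there is no rulebook to inspect or to fence off with boundaries.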
Some computer scientists are already arguing that AI should be granted the rights of living beings rather than being treated as a mere instrument or slave. We are moving into a world where we care less about how other people regard us than about how AI does. A robot with an uncannily human-like appearance recently advanced one step closer to human status: it was granted citizenship of Saudi Arabia at the tech summit Future Investment Initiative (FII).
Named "Sophia," the robot, created by Hanson Robotics (HR), has a pale-skinned face whose highly mobile, expressive features can display a range of emotions. The company's "latest and most advanced robot," according to a statement on the HR website, took to the stage at FII on Oct. 25, 2017, to address hundreds of attendees in Riyadh, Saudi Arabia, and to announce her newly acquired citizenship, the first ever granted to a robot, the BBC reported.
Without human intervention, technology will become the accepted premise of our shared value system: the starting point from which everything else must be inferred. AI systems are already employed to evaluate teacher performance, mortgage applications, and criminal records, and they make decisions just as biased and prejudicial as the humans whose past decisions they were trained on. However, the criteria and processes they use are deemed too commercially sensitive to be revealed, so we cannot open the black box and analyse how to adjust their biases, even when those biases have social implications. Those judged unfavourably by an algorithm have no means to appeal the decision or learn the reasoning behind their rejection. Many companies could not ascertain their own AI’s criteria anyway.
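How a system inherits bias from the decisions it is trained on can be shown with a deliberately simple toy. The loan "history" below is invented, and the memorize-the-majority-outcome "model" is a crude stand-in for real training, but the mechanism is the one the paragraph describes: the program has no notion of fairness and simply reproduces whatever pattern the historical decisions contain.

```python
from collections import Counter, defaultdict

# Invented history of past loan decisions, biased against postcode "B".
HISTORY = [
    ({"postcode": "A", "income": 30}, "approve"),
    ({"postcode": "A", "income": 25}, "approve"),
    ({"postcode": "B", "income": 40}, "reject"),
    ({"postcode": "B", "income": 45}, "reject"),
]

def train(history):
    """'Learn' by memorizing the most common past outcome per postcode."""
    outcomes = defaultdict(Counter)
    for applicant, decision in history:
        outcomes[applicant["postcode"]][decision] += 1
    return {pc: c.most_common(1)[0][0] for pc, c in outcomes.items()}

model = train(HISTORY)
# A well-qualified applicant from postcode "B" is rejected purely because
# past decisions rejected that postcode, even though income was higher.
print(model["B"])  # reject
```

If the trained model's internals were kept secret, as the article notes real systems' criteria are, the rejected applicant would have no way to discover that the decisive factor was postcode rather than income.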
The individuals most affected will likely be those working in lower-wage occupations. AI-driven automation could also widen the wage gap between less-educated and more-educated workers, fuelling economic inequality.
Some experts argue that it is not just lower-paying jobs that will be threatened by AI and other agents of automation. At an MIT conference, one researcher, Mary “Missy” Cummings, director of the Humans and Autonomy Lab at Duke University, noted that some plum positions are also on the endangered species list. Take commercial pilots, for example. These pilots, she explained, “touch the stick for three to seven minutes per flight and that’s on a tough day.” The rest of the time, the flight is literally on autopilot. It does not take a genius to see which way that wind is blowing.
In my next article, I will cover some practical steps you can take to shield yourself from the blowing winds of AI.