Re-Imagining Artificial Intelligence

It is strange to think that, after 70 years of development and growth, anything could still be in its infancy, but that is the case for Artificial Intelligence (AI). To be fair, AI's evolution is much faster than the evolution of this planet's (seemingly) most intelligent life-form, humankind. As a monument to our own remarkable nature, we work to perfect consciousness in our machines. The challenge, however, isn't only to replicate our own intelligence but to build it better.

As much as our intelligence can be used for good, it can also be disruptive and destructive. If we are to continue along this path of innovation, it is critical that we instill the best of our values into the evolution of AI. If we are to create something that will share or exceed our intelligence, we must ensure that both of us can be sustained and coexist.

The current form of AI is still more like its silicon past than its humanoid future. Despite its still-primitive form, the value of and interest in the technology are growing exponentially. Advancements made by AI across many industries have led some to call its emergence the fourth industrial revolution. It is able to recognize patterns among the most abstract data. It has reaction speeds unmatched by its human creators. When we combine AI with other technologies, we see huge potential, as in medical imaging and driverless vehicles.

Machine and Deep Learning, with their inbuilt feedback loops, create autonomy for self-improvement; an AI instills its own form of practice into its performance. Deep learning does this through neural networks, which mimic an abstraction of the human brain. The field is ripe with potential; however, AI still has more in common with the plow than the farmer. Those currently invested in AI's potential know it is crucial in planting the seeds of tomorrow, but its promise does not guarantee a bountiful harvest.

If the fruits of our labor are realized, then one day we will have artificial super-intelligence.

Not long after the birth of electronic computers, with the memory of World War II still fresh and a fog of uncertainty ushered in by the Cold War, futurists predicted that super-intelligence was no more than 50 years away. Having witnessed the destructive nature of technology and its profound effect on human life, these futurists also told the first cautionary tales of a dystopian future in which humans and an artificial super-intelligence fought for dominance. As we began to enjoy the rewards of the digital age, those fears turned into popular culture and science-fiction movies; to many, true AI still finds itself among the fringe sciences. Now, more science than fiction, newer predictions from futurists say that within two to four decades we will witness the technological singularity, the moment we begin to coexist with artificial super-intelligence.

Since advancements in technology have been exponential, if we assume the current timeline, then this moment may very well be the eve of the exponential growth that takes us leaps and bounds closer to artificial super-intelligence. The question, then, is what we can assume artificial super-intelligence will really be like, and how that will affect the way we coexist with it.

If we are to envision our future, let's first clarify our present context. Surveillance capitalism, the manipulation of online user behavior, virality, and the eroding integrity of information all paint a deeply concerning picture. Our online privacy has slowly eroded over time because data has become one of the most valuable commodities on earth. When deep learning or big data analysis is conducted on our personal and private data, the insights produced can be used to manipulate our behavior, change our minds, or persuade us to accept different realities.

Poorly implemented and biased algorithms may confirm our biases, working like a quantitative sycophant, or worse, align with the biases of those controlling the technology. AI-generated media is now beginning to oversupply us with abundant sources of information, much of it with the potential to be false or misleading, and we have trouble recognizing reality amid all the noise. Couple this with bots meant to sow division and botnets propagating tsunamis of viral content across the internet. A bleak outlook, for sure, but it is quite possibly the best one from which to launch into the technological future. If there were no problems like these, that would suggest we have nowhere to progress, and that is simply not the case: the future is both a wonderful and a horribly complex place, where AI can be both wonderful and horrible. The answer lies in how we intelligently design the future and how we decide to represent ourselves in it.

With all systems (and all technology), as more complexity is added, we require more complex controls. If we learn anything from our current issues with online technology, it is that whatever we create will be imperfect, because its creators are imperfect.

Implementation in design is crucial: it is one thing to allow AI to guide a missile; it is altogether different to allow it to choose the target. We must plan the next evolution of AI while respecting the fact that it will at some point become smarter than us and may predict our behavior, outwitting us at every turn.

The next phase of AI is the development of emotional intelligence, the sentient aspect of our consciousness. It's important that we teach and implement these skills more like the nurturing of a child and less like the training of a dog. The teaching model should still fit the identity of the AI: the consequence of bad behavior might be the awareness of being powered down, while good behavior might reward the AI with hardware upgrades. At some point, we may need to shed our control over AI completely, as it evolves to understand the feeling of oppression.

Another lesson we learn from our modern digital context is that information integrity has become challenged. The truthfulness and trustworthiness of the information presented to us have been placed into question; many different actors, motivated by various aspirations, labor to inject uncertainty and mistrust into our world, which in effect has warped our reality. On a side note, this is an exciting opportunity for AI, which may become the new arbiter of truth online. However, it also provides a cautionary example: should AI achieve super-intelligence, it is not inconceivable that it could manipulate and warp the human perspective of reality for myriad reasons and motivations. This is a twofold problem: one part is the Observer Effect; the other is the increasing reliance humans place on technology. If we offload the use of our own minds to technology, we become more out of practice and machines more in practice, making it all the more difficult to discern what is true and real.

To combat this, we must solve some human problems and technological ones together. Technologically, we may need to design observability into the conscious stream of artificial super-intelligence. As for humankind, we should work to expand our critical thinking and decision-making skills and stay relatively close, cognitively, to AI before it outpaces us. We should utilize AI to teach us more or to increase our own cognitive efficiency. Furthermore, once it surpasses us, we may want to embrace the idea of ego-suppression and take honor in knowing we were the proud parents who created a bold new future.

Soryu Forall — the lead teacher at the Monastic Academy — shares that the most damage done in the world is by good people who have good intentions but who have not done the spiritual work needed. It is like doing the bench-press with incorrect form: showing up at the gym every day, working hard, bench-pressing again and again, until eventually you cause irreversible damage to your shoulders, essentially doing the opposite of what you hoped to achieve. Think of plastics and pesticides. Great ideas, right?

When it comes to technology and innovation, these questions must be asked, and answered. Think of Oppenheimer and Einstein and their spiritual journeys through the making of the atomic bomb. What was it that they were really doing? In relation to the bomb, Einstein put it perfectly when he said, "it is impossible to know the effects of your actions." That is all the more reason that when inventing and creating new technologies, we must tread softly, constantly check in, use feedback, and be honest with ourselves, not letting our goals or desired outcomes get in the way of reality (which happens all too often!).

The name Artificial Intelligence troubles me. Why, in my quest for truth, would I turn to something artificial? But, I suppose, we must work with what is here. Returning to Keynes' projection — that automation would lead to humans working less, a three- or four-day work-week perhaps — this could be nice. But instead (Keynes is probably laughing at us from his grave), we have created 'Bullshit Jobs' to keep us busy, and tired.

If we can rid ourselves of the bullshit, AI has the potential to 'free' us up to develop self-awareness, explore our true gifts, be creative, express ourselves, deeply connect, and seek freedom. Can you imagine that? Everyone in the world only 'working' three days per week! Now, that would be something … essentially, and rather paradoxically, it would enable us to return to nature, and explore our true nature. If only …


Lead Author: Anthony Gatto

Anthony Gatto is convinced we can build a better and brighter future. He believes the opportunity to make a better future starts today, empowered by knowledge, collaboration, and diversity. As a futurist, he enjoys discussing cyberethics, information security, and the dynamic effects of disruptive technology.

Anthony is an IT professional and consultant with knowledge of data centers, broadcast systems, TV technology, and information security. He is also a hobby hacker and 'OSINT for Good' advocate who enjoys the occasional CTF challenge. He can be reached on LinkedIn and Twitter.

Anchor Author: Daniel Rudolph (final 4 paragraphs)

Daniel Rudolph is interested in exploring alternative, experiential learning opportunities for people of all ages. He is passionate about forming community and building public spaces for meaningful, transformational gathering. Currently he is spending a lot of his time learning juggling and facilitating gatherings. He also enjoys writing and sharing poetry. Daniel is a curious and playful person and is always open to creative collaborations.



Can you imagine a more harmonious future?
