Why do we struggle with AI?

Yes, AI, as in Artificial Intelligence. Is it or can it be as bad as we think? Or are our negative thoughts toward it simply biased from science fiction?

I mean, the ability of a computer to accept 1's and 0's in the form of programming code and turn that into a self-learning, better yet, a self-thinking machine is fantastic to me. But are we wrong in assuming its potential? Great minds like Stephen Hawking and Elon Musk have publicly said that we need to be particularly careful with AI. But have we even achieved true AI yet?

To answer that, let's look at what we have right now, at least as public information. I'll focus on the biggies, the first being Google's DeepMind. They were in the news recently because of a program they call AlphaGo, which was tasked with playing the world's greatest Go player in a series to see if machine can beat man. And crazily enough, it did! To be fair, it did lose a game as well. If you don't know what Go is, it's an ancient Chinese game somewhat similar to chess in its strategic thinking. The software uses machine learning, which is something much too advanced for me to talk about here, mostly because it's witchcraft to me. But essentially, it uses complex algorithms to analyze and make decisions based on data it's observed. Really cool stuff.
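To demystify the "witchcraft" a little, here's a minimal toy sketch of the core idea: a program observes example data, adjusts some numbers to fit it, and then makes predictions about inputs it hasn't seen. This is ordinary line-fitting by gradient descent, not anything resembling AlphaGo's actual deep neural networks, but the observe-adjust-predict loop is the same flavor.

```python
# Toy "learning from observed data": fit a line y = w*x + b to example
# points, then predict an unseen input. Real systems like AlphaGo use
# deep neural networks and vastly more data, but the loop is similar.

def fit_line(points, steps=5000, lr=0.01):
    """Learn slope w and intercept b by gradient descent."""
    w, b = 0.0, 0.0
    n = len(points)
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w and b.
        grad_w = sum(2 * (w * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in points) / n
        # Nudge the parameters in the direction that reduces the error.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# "Observed" data following y = 2x + 1.
data = [(0, 1), (1, 3), (2, 5), (3, 7)]
w, b = fit_line(data)
prediction = w * 10 + b  # predict for an input the program never saw
```

After training, `w` and `b` land close to 2 and 1, so the prediction for 10 comes out near 21; nobody told the program the rule, it inferred it from the examples.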

But this "AI" was built specifically to learn and play the game Go, so I wouldn't be too worried about it taking over the internet and turning those flying quadcopters against us (garbage Skynet reference... I apologize).

Another fairly popular "AI"-type project is IBM's Watson. Watson, cleverly named I might add, is a software system that "understands" natural language and uses algorithms to answer rather complex questions. It had access to all of Wikipedia (offline), competed on Jeopardy!, and actually won! But that's not AI. Watson can analyze "unstructured" data, "learn" subjects, and answer questions about them, but it cannot "think". Again, not AI, but some impressive technology.
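For a feel of what "answering questions over unstructured data" can mean at its crudest, here's a hypothetical sketch: score each passage of text by how many words it shares with the question and return the best match. Watson's real pipeline (DeepQA) is enormously more sophisticated, but the retrieve-score-answer shape is the same.

```python
import string

# A tiny "unstructured" corpus: plain sentences, no database schema.
passages = [
    "Go is an ancient Chinese board game of territory and strategy.",
    "Watson is a question answering system developed by IBM.",
    "DeepMind is a British AI company acquired by Google in 2014.",
]

def words(text):
    """Lowercase words with surrounding punctuation stripped."""
    return {w.strip(string.punctuation) for w in text.lower().split()}

def answer(question):
    # Return the passage sharing the most words with the question.
    q = words(question)
    return max(passages, key=lambda p: len(q & words(p)))

print(answer("Who developed Watson?"))
```

Asking "Who developed Watson?" surfaces the IBM sentence because it shares the words "developed" and "watson" with the question; no understanding is involved, which is exactly the point the paragraph above makes.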

So is AI just a sci-fi pipe-dream? Can we make a truly intelligent machine? Will it take over the world when it realizes that humans aren't perfect beings?

Those are some tough questions, and I think as humans who have never had to contemplate these situations before, it's not an easy thing to think about. If true AI is achieved, should it be allowed access to the internet? Should we even be trying to create AI? Are intelligent machines the worst weapon of all? Mind you, we did create the nuclear bomb, but those were different times.

I think if you put aside all the sci-fi-esque scenarios and look at the positive potential of AI, then yes, it could change a lot. It could see solutions to problems humans never could. It could end hunger, or minimize the consequences of climate change. It would also likely reduce or eliminate jobs just about everywhere, so we may need to keep an eye on that. But will it launch a nuclear war to end humanity? Well, I think we may watch too many movies.