Time to Embrace Artificial Intelligence & Machine Learning

“Now is an exciting time to be in the cloud.” That’s a mantra championed by one of the executives at my company, Ensono, and echoed across hundreds of thousands of blogs, memes, magazine covers, and countless other media pieces on the topic. In my last blog post, I talked about the next giant step: jumping from a focus on Infrastructure-as-a-Service to a focus on the application and the data itself. But what happens to that data after it reaches a data platform? How, exactly, do businesses, academic institutions, and basement-bound geeks like myself draw relevant insights from the data we’re working so hard to collect and process in the cloud? There are a few answers to that question, but I’m going to narrow the focus to two data-centric, decision-making technologies that let the menagerie of data geeks out in the IT space achieve more with the same data than they could in the past: Artificial Intelligence (AI) and Machine Learning (ML).

AI and Machine Learning: Isn’t That Asimov’s Thing?

Isaac Asimov famously published his “Three Laws of Robotics” in 1942, governing how his fictional robots should conduct themselves. In 1950, Alan Turing proposed what was later dubbed “the Turing Test,” which gauges whether a machine can exhibit behavior indistinguishable from a human’s. Does that mean that predicting spending trends, uncovering new medical insights, and deriving Twitter sentiment from data depends on whether a machine is sentient or can form its own thoughts? Hardly.

Modern AI’s decision-making and environmental-analysis abilities derive from a set of programmatic guardrails whose output feeds back into the system so that it “learns” through repetition. An example is talking to a virtual assistant like Siri, Cortana, or Google Assistant: ask “How old is the President?” and then follow up with “Where was he born?” A smart assistant knows the second question’s subject refers back to the preceding question about the president and answers “Queens, NY.” AI technologies are simply systems that analyze variables in their environment, such as speech, text, and images, and make inferences based on the code given to them. Over time, the system stores more and more information about the input it receives and, guided by that code, makes increasingly “learned” decisions.
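To make the “context carries over” idea concrete, here is a toy sketch in Python. The knowledge base, the subject-tracking rule, and the canned answers are illustrative assumptions of mine, not how any real assistant is implemented:

```python
# Toy sketch of conversational context: the assistant remembers the
# subject of the previous question so a pronoun like "he" can be
# resolved. All facts and rules here are illustrative placeholders.
FACTS = {
    "president": {"age": "75", "birthplace": "Queens, NY"},  # illustrative values
}

last_subject = None  # conversational state the assistant "remembers"

def answer(question: str) -> str:
    global last_subject
    q = question.lower()
    # If the question names a known subject, update the context.
    for subject in FACTS:
        if subject in q:
            last_subject = subject
    # Pronouns like "he"/"she" fall back to the remembered subject.
    if last_subject is None:
        return "I don't know who you mean."
    if "old" in q:
        return FACTS[last_subject]["age"]
    if "born" in q:
        return FACTS[last_subject]["birthplace"]
    return "I can't answer that."

answer("How old is the President?")  # sets context to "president"
print(answer("Where was he born?"))  # resolves "he" -> Queens, NY
```

The point is only that “learning” the context is state plus code: nothing sentient, just a system that retains information between inputs.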

Machine learning is a sub-discipline of AI. It uses statistics and other mathematical techniques to recognize patterns, then feeds those patterns to other processes or back into itself. Typically, you “train” a model on a sample of data and then apply that trained model to the rest of your data to generate inferences, predictions, and aggregated observations. ML algorithms adjust themselves in order to achieve a stated objective. Think of it this way: ML is like a macro-manager who says, “I have an objective, Joey. It’s up to you how you achieve it.” The more traditional programming approach is like a micro-manager: “I have an objective, Joey. These are the steps you MUST follow to achieve it.”
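The train-then-infer loop above can be sketched in a few lines of plain Python. The data and the nearest-centroid “model” here are illustrative assumptions; a real project would reach for a library such as scikit-learn, but the shape of the workflow is the same:

```python
# Minimal from-scratch sketch: fit a model on a labeled sample, then
# apply it to data it has never seen to generate predictions.

def train(samples):
    """Learn one centroid (average point) per label from training data."""
    sums, counts = {}, {}
    for features, label in samples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label whose centroid is closest to the new point."""
    def dist2(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist2(model[label]))

# "Train" on a small labeled sample...
training_data = [
    ([1.0, 1.2], "small"), ([0.8, 1.0], "small"),
    ([5.0, 5.5], "large"), ([5.2, 4.8], "large"),
]
model = train(training_data)

# ...then generate predictions for unseen data.
print(predict(model, [0.9, 1.1]))  # -> small
print(predict(model, [5.1, 5.0]))  # -> large
```

Notice the macro-manager flavor: we never wrote rules saying what “small” or “large” means; the model derived that from the examples.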

Machine Learning and the broader AI umbrella work together to create new ways to look at data already collected, streaming data (like social media, financial markets, etc.), and projected data that traditional programs have a hard time processing and analyzing. How is this applied, and why is it important? I want to give two real-world examples of why we should pivot toward embracing AI and ML in the cloud industry.

How We’ve Seen AI & ML Change the World Already

Take a look at Google’s new Duplex product. It blends AI and ML technologies to produce a human-like interaction (AI), replete with natural pauses, that schedules appointments with humans over the phone, driven by either a natural speech recognition system (ML) or typed input to Google Assistant. Think about how you interact with an interactive voice response system when you call a customer service line <insert Comcast nightmare call stories here>. Now imagine a system that has learned from every problem ever reported to it, feeding that understanding back into better answers to your queries, all fronted by a very human-sounding voice.

Google’s Duplex is nifty stuff, but how will it make our lives, or others’, better? Consider Microsoft’s use of ML to help those afflicted with diabetic retinopathy. Microsoft partnered with a medical imaging vendor that collects images of eye exams, and with ophthalmologists, to analyze over 250,000 eye exams. The ML code scored each exam against baseline images of healthy eyes, quantifying how far each one deviated from what “normal” looks like to an ophthalmologist. Based on that initial scoring (ML), exam images with suspicious indicators of diabetic retinopathy were sent on to a neural network (a sub-field of ML). This pre-screen-then-classify pipeline identifies sight-threatening diabetic retinopathy with 97.9% accuracy!
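The two-stage idea, cheaply scoring every exam against a “normal” baseline and sending only the suspicious ones to the expensive second stage, can be sketched with synthetic data. Everything here (the features, the threshold, the fake exams) is an illustrative assumption of mine, not Microsoft’s actual pipeline:

```python
# Toy two-stage screening: stage 1 scores each "exam" by its distance
# from a healthy baseline; only exams above a cutoff are forwarded to
# stage 2 (the neural network / a specialist). Synthetic data only.
import math
import random

random.seed(0)

NUM_FEATURES = 8
BASELINE = [0.0] * NUM_FEATURES  # what a "normal" exam looks like

def synth_exam(shift):
    """Fake exam: healthy exams cluster near the baseline."""
    return [random.gauss(shift, 0.5) for _ in range(NUM_FEATURES)]

exams = [synth_exam(0.0) for _ in range(100)]   # healthy-looking
exams += [synth_exam(2.0) for _ in range(10)]   # abnormal-looking

def score(exam):
    """Stage 1 (cheap): distance from the healthy baseline."""
    return math.sqrt(sum((e - b) ** 2 for e, b in zip(exam, BASELINE)))

SUSPICIOUS = 3.0  # illustrative cutoff
flagged = [e for e in exams if score(e) > SUSPICIOUS]

# Stage 2 (expensive) only ever sees the flagged exams.
print(f"{len(flagged)} of {len(exams)} exams sent to stage 2")
```

The design win is throughput: the cheap scorer filters out the overwhelmingly normal majority, so the heavyweight model spends its cycles only where something looks wrong.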

What’s Next?

As the examples here show, AI and ML are already affecting lives. I believe the next shift in IT is the rapid adoption, creation, and cultivation of Data Science departments. Most businesses, hospitals, and academic institutions already have a wealth of data they have generated or will generate. Sifting it through trained AI and ML models will yield insights the general IT landscape has yet to experience. Whether it’s Tesla with its Autopilot self-driving capabilities, Google with Duplex’s natural speech, or Microsoft with its sight-saving ML techniques, AI is here to stay and holds a treasure trove of opportunities for those willing to invest the time and effort to develop Data Science practices. Personally, I see the future IT landscape as data-centric, and I can only begin to imagine how the next implementation of AI/ML might improve my life and others’.
