
What is AI? Artificial Intelligence – A Beginner’s Guide (#010)

Artificial Intelligence (AI) is best defined as machines displaying intelligence similar to that found in nature. With humans being the highest form of natural intelligence discovered so far, AI is most commonly associated with mimicking human intelligence. However, as you will see, in some applications AI is already well beyond human capability. Ironically, once this level of capability is achieved for a particular application, it is often no longer thought of as AI at all. For example, if you found this article through a search engine then you were helped by some form of AI. Did you know that? Would you have said that you were searching using AI?


Where is AI already being used?


The short answer is anywhere that AI can analyse its environment (in the form of data) and take action to maximize its chance of success, i.e. achieve its goal(s). High-profile examples include driverless vehicles, medical diagnosis, targeted online advertising, fraud detection, gaming, facial and speech recognition, online assistants, and even creating art.


My own interest in AI stems from the point above, i.e. that AI maximizes its chances of success. In my experience this is one of the “most human” traits. Achieving goals is the transformation of your current, unsatisfactory state or circumstance into what you perceive will be a better one (I cover this in detail in my Goal Achievement Sheet, which you can get for free by subscribing to my blog). This is at the core of AI.


The actions to achieve the goal are defined by an algorithm. An algorithm is a set of unambiguous instructions that a computer/machine can execute. The algorithm (or code) is usually written by a coder (if you are of a certain vintage, like me, you might remember programmers and programs). A complex algorithm is often built on top of other, simpler algorithms. A simple example of an algorithm is the following set of actions for playing noughts and crosses (tic-tac-toe), sketched in code after the list:

  1. If someone has a "threat" (that is, two in a row), take the remaining square. Otherwise,

  2. if a move "forks" to create two threats at once, play that move. Otherwise,

  3. take the centre square if it is free. Otherwise,

  4. if your opponent has played in a corner, take the opposite corner. Otherwise,

  5. take an empty corner if one exists. Otherwise,

  6. take any empty square.
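
Here is that sketch: a minimal Python version of the six rules. The board layout (a list of nine cells indexed 0 to 8, read row by row) and the function names are my own choices for illustration, not part of any standard library.

```python
# A minimal sketch of the six rules above. Each cell holds "X", "O" or None.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals


def winning_move(board, player):
    """Return the square that completes two-in-a-row for player, or None."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count(player) == 2 and cells.count(None) == 1:
            return (a, b, c)[cells.index(None)]
    return None


def forking_move(board, player):
    """Return a square that creates two threats at once, or None."""
    for square in range(9):
        if board[square] is not None:
            continue
        trial = board[:]
        trial[square] = player
        threats = 0
        for a, b, c in LINES:
            cells = [trial[a], trial[b], trial[c]]
            if cells.count(player) == 2 and cells.count(None) == 1:
                threats += 1
        if threats >= 2:
            return square
    return None


def choose_move(board, me, opponent):
    """Apply the six rules in order and return the chosen square (0-8)."""
    # 1. If someone has a threat, take the remaining square (win first, then block).
    for player in (me, opponent):
        square = winning_move(board, player)
        if square is not None:
            return square
    # 2. If a move forks to create two threats at once, play it.
    square = forking_move(board, me)
    if square is not None:
        return square
    # 3. Take the centre square if it is free.
    if board[4] is None:
        return 4
    # 4. If the opponent has played in a corner, take the opposite corner.
    for corner, opposite in ((0, 8), (2, 6), (6, 2), (8, 0)):
        if board[corner] == opponent and board[opposite] is None:
            return opposite
    # 5. Take an empty corner if one exists.
    for corner in (0, 2, 6, 8):
        if board[corner] is None:
            return corner
    # 6. Take any empty square.
    return next(i for i in range(9) if board[i] is None)
```

On an empty board, for example, choose_move([None] * 9, "X", "O") returns 4, the centre square: rules 1 and 2 do not apply, so rule 3 fires.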

This algorithm/code/program will most likely result in a win if you go first, or at least secure a draw. In recent years, AI has defeated the strongest chess programs (AlphaZero) and an 18-time world champion at Go (AlphaGo). Some AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can even write new algorithms to increase their chances of success. Both AlphaZero and AlphaGo “taught” themselves to play. Depending on your perspective, this is where things get interesting, or scary, or both.


AI research has no guiding standards per se. This makes the field fertile territory for researchers and innovation, but to the layman it can appear complicated and chaotic. Finding areas of agreement between experts and groups can be difficult; indeed, some of the field's early key questions remain unresolved. There are many approaches, methods, tools and languages in use. If you would like to dig a little more into this, I found this source the most digestible.


Most people are surprised that the formal study of AI dates back to the 1950s, although the concept of intelligent automata goes back (at least) to the ancient Greeks. Since the 1950s there have been several cycles of intense interest (and funding) followed by stagnation (and lack of funding). The last interest peak, before the current one, was in the late 1980s and early 1990s. Typical reasons for reduced interest are failing to deliver on expectations and poor communication between the many disciplines and teams involved.

What will be different this time?


There are many significant environmental differences:

  • The ubiquitous nature of computers – it is almost impossible to escape them on (the surface of) planet Earth, and we have even sent them beyond our solar system (over 11 billion miles away).

  • All those computers are gathering massive amounts of data – if you are just getting used to the Gigabyte (GB), then get ready for the Zettabyte (ZB) (1 ZB = 1,000,000,000,000 GB).

  • Computers are processing data faster than ever – this is “driven” by Moore’s Law: the number of transistors we can fit on a chip (integrated circuit) roughly doubles every two years. Quantum computing is set to make this look pedestrian, and governments around the world are throwing money at it to get ahead.

  • Data transmission speeds continue to grow – according to Nielsen’s Law, high-end bandwidth grows at a compound annual rate of 50%; to put that another way, the internet is now roughly 57 times faster than it was a decade ago (a quick check of these figures follows this list).
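
If you want to check those figures, the arithmetic is just a unit conversion and two compound-growth calculations; here is a quick sketch of my own working:

```python
# Quick check of the figures in the list above.

# 1 ZB = 10^21 bytes and 1 GB = 10^9 bytes, so:
gb_per_zb = 10**21 // 10**9
print(f"{gb_per_zb:,} GB in a zettabyte")    # 1,000,000,000,000

# Moore's Law: transistor counts roughly double every two years,
# so over a decade that is about 2^5 = 32x more transistors per chip.
print(2 ** (10 / 2))                          # 32.0

# Nielsen's Law: bandwidth grows ~50% per year, compounded over ten years.
print(round(1.5 ** 10, 1))                    # ~57.7, the "57 times faster" figure
```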

We (humans) have chosen to make all these changes happen. We are generating all the data. For some of us, sharing almost every aspect of our lives (deliberately or not, consciously or not) with computers is an addiction. We have even coined the phrase the “attention economy”. Look around the next time you are in any public space: our default is to reach for our “phones”. Some have said that we are already cyborgs.


My personal prediction is that AI will change everything. In many ways it already has. The news you read has been curated for you. Your social media feeds are manipulated to suit your avatar. If you are not in education, then around 80% of the information you digest comes through a computer (of some sort). AI algorithms are listening to you, watching you, trying to understand your behaviour: partly to meet your needs, but mostly to sell you something and to make money for the distribution channel owners.


I think that AI is no more than the latest label we have put on the computing revolution. The quest for some in AI research is Artificial General Intelligence (AGI); think of this as a self-aware computer. I think “Conscious Computing” should be the name for thinking ethically and morally about how to proceed down this road we have chosen, rather than being our aim. Not because a conscious computer is scary, far from it. I just think that we (almost all of us) have little or no understanding of who or what we are, and that does not position us well to code other forms of consciousness.


Perhaps you have some different views on AI? Please get in touch or leave a comment below…


Be happy, healthy and helpful


Paul


How to get in touch

1. Subscribe to my blog here: www.paultranter.me

2. Connect with me on Twitter: https://twitter.com/GetPaulTranter (@GetPaulTranter)

I look forward to hearing from you!


P.S. Feel free to share this on your social media.
