Yaqin Zhang

How does Artificial Intelligence work?

a brief introduction


To avoid the burden of explaining the complex algorithms behind Machine Learning, here is a simplified version of how AI works (a toy code sketch follows the list):


(and don't worry, passionate readers! We will dive into the actual Machine Learning algorithms in later posts)

  1. Gathers data collected by sensors

  2. Randomly separates the data into two groups

  3. Observes one group of data, eliminates anomalies (oddly high or low data points) and finds a pattern

  4. Applies the pattern to the second group and evaluates the pattern's accuracy

  5. (If the accuracy is desirable) uses the pattern to predict future data
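
For curious readers, here is a minimal sketch of those five steps in Python. The sensor readings are made up, and the pipeline leans on scikit-learn's train/test utilities; real systems are, of course, far more elaborate.

```python
# A toy walk-through of the five steps above (illustrative data, not a real sensor feed).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

# 1. Gather "sensor" data: pretend hourly temperature readings and energy use.
rng = np.random.default_rng(0)
temperature = rng.uniform(10, 35, size=200).reshape(-1, 1)
energy_use = 2.5 * temperature.ravel() + rng.normal(0, 3, size=200)

# 2. Randomly separate the data into two groups.
X_train, X_test, y_train, y_test = train_test_split(
    temperature, energy_use, test_size=0.5, random_state=0)

# 3. Eliminate anomalies (values far from the mean) and find a pattern.
z = np.abs((y_train - y_train.mean()) / y_train.std())
keep = z < 3                                  # keep points within 3 standard deviations
model = LinearRegression().fit(X_train[keep], y_train[keep])

# 4. Apply the pattern to the second group and evaluate its accuracy.
accuracy = model.score(X_test, y_test)        # R^2 on data the model has never seen

# 5. If the accuracy is desirable, use the pattern to predict future data.
if accuracy > 0.8:
    print(model.predict([[30.0]]))            # predicted energy use at 30 degrees
```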

In our HAI Lab, we read a few works of science fiction that illustrate what more developed AI robots might look like. These imagined futures rest on the assumptions that a) society supports a large number of sensors (including weather sensors, surveillance cameras, internet browsing histories...) that track and record massive amounts of data; and b) hardware engineering has overcome limits on speed and storage capacity.


Based on my observations, I have generalized three main differences between AI behavior and human behavior:


1. Faster (so much!) reaction time

The fastest supercomputer today can calculate at a speed that no individual or group of people can match. According to an ORNL statement[1], "If every person on Earth completed one calculation per second, it would take the world population 305 days to do what Summit can do in 1 second." In comparison, human decision making is rather time consuming. Consider the process of buying a gift for a friend: we need to browse through the friend's social media to see what their preferences are, then check online for a seller that offers a reasonable price. Even at the last step, when we are checking out, we need to take out our wallet, search for a credit card that has not reached its limit (feeling sad for a little while, maybe), and then copy in the card number and security code. The whole process can take anywhere from an hour to a few days! Whereas algorithms know what we want right after we secretly watch a few funny cat videos on YouTube -- we immediately start receiving pop-up advertisements for cat toys, cat posters, or even cat adoption.


I mean, check out this advertisement! Isn't it exactly what I need now?
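
(A quick aside on the ORNL comparison above: the arithmetic roughly checks out. The figures below, Summit's roughly 200-petaflop peak and a world population of about 7.6 billion at the time, are my own assumptions and not part of the quoted statement.)

```python
# Rough sanity check of the "305 days" comparison (all figures are assumptions).
summit_ops_per_second = 200e15          # ~200 petaflops peak performance (assumed)
world_population = 7.6e9                # roughly 7.6 billion people (assumed)

seconds_needed = summit_ops_per_second / world_population   # one calculation per person per second
days_needed = seconds_needed / 86_400                       # seconds in a day
print(f"about {days_needed:.0f} days")                      # prints about 305 days
```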

2. Strictly guided by data and algorithms

The data set is selected, the algorithm is written, and calculations are done based on the given data. This has both good and bad implications. The good side is that, given the known information and standards, the algorithm can deliver the best-fitting result. The bad side is that if the algorithm learns from biased data or has a skewed standard, it has no ability to correct them; it only follows, or even exaggerates, those biases. As Christopher Slobogin states about using AI algorithms in court decisions, the benefit can be significant in theory: "Fewer people of all ethnicities would be put in jail prior to trial and in prison after conviction, the duration of sentences would be reduced for low-risk offenders, and treatment resources would be more efficiently allocated". However, such benefits are only achievable if we understand crime as a social problem and ensure a fairly representative data base and a non-discriminating ruling standard, which have not been achieved yet; as he states, "(those benefits are not achievable) unless the currently popular determinate sentencing structure that exists in most states is dramatically altered"[2].


3. Avoids anomalies

In Machine Learning, data that falls far away from the majority will be marked and eliminated. It seems that anomalies can be detected as easily as Gerald (see the figure below). In a human's decision making process, anomalies are hard to ignore. For example, we all have those friends whom we no longer talk to because they once badly hurt our feelings. Emotions chime in when we think and exaggerate the impact of those "anomalies", to the extent that we let those anomalous moments take charge.


Not too hard to find an outlier in this case.
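
For the curious, a very simple rule for flagging such points is the common 1.5 x IQR convention. The numbers below are made up, and the threshold itself is a choice rather than a law (which is exactly the point of the next paragraph).

```python
import numpy as np

# Flag values that fall outside 1.5 * IQR of the middle 50% of the data.
heights = np.array([150, 152, 149, 151, 153, 148, 150, 410])   # the last value is our "Gerald"
q1, q3 = np.percentile(heights, [25, 75])
iqr = q3 - q1                                                   # spread of the middle half
is_outlier = (heights < q1 - 1.5 * iqr) | (heights > q3 + 1.5 * iqr)
print(heights[is_outlier])                                      # -> [410]
```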

However, it is too early to praise the AI algorithms. They are cold-blooded and emotionless, but there is no way to ensure that those characteristics will lead to the most rational/scientific/correct conclusion. Detecting anomalies requires making assumptions about the model and choosing how sensitive the program should be. Different anomaly detection methods will lead to different conclusions about the same data set. One common approach assumes that there is a pattern in the data (linear, parabolic, exponential...) and tries to define the shape of the data; points that lie far outside that shape are labeled as outlying observations. In order to determine a likely pattern, the programmer needs to decide on the desired sensitivity of the program. When a data set contains many scattered points, being too sensitive can lead to overfitting the data and being too insensitive can lead to underfitting it. For example, in the figure below, if the programmer decides to use a linear model, they will derive the blue line in the first graph (underfitting); if the programmer decides to use a very wiggly model that fits every single data point, they will derive the blue curve in the third graph (overfitting). (figure 1)


figure 1: underfitting and overfitting
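
As a concrete illustration (my own sketch, not the code behind figure 1), fitting the same noisy points with polynomials of increasing degree reproduces the three situations: a straight line misses the trend, a moderate curve captures it, and a high-degree curve bends toward every individual point.

```python
import numpy as np

# Fit the same noisy parabolic data with polynomials of increasing degree.
rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 20)
y = x**2 + rng.normal(0, 1, size=x.size)      # true pattern: a parabola, plus noise

for degree in (1, 2, 9):
    coeffs = np.polyfit(x, y, degree)         # least-squares polynomial fit
    error = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(degree, round(error, 2))
# degree 1 -> large error even on its own data (underfitting, the straight blue line)
# degree 2 -> error close to the noise level (a reasonable fit)
# degree 9 -> the smallest error here, because the curve chases the noise (overfitting)
```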

It might not seem like a big deal, because the trend is obvious in the previous example. However, when dealing with big data, "intuition" does not apply to high-dimensional data, and unsupervised models "can be a challenge". (figure 2) Thus, human judgment prior to the computer's work is vital in determining the conclusion.


figure 2: different models detect outliers for the same graph [3]
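
To see how much the choice of model matters, the sketch below hands the same made-up two-dimensional data to two off-the-shelf detectors from scikit-learn. The exact counts depend on the random seed and each model's settings; the point is simply that different models rarely agree on which points are the outliers.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Two tight clusters plus a few stray points: the "same graph" shown to two detectors.
rng = np.random.default_rng(2)
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
cluster_b = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
strays = np.array([[2.5, 2.5], [8.0, 0.0], [0.0, 8.0]])
X = np.vstack([cluster_a, cluster_b, strays])

# Both models label each point +1 (normal) or -1 (outlier).
iso_labels = IsolationForest(random_state=0).fit_predict(X)
lof_labels = LocalOutlierFactor(n_neighbors=20).fit_predict(X)

print("IsolationForest flags:", int(np.sum(iso_labels == -1)), "points")
print("LocalOutlierFactor flags:", int(np.sum(lof_labels == -1)), "points")
print("The two models agree on", round(float(np.mean(iso_labels == lof_labels)) * 100, 1), "% of points")
```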


References:



[2]: Slobogin, C. (2021). Preventive justice: how algorithms, parole boards, and limiting retributivism could end mass incarceration. Wake Forest Law Review, 56(1), 97-168.




