Human Learning versus Artificial Intelligence

We may know a lot about component anatomy, mechanical function, biochemistry, and neurophysiology. However, we do not fully understand how the brain works, so building an artificial brain remains a formidable challenge. Even if a single high-powered computer were assigned to each individual sensory input and combined with rich factual information sources, the combined input would still require rapid interpretation, and interpretation requires hierarchical coding so that all input can be considered sequentially in probabilistic decision-modeling. The decision trees ultimately determine whether an adequate input of both sensory and retrieved data (past and current) has occurred to support good decisions. Such is the manner of artificial intelligence and deep learning.
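To make that idea concrete, here is a minimal, illustrative Python sketch of combining several sensory inputs with retrieved (prior) data into a single probability, and only deciding when the combined evidence is adequate. The names (`SensorReading`, `decide`), the weighting scheme, and the 0.8 evidence threshold are assumptions chosen for illustration, not anything prescribed above.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    name: str          # which sensory channel produced the reading
    signal: bool       # was the relevant signal detected?
    confidence: float  # how reliable this channel is, 0.0 - 1.0

def combined_probability(readings, retrieved_prior):
    """Hierarchically fold each reading into a running probability of 'yes'.

    Starts from a prior based on retrieved (past) data, then nudges the
    estimate up or down for each sensory input, weighted by confidence.
    """
    p = retrieved_prior
    for r in readings:
        if r.signal:
            p = p + (1.0 - p) * r.confidence  # evidence for 'yes'
        else:
            p = p * (1.0 - r.confidence)      # evidence against
    return p

def decide(readings, retrieved_prior, threshold=0.8):
    """Return 'yes', 'no', or the calculated probability when uncertain."""
    p = combined_probability(readings, retrieved_prior)
    if p >= threshold:
        return "yes"
    if p <= 1.0 - threshold:
        return "no"
    return p  # not enough combined input for a confident decision

readings = [
    SensorReading("vision", True, 0.9),
    SensorReading("hearing", True, 0.6),
    SensorReading("touch", False, 0.3),
]
print(decide(readings, retrieved_prior=0.5))
```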

Unfortunately, we cannot force input data into our brains twenty-four hours per day like a machine, nor immediately store information (e.g., as on an HDD, SSD, or hybrid device) with essentially one-hundred-percent-accurate short- and long-term retrieval. Instead, humans apply the skill of rapid recollection, application, interpolation, and extrapolation of our more volatile, combined short-term and long-term memories (STMs and LTMs).

Machines have no ability to qualitatively experience life, but collectors and sensors may be integrated to recognize the presence or absence of relevant signals. With a sufficient number of machine assessments of environmental and informational data, the interpretation and summation of those assessments yields machines able to return “yes,” “no,” or a calculated probability of “yes,” depending upon programming. A related “human learning algorithm” suggests that after approximately 10,000 high-quality (HQ) experiences, a human should master a skill, unless physically or intellectually incapable. Such a statement takes into account human attention span, fatigue, error, imprecise repetition, and volatile memory, factors that are irrelevant to, or of markedly less significance in, machine learning.

Depending upon the nature of the task, a machine may “learn” that “if A, then B” or, for A0 → An, that B0 → Bn is highly probable, far more rapidly than in 10,000 trials. More significantly, the machine can perform the learning trials much faster, retaining iterative results with essentially flawless memory. Most importantly, whereas humans must engage in time-consuming, often challenging learning of prerequisite theoretical and functional facts before learning new skills, foundational data are simply, and much more briskly, uploaded into machine memory: the programmer loads all relevant precursor data into the system before initiating the machine-learning process.
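As an illustration of how few “trials” a machine may need, the minimal Python sketch below simply memorizes each observed A → B pair once and thereafter recalls it with flawless retention, in contrast to the roughly 10,000 imperfect repetitions suggested for a human. The example pairs and the `learn_mapping` name are illustrative assumptions.

```python
def learn_mapping(training_pairs):
    """Memorize each observed (A, B) pair exactly; one exposure suffices."""
    memory = {}
    trials = 0
    for a, b in training_pairs:
        trials += 1
        memory[a] = b  # stored once, retrieved later without decay or error
    return memory, trials

# Foundational data "uploaded" by the programmer before learning begins.
training_pairs = [("A0", "B0"), ("A1", "B1"), ("A2", "B2")]

memory, trials = learn_mapping(training_pairs)
print(f"Learned {len(memory)} rules in {trials} trials")  # 3 trials, not 10,000
print(memory["A1"])  # exact, non-volatile recall: 'B1'
```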

Unlike machines, humans experience the world around them, and they do so both qualitatively and quantitatively. Humans also learn differently compared both to machines and to each other. Given comparable targets, the primary differences in human learning relate to genetic capabilities, systems’ willingness to effectively assess and address instructional and learning performance deficits, the use of qualified (as distinguished from merely certificate-holding) and highly motivated instructors, safe, effectively resourced learning environments, and an absence of sociopolitical distractions. A few significant barriers to efficient learning are the instructor’s class performance objectives, grading systems, and competency/advancement criteria. When these barriers are overcome, complementary student-retention strategies are employed, and instructors’ objective is truly to “leave nobody incompetent and behind,” we will see successful student learning.

In quantitative work, if all agree that A (2+2) equals B (4), we can also agree to similar functional relationships and other applications of quantitative logic: “if A, then B.” Akin to machines, students should not have to “recreate the wheel,” spending voluminous, precious time seeking alternative ways to learn the foundational material. Rather, via simple, HQ base-concept instruction and subsequent exposure to as many applicable, fully detailed, relevant examples of “if A, then B” as possible, students should, like machines, be able to recognize exact reproductions of those examples, as well as interpolate and extrapolate to other challenges given similar information. They should also be trained to focus on the important data and observations, separate from diversionary, ancillary information.
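A minimal sketch of that last point, assuming a purely numerical A → B relationship (the taught pairs and the least-squares fit below are illustrative choices, not a prescribed method): the program returns the exact answer for a taught example, and otherwise interpolates or extrapolates from the pattern the examples establish.

```python
def fit_line(examples):
    """Least-squares fit of B = slope * A + intercept from taught (A, B) pairs."""
    n = len(examples)
    mean_a = sum(a for a, _ in examples) / n
    mean_b = sum(b for _, b in examples) / n
    slope = (sum((a - mean_a) * (b - mean_b) for a, b in examples)
             / sum((a - mean_a) ** 2 for a, _ in examples))
    return slope, mean_b - slope * mean_a

def answer(a, examples):
    """Exact recall for a taught example; otherwise interpolate/extrapolate."""
    taught = dict(examples)
    if a in taught:
        return taught[a]                  # exact reproduction of an example
    slope, intercept = fit_line(examples)
    return slope * a + intercept          # generalize beyond the examples

# Taught "if A, then B" examples: here, B happens to be twice A.
examples = [(1, 2), (2, 4), (3, 6)]
print(answer(2, examples))    # 4    (recognized exactly)
print(answer(2.5, examples))  # 5.0  (interpolated)
print(answer(10, examples))   # 20.0 (extrapolated)
```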

In qualitative work, consider the story of Little Red Riding Hood by analogy: if you allow for personal opinions of the wolf, the following may occur. Some may suggest that the wolf is good, in character for its species, misunderstood, or characterizable in countless other ways per personal perspective, rather than as the “big, bad” wolf. Such a shift would change the meaning of the story dramatically. For many other fictional and reality-based stories, or sociological and historical materials, unguided discussion of topics can become difficult. Opinions of good, bad, right, wrong, or equity, or differing definitions of facts versus beliefs, could obscure meaningful direction, making the task of programming a machine to handle the outcomes of such discussions very tedious, and downstream decision-making arduous if not impossible. The suggestion is that human learning of qualitative subjects can become extremely challenging when we do not apply rules that briskly and logically filter the noise of alternatives (B1 → Bn).

For qualitative subjects and other subjects with minimal mathematics- or logic-based content, the volume of material is often a greater challenge than understanding it. Given that time available for retention is not infinite, instructors should determine the acceptable A → B relationships that they wish the students to acknowledge and retain. They should also present the supporting materials and relationships succinctly, or clearly convey that taking alternative A → B positions is acceptable if well supported with good arguments. Instructors should also state their definition of “well supported” as precisely as possible.

In summary, artificial intelligence, or machine learning and related decision-making, can be very efficient given robust input, high-quality instruction, and superior decision-tree programming. Human learning can be as robust. However, because students lack instantaneous STM/LTM-directed data upload capability and cannot engage in extended twenty-four-hours-per-day learning like machines, instructors need to engage students’ time very efficiently. Instructors should ensure delivery of very high-quality, concisely presented data, information relationships, and sample applications that succinctly demonstrate a sufficiently comprehensive application of the information. Students should not need to waste time pursuing complementary and alternative teaching resources to understand foundational concepts, relationships, and examples of their effective application. Additional learning enhancement by students should be discretionary.

  • In quantitative work, if all agree that a function such as A (2+2) equals B (4), we can all effectively specify: if A, then B → and we are done, moving on to other relationships and exposure to multiple examples.

  • When the materials are qualitative and/or subjective, ensure that the facts, positions, and doctrines of the instructors’ and institution’s platforms are clearly expressed for retention and mimicry, and that the acceptable format and style for presenting both instructor/institutional and alternative opinions are clearly delineated. It is plainly inappropriate and inefficient to apply assessment criteria to student work after the fact, rather than clearly describing expectations and effectively teaching and reinforcing the skills to be learned and demonstrated before assigning work or delivering examinations. Then A (acceptable: facts, beliefs, positions, desired presentation rules) will yield B (expected: content retention, positions representation, profiles adoption, style demonstration), and we can specify: if A, then B → and we are done.

Some would suggest that the above approach does not promote free thinking and creativity, or even results in their loss. To the contrary, selective, high-quality education is not defined by entering as A(blank slate) and graduating as B(whatever you want). Rather, people minimally want to enter an institution as A(other) → graduate as B(mini-Me), or to enter as A(other) → graduate as B(maxi-Me). In the latter scenario the student develops extraordinarily, founded upon, yet exceeding, the knowledge, skills, experiences, values, perspectives, and opportunities that the chosen institution brings to the table, producing a transformed individual. Education should imbue the student with a lattice of superior, leading-edge, education-guided facts, skills, and experiences that the student may both concurrently and subsequently apply to interpolate and extrapolate beyond the limits of their formal education, as would well-constructed artificial intelligence.

What does this mean for a student? Start developing an AI-like strategy, including instructor interactions that yield efficient gathering of the core, foundational data with which to pursue a similar information-management approach. It also means pushing academic administration, student-achievement and retention resources, and teaching staff to be more effective than ever before, ensuring that they share with you the best possible core, foundational knowledge, skills, and experience they have, so that you do not need to independently recreate the wheel. Be an Academic High Achiever: a highly effective human learner.
