Big Data, AI, and Machine (Selectionist) Learning: A Stroll Through the Thinking of Advanced Program Design–Part 2


Weave in various items from questionnaire and other sources to present a coherent picture of a person functioning highly competently…

That word, picture, combined with the context of selection-based algorithms, brought to strength responding related to a book I had read some years earlier.

Okay, it made me think of the book Why We Feel: The Science of Human Emotions by Victor Johnston.

In the book, Johnston describes the use of selection-based algorithms to produce a likeness of a person only briefly seen. Producing such a likeness is routinely performed in police investigations and is typically done by an observer describing facial characteristics to a graphic artist. Instead of this typical method, Johnston displayed on a computer screen various combinations of facial features. The observer then rated the characteristics and the face as to how well they matched the actual person observed. The algorithms generated variations, selected against variations that received low ratings, and presented new combinations based upon known variation principles. Within a short time, the algorithms provided a picture of the person that was a very close match to the person observed and was, in fact, much better than the artist’s sketch.
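The procedure Johnston describes is essentially an interactive genetic algorithm: candidate faces are combinations of features, the observer's ratings serve as the fitness function, low-rated variants are selected against, and new combinations are bred from the survivors. A minimal sketch, assuming a simple feature library (all names, sizes, and parameters here are illustrative, not Johnston's actual implementation):

```python
import random

# Illustrative feature library; a candidate face is one variant per feature.
FEATURES = ["eyes", "nose", "mouth", "jaw", "brow"]
VARIANTS_PER_FEATURE = 10  # assumed size of each feature library

def random_face():
    return {f: random.randrange(VARIANTS_PER_FEATURE) for f in FEATURES}

def crossover(a, b):
    # Each feature is inherited from one of the two parent faces.
    return {f: random.choice((a[f], b[f])) for f in FEATURES}

def mutate(face, rate=0.1):
    # Occasionally swap a feature for a fresh variant to maintain variation.
    return {f: random.randrange(VARIANTS_PER_FEATURE) if random.random() < rate else v
            for f, v in face.items()}

def evolve(rate_fn, pop_size=8, generations=20):
    # Select against low-rated variants; breed new combinations from the rest.
    population = [random_face() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(population, key=rate_fn, reverse=True)[: pop_size // 2]
        offspring = [mutate(crossover(*random.sample(survivors, 2)))
                     for _ in range(pop_size - len(survivors))]
        population = survivors + offspring
    return max(population, key=rate_fn)
```

In the real procedure, `rate_fn` would be the observer rating each displayed face; here it is whatever scoring function the caller supplies.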

This immediately raised a question: could a similar procedure be used to provide “a coherent picture of a person functioning highly competently…?” A questionnaire is based on asking questions, of course, but so is an artist’s sketch. Could a series of multiple-choice or yes-no questions be used to generate an early picture, a written scenario, whose features and overall description could then be rated? Could interacting with the program accurately produce (as rated by the user) a picture much like the one derived from the Constructional Questionnaire?

The Constructional Questionnaire begins:

“I am going to ask you a group of questions about your goals. You are here because you want certain changes to occur, or want something else. (a. Presented outcome) The first of these is: Assuming we were successful, what would the outcome be for you?”

At this point the client typically talks in generalities about being less anxious, depressed, etc. Or if more positive, “I would be a better communicator.” The client is allowed to say whatever they like with little direction.

The next series brings much more specificity to the answers by asking what precisely the client would be doing if they were not anxious, depressed, etc. This is not always easy. People are used to speaking pathologically. That is, they can easily say what they would like to stop doing. But, describing what they would be doing if the present problem were not a problem can be challenging for many. To help with this, the next question was creatively designed to produce useful answers:

“Now, this may sound silly, but suppose one of these flying saucers is for real. It lands and 2,000 little Martians pour out. One of them is assigned to observe you—your name was chosen by their computer on some random basis. He lands some time after L-Day—Liberation Day from your problems—and follows you around invisibly. He records his observations and these are sent back to Mars. Their computer will decide on the basis of the sample of 2,000 Earthlings they have what their disposition toward Earth should be. What does he observe? (Alternate or added form: What would others observe when the successful outcome was obtained?)”

The client is then asked to describe what the Martian sees from waking in the morning to falling asleep at night, on weekdays and the weekend. Remember, this is life after the problems are solved. When a patient says, “I would have a happy exchange with the receptionist,” the interviewer responds by saying, “What does the Martian see when you are having a happy exchange?” Eventually, a picture of how the client’s life would change is provided. It is comprehensive and thorough and is likely something to which the client has given very little thought.

The next question asks, “How does this differ from the present state of affairs? Can you give me an example?” Here is where the client describes and gives examples of the difference between the way things are now and where they would be if all was well. It provides the before and after from which the success of the intervention will be determined.

Other questions are designed to explore other important areas, such as whether there is a “hidden agenda” not mentioned in the outcome question, what is going well and would not change, and one which often provides revealing answers: “You’ve heard of the proverb, ‘It is an ill wind that blows no good.’ With regard to some advantages that might have ‘blown your way,’ has your problem ever produced any special advantages or considerations for you? (Examples: in school, job, at home) Please give specific examples.” The full questionnaire may be seen here:


Two possibilities immediately emerged. First, could we ask some carefully designed multiple-choice questions that would allow us to construct a completed scenario for the question about successful outcomes, which could then have its components as well as its overall picture rated? We could iterate this process, using composite scenarios with the selectionist algorithms producing variants, until the components and composite received high ratings for accuracy.

The second approach would take smaller steps. Instead of constructing a composite scenario and rating its various components and the overall picture presented, this approach would present one component at a time for rating. Once a component evolved into a highly rated one, another component would be added and the process repeated until a composite was produced. For example, “What would the Martian see when you awake, arrive at work, at lunch, etc.?” Mini-scenarios would be generated and put through the process. Responses to earlier questions might be useful for producing subsequent variants. These components would eventually be assembled into a larger composite for evaluation.
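The smaller-steps approach can be sketched as a loop over scenario "slots" (morning, work, lunch, and so on): within each slot, candidate phrasings compete on the user's ratings, and the winning component is fixed before the next slot is opened. The slots, candidate texts, and rating function below are entirely illustrative, not the actual questionnaire content:

```python
import random

# Illustrative scenario slots, each with a pool of candidate phrasings.
SLOTS = {
    "morning": ["wakes calmly and makes breakfast",
                "hits snooze repeatedly",
                "wakes early and goes for a run"],
    "work":    ["greets the receptionist warmly",
                "avoids coworkers",
                "leads the morning meeting"],
}

def evolve_slot(slot, variants, rate, rounds=5):
    # Keep the highest-rated variant; in a real system new variants would be
    # generated from it (paraphrases, recombinations) rather than resampled.
    best = max(variants, key=lambda v: rate(slot, v))
    for _ in range(rounds):
        candidate = random.choice(variants)
        if rate(slot, candidate) > rate(slot, best):
            best = candidate
    return best

def build_composite(rate):
    # Evolve each component in turn, then assemble the composite scenario.
    return "; ".join(evolve_slot(s, vs, rate) for s, vs in SLOTS.items())
```

Here `rate(slot, variant)` stands in for the user's rating of one mini-scenario; the composite produced at the end would itself then be presented for an overall rating.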

Would one approach or the other yield better results? Would both yield the same result, with one simply more efficient than the other? Could it be done at all? As to the last question, given my investigation into how Johnston constructed his algorithms, I was confident it would work. The accuracy of the scenarios could be evaluated using ratings as well as by comparing the results to verbal pictures produced during actual interviews. Once there was some confidence in the outcome, an evaluation could be conducted to determine how well programs whose outcomes were defined by this automated selectionist interview actually performed when used to help individuals meet their goals.

Think about it for a minute. Here was a method that had the potential to bring the benefits of this very sophisticated interview to thousands of people. An interview that specifies not only the consequences maintaining the client’s disturbing patterns given their alternatives, but that could specify outcomes having the same or better benefits at less cost to the individual. Further, it might be possible to produce different versions of the interview to match individual characteristics determined by a big data analysis. Earlier I described four possible categories into which potential uses might fall. Each person in a particular category could be provided with a separate interview customized for their category. Our research and development plan would need to coordinate these two features. Further, our individual assessment results would be fed back to the big data algorithms to help define and make more precise the categories and their predictive validity.


So now two elements of what would be required were specified: the big data feature and the selection-based automated interview based on the Constructional Questionnaire. But once the contingency analysis was made and a case guide such as described in Part 1 was generated, a delivery program would be required. This would not simply be an app that provided goals and feedback, but one that evaluated progress, collected the right kind of information (including client emotions), adapted to user performance, benefitted from big data analytics, and would be able to switch from “topical” to “systemic” approaches to achieving client outcomes.

Now an app would have to be created that could provide both topical and systemic programs, collect personal data, and connect everything to big data analytics and ongoing selection—the topic of Part 3.

T. V. Joe Layng has over 40 years of experience in the experimental and applied analysis of behavior with a particular focus on the design of teaching/learning environments. In 1999, he co-founded Headsprout. At Headsprout, Joe led the scientific team that developed the technology that forms the basis of the company’s patented Early Reading and Reading Comprehension online reading programs, for which he was the chief architect. Joe earned a Ph.D. in Behavioral Science (biopsychology) at the University of Chicago, where Israel Goldiamond was his advisor. At Chicago, working with pigeons, he investigated animal models of psychopathology, specifically the recurrence of pathological patterns (head-banging) as a function of normal behavioral processes. Joe also has extensive clinical behavior analysis experience with a focus on ambulatory schizophrenia, especially the systemic as well as topical treatment of delusional speech and hallucinatory behavior. Joe is a fellow of the Association for Behavior Analysis International, and Chairman of the Board of Trustees, The Chicago School of Professional Psychology.

