Big Data, AI, and Machine (Selectionist) Learning: A Stroll Through the Thinking of Advanced Program Design

I must admit some reluctance when I was approached about writing a blog post for the Technology SIG. My first reaction was that I had little to contribute beyond what has already been written on the blog and elsewhere. In education, there are many useful articles that distinguish between technologies of tools and technologies of process, including games and gamification. In clinical work, the use of bug-in-the-ear devices, video, and a variety of sensors has been explored. And an increasing number of apps help users set goals, monitor progress, and communicate with health or behavioral professionals.

However, as I spent time thinking about all this, I began to think about a topic that might have some value for the readers of this blog. What if—instead of an entry about how technology could facilitate behavioral practices (or vice versa)—I described how I would approach a development project that required an outcome not possible without the intertwining of behavior analysis and certain technologies?

Over a short series of posts I will attempt to do just that.

The Project

The project I will write about was proposed and planned but never completed, so we will never really know if the approach would work. As with every development project, the data generated from testing with users are what actually shape the course of development, though a plan is usually developed that at least guides initial efforts. It is this plan and how user testing influences design and development that will be the subject of this series. Although the project never moved past the planning and initial design stage, the thinking and approach described are real in the sense that they are what would have been done.

Today there are countless apps designed to help people meet individual goals. Apps provide advice, offer easy interfaces for collecting personal data, include clever graphics to represent progress toward a goal, provide feedback upon subgoal completion, award points, badges, or other acknowledgements, and may provide social support by linking to others. Some apps are designed to help individuals collect data and communicate with therapists or others who are counseling or coaching them. In almost all of these applications, the app itself is a conduit for gathering or evaluating data and providing feedback. In short, some behavior analytic principles may be incorporated into the app itself, but often the app collects data for a therapist or counselor who is implementing interventions that would likely have occurred if the same information had been gathered in another way. In essence, technology is used to provide faster—and perhaps better—data that are analyzed and acted upon much the same way as they would be without the technology.


I was part of a group asked if there was a way to effect large-scale health-related behavior change using technology. The first impulse was to do exactly what was described above—design apps and digital job aids that would allow fewer counselors to help many more people. This would include remote interviewing, distance coaching, digital call centers, and apps to record data and provide feedback. Technology would play the role of extending human reach. In truth, this approach, though not without its challenges, is a pretty straightforward application of behavior analysis and existing digital technology. Would it provide needed services? It likely would. But nothing would be invented, nothing would really change the way service is delivered, and no tools would be created that might inform and improve behavior analysis itself.

Discontent with the Status Quo

I began thinking in earnest that we needed to do something entirely different. Could we use the emerging areas of machine learning, big data, and selectionist algorithms to actually enhance our procedures? Could we go beyond simple linear behavior analysis and bring a more complex nonlinear analysis and systemic interventions to large numbers of people, without having to extensively train therapists, counselors, or coaches? In a 1968 paper, behavior analyst Israel Goldiamond and psychoanalyst Jarl Dyrud speculated that one day we might develop what they called programed clinical instruction (PCI). PCI would consist of programs targeting the repertoires whose absence was the reason for seeking therapy. Could their 1968 speculation be turned into reality with today’s technology? Several new applications of existing technology, along with the invention of new technology combinations, would be required.

Big Data

The first step was to think about where we needed to go. What was it that we wanted to see at the end of development? As I investigated big data applications, I found they were providing some fascinating results in many fields. For example, I spoke with the CEO of a major marketing firm about their application of analytic algorithms to big data. In big data applications, there is seldom a prejudgment about what data are important. The more data, the better. They drew on many sources—local traffic information, zip code demographics, census data, crime rates, pet ownership, and on and on. In this case, they used big data to identify the person most likely to use the facial cream product they were hired to market. They found that person to be a middle-aged woman with an upper-middle-class income who lived in the suburbs and owned a dog. Move the woman to the city, and the likelihood dropped off. Take away the dog, and the likelihood dropped off. Why was this the case? No one knew, and they didn’t care. They targeted this segment with advertising and promotions directed at this “person,” and sales took off.

We soon found there were similar applications in healthcare. Hospitals in a northeastern state were seeing too many readmissions within 30 days of discharge. With the help of a big data firm, they determined that older women from their area who were discharged to the home of a lower-income, working, grown child whose spouse was also working had an 80% chance of being readmitted within 30 days. What was fascinating was that this was true regardless of admission diagnosis. If only one of the couple was working, if there was no spouse, or if the patient was discharged to her own home, had a slightly higher income, or was slightly younger, the readmission rate was much lower. Again, one may speculate as to why this was the case, but for the hospital it didn’t matter. They began home visits for this group, as well as follow-up telephone communication, and greatly reduced readmission rates as a result.
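To make the kind of analysis described above concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption of mine (the field names, the synthetic records, and the hidden high-risk combination), not the hospital’s or the firm’s actual data or method. It simply tabulates 30-day readmission rates for every combination of a few discharge characteristics and surfaces the segments whose rates stand out; a real big data analysis would work over far more fields and records.

```python
# Illustrative sketch only: synthetic data and invented field names.
import random
from collections import defaultdict

random.seed(0)

AGE_BANDS = ["under_65", "65_to_79", "80_plus"]
DISCHARGE_TO = ["own_home", "adult_child_home"]
HOUSEHOLD = ["one_working_adult", "two_working_adults"]

def synthetic_record():
    r = {
        "age_band": random.choice(AGE_BANDS),
        "discharged_to": random.choice(DISCHARGE_TO),
        "household": random.choice(HOUSEHOLD),
    }
    # Build in a hidden high-risk combination, echoing the example in the text.
    high_risk = (r["age_band"] != "under_65"
                 and r["discharged_to"] == "adult_child_home"
                 and r["household"] == "two_working_adults")
    r["readmitted_30d"] = random.random() < (0.8 if high_risk else 0.2)
    return r

records = [synthetic_record() for _ in range(5000)]

# Tally readmissions per segment (per unique combination of the three fields).
counts = defaultdict(lambda: [0, 0])           # segment -> [readmissions, total]
for r in records:
    key = (r["age_band"], r["discharged_to"], r["household"])
    counts[key][0] += r["readmitted_30d"]
    counts[key][1] += 1

# Report segments whose readmission rate sits well above the overall rate.
overall = sum(r["readmitted_30d"] for r in records) / len(records)
for key, (readmits, total) in sorted(counts.items()):
    rate = readmits / total
    if rate > overall + 0.25:
        print(key, f"rate={rate:.2f}", f"n={total}")
```

The point of the sketch is only the shape of the analysis: no single field predicts much on its own, but one particular combination of fields does.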

Genetic Algorithms

It was clear that this type of big data analysis could be very important in identifying what procedures might work with what individuals. Equally interesting was that the hospital data were generated by algorithms that had not been directly programed. They were genetic algorithms that evolved based upon feedback from their increasing predictive success. Now that was exciting. Gathering large amounts of seemingly disconnected data, evaluating client success through the program, and feeding this back into evolving genetic algorithms might lead to being able to better match people to interventions. We could look at existing data on people who did or did not succeed in various life improvement programs and see if we could find initial big data correlations that appeared to predict that outcome. Perhaps we would not have to start from scratch.
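As a rough illustration of that selectionist idea, here is a minimal genetic algorithm sketch in Python. The representation (a weight vector with a fixed threshold), the synthetic clients, and the mutation and crossover details are all my own simplifying assumptions rather than anything the big data firms actually used; the point is only that candidate predictors are varied and selected on their predictive success rather than being directly programed.

```python
# Illustrative sketch only: evolve candidate predictive rules by selection
# on predictive accuracy over synthetic "client" records.
import random

random.seed(1)

N_FEATURES = 8          # e.g., age band, income band, household composition, ...
POP_SIZE = 40           # candidate rule sets per generation
GENERATIONS = 30

# Synthetic clients: 0/1 feature vectors plus an outcome label. The hidden
# pattern ties the outcome to features 2 and 5, standing in for the kind of
# unexpected combination (suburb + dog, working child + working spouse) above.
def make_client():
    x = [random.randint(0, 1) for _ in range(N_FEATURES)]
    y = 1 if (x[2] == 1 and x[5] == 1) else 0
    return x, y

data = [make_client() for _ in range(500)]

# A candidate "rule" is a weight vector; it predicts 1 when the weighted sum
# of features crosses a threshold. Fitness = predictive accuracy on the data.
def predict(weights, x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) >= 1.0 else 0

def fitness(weights):
    return sum(predict(weights, x) == y for x, y in data) / len(data)

def mutate(weights, rate=0.2):
    return [w + random.uniform(-0.3, 0.3) if random.random() < rate else w
            for w in weights]

def crossover(a, b):
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

# Evolve: keep the better half, refill with mutated crossovers of survivors.
population = [[random.uniform(-1, 1) for _ in range(N_FEATURES)]
              for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(population, key=fitness)
print("best accuracy:", round(fitness(best), 3))
print("weights on features 2 and 5:", round(best[2], 2), round(best[5], 2))
```

In a real system, the fitness step would be predictive accuracy on actual client outcomes, and the evolved predictors would be re-selected continuously as new outcome data arrived.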


Application of big data analytics utilizing genetic algorithms became part of the vision. Could we, for example, pre-identify people who would fall into the following categories:

  • Self-Motivated: give them information and they run with it
  • Motivationally Challenged: people who express interest in change, but can’t seem to do it on their own
  • Clinical Psychological Issues: people with serious life issues besides the targeted health-related patterns
  • Fatalistic or Resigned: those who believe nothing will work

By identifying individuals falling into each category, initial individual assessments and programs could be customized. For example, for someone falling into the self-motivated category, simply providing initial information, digital guidelines, and rudimentary data collection and feedback may be the only intervention required. For the motivationally challenged, a different assessment and program would likely be required. I will return to this discussion later in the series.
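As a sketch of how such routing might look in software, the snippet below maps a predicted category to a starting program configuration. The category labels mirror the four listed above, but the program components and the route_participant function are hypothetical placeholders, not part of the actual project plan.

```python
# Illustrative sketch only: route a participant to an initial program based on
# a category predicted by the big data analysis.
from dataclasses import dataclass, field
from typing import List

@dataclass
class InitialProgram:
    category: str
    components: List[str] = field(default_factory=list)

# Hypothetical mapping from predicted category to starting program elements.
PROGRAM_TEMPLATES = {
    "self_motivated": [
        "information packet", "digital guidelines",
        "basic self-recording and feedback"],
    "motivationally_challenged": [
        "structured goal-setting interview", "graduated subgoals",
        "scheduled feedback and social support"],
    "clinical_psychological_issues": [
        "referral for individual assessment",
        "coordination with a clinician before program entry"],
    "fatalistic_or_resigned": [
        "low-effort early wins to contact reinforcement",
        "progress-visibility tools before goal escalation"],
}

def route_participant(predicted_category: str) -> InitialProgram:
    """Return the starting program for a category predicted by the model."""
    components = PROGRAM_TEMPLATES.get(
        predicted_category, ["individual intake interview"])  # fallback: full intake
    return InitialProgram(category=predicted_category, components=components)

if __name__ == "__main__":
    print(route_participant("self_motivated"))
```

The fallback to a full intake interview reflects the obvious design choice that anyone the model cannot confidently categorize should simply receive the standard individual assessment.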

The Constructional Questionnaire

Next, we would need to individually assess the people participating in our behavior change programs. Now, there are all kinds of surveys and questionnaires used by psychologists, behavioral or otherwise. Often, these questionnaires try to operationalize some form of internal hypothetical construct, such as resilience or psychological flexibility. While perhaps useful under some circumstances, this form of methodological behaviorism (an operational definition based on an observable indicator response representing an internal hypothetical construct) would not be adequate to our task. We needed an assessment that helped elucidate the sets of alternative consequential contingencies that made sense of the disturbing or problematic patterns our clients were displaying. Israel Goldiamond had developed just this sort of questionnaire: the Constructional Questionnaire. It is a powerful tool that can help identify the alternative sets of contingencies responsible for current patterns as well as provide an evaluation of the client’s current relevant problem-solving repertoires. But, not unlike many behavioral procedures, it requires a trained nonlinear behavior analyst to use it effectively. Accordingly, it seemed that the analysis provided by the Constructional Questionnaire could not be delivered at anywhere near the scale required for our project. However, in rereading the case presentation guide provided to trainees, I was struck by an idea about how it might be possible after all.

The guide began with, “Weave in various items from questionnaire and other sources to present a coherent picture of a person functioning highly competently, given his circumstances and implicit or explicit goals.” The reference to “coherent picture” immediately jumped out at me. My extensive reading about genetic algorithms and selectionist approaches to big data analysis, combined with seeing the word “picture,” produced an exciting possible solution to how a Constructional Questionnaire could be used at scale. In essence, the next element of our emerging vision might indeed be possible.

What precisely was the solution? How did it occur to me? We continue our stroll in Part 2 of this series.


T. V. Joe Layng has over 40 years of experience in the experimental and applied analysis of behavior with a particular focus on the design of teaching/learning environments. In 1999, he co-founded Headsprout. At Headsprout, Joe led the scientific team that developed the technology that forms the basis of the company’s patented Early Reading and Reading Comprehension online reading programs, for which he was the chief architect. Joe earned a Ph.D. in Behavioral Science (biopsychology) at the University of Chicago where Israel Goldiamond was his advisor. At Chicago, working with pigeons, he investigated animal models of psychopathology, specifically the recurrence of pathological patterns (head-banging) as a function of normal behavioral processes. Joe also has extensive clinical behavior analysis experience with a focus on ambulatory schizophrenia especially the systemic as well as topical treatment of delusional speech and hallucinatory behavior. Joe is a fellow of the Association for Behavior Analysis International, and Chairman of the Board of Trustees, The Chicago School of Professional Psychology.



