In Part 1 of this series, I described the task of automating delivery of guided expert help. To reach thousands of people using trained professionals is a daunting and expensive proposition. Our goal was to serve tens of thousands of people with little direct human intervention. We reasoned that by employing big data analytics, some aspects of artificial intelligence, and machine learning (selectionist) algorithms, we could design a program that interacted with a user much the same as would a human counselor. And perhaps eventually—as the program learned more and more—even better. In Part 1 I described the technologies available. In Part 2 I described how those technologies could be used to assess and prescribe strategies and goals tailored to the individual user. This final entry discusses how we approached designing an app that would guide the user through their program and enable them to reach their critical goals.
Many of us have used or at least seen clever apps available to assist us in meeting behavioral goals. These apps gather data—from user input, sensors, or GPS—and present that data in ways designed to help us meet our goals. Some help set goals, some provide advice, some connect us to support communities, and some allow us to consult with experts or more knowledgeable people. Some apps are designed to be used in therapy, allowing clients to record their thoughts, feelings, and actions and later send them to a therapist. Although data are collected and analyzed and some advice or activities may be supported through audio or video coaching, most interventions are left to human coaches or therapists. While all of this may be helpful, the cost of providing a personalized, therapeutic intervention at scale remains high. Would it be possible to automate clinical intervention?
The solution we designed would involve using commercially available machine learning software whose adaptive algorithms would allow us to pinpoint the complex interrelationship of multiple variables that influence behavior and produce highly accurate recommendations and results. We would use these algorithms and other artificial intelligence-based software to transform big data and our Constructional Questionnaire into personalized goals and subgoals that we hoped would increase enrollment and engagement of large populations. We then reasoned we could use the same underlying adaptive algorithm to create personalized plans that incorporated health behaviors into the consequentially important patterns of an individual’s daily life.
The app would have to generate individual target repertoires or goals for users based upon a combination of big data analysis and the Constructional Questionnaire. These goals would be broken into subgoals based on each person’s current relevant repertoire. Each week’s subgoals would be based on the past week’s performance and the analysis of the contingencies responsible for that performance. We wanted to begin with the leanest intervention possible that would allow our users to meet their subgoals, which would ultimately lead to meeting their target goals.
Our model was Goldiamond’s (1974, 1975) constructional approach to self-control. In this approach, weekly subgoals are typically determined after a constructional therapist and the client analyze daily logs kept by the client. Each subgoal is chosen based upon its relation to the final goals. The daily log is often the key to success. Using the logs, clients can identify if they are getting what they want out of their daily interactions and activities. They can determine if the reinforcers important to them are likely to occur as a result of what they are doing. They can test different approaches and adjust their behavior in order to achieve what they value. Clients make an entry every hour on average. An example of such a log is provided below.
| No. | Time & Duration | Audience, Place, Conditions | Activity Intended, What I Wanted | Activity, What I Got | Comments, Emotions |
|---|---|---|---|---|---|
Of great significance is the distinction between what was wanted and what actually transpired (“What I Got”). “What I Wanted” speaks to the potentiating variables operative at that time. That is, what the likely reinforcers were at that point and why they were important. These may be either positive reinforcers (“Him to ask me out”) or negative reinforcers (“Mom to stop nagging me about homework”). Emotions are used as windows into the contingencies operating. Through a series of questions and a dialogue with the client, instances are cumulatively examined and analyzed until next steps emerge. For example, we may discover that the only real interaction with mom comes in the form of nagging and that no instances of close interaction were recorded. This suggests that the consequences of nagging, and what occasions it, may be critical to understanding the contingencies of which the overall pattern is a function. That is, when combined with what is in the log, what isn’t in the log may be as important as what is in it. We may find that although the nagging is aversive to a certain extent, it is also reinforcing, bringing close social contact not otherwise available. The question would be raised, “How can a valued and meaningful interaction with mom be developed without the behaviors that occasion nagging?” The logs would be examined to look for a place to start—where a brief, meaningful interaction could occur. That interaction would become a subgoal for the next week, and the results would be evaluated during the next session. In the next week’s logs we might see:
Activity Intended, What I Wanted: A nice interaction with mom.
Activity, What I Got: Asked mom about how she picks out handbags since she always looks so great. She explained it was a balance between utility, activity, and matching one’s clothes.
Comments, Emotions: Felt really good to have these minutes just talking, I almost cried.
And in a later entry: I noticed she nagged me less, too. Maybe this is how we have learned to interact with each other.
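At bottom, an entry like the ones above is a simple record. As a purely illustrative sketch (the `LogEntry` class, its field names, and the `discrepancy` helper are my own invention, not part of any system that was built), an hourly entry might be represented like this:

```python
from dataclasses import dataclass

# Hypothetical representation of one hourly constructional-log entry.
# Field names mirror the log columns shown earlier.
@dataclass
class LogEntry:
    number: int
    time_and_duration: str   # e.g. "7:30 pm, 10 min"
    conditions: str          # Audience, Place, Conditions
    intended: str            # Activity Intended, What I Wanted
    got: str                 # Activity, What I Got
    comments: str = ""       # Comments, Emotions

    def discrepancy(self) -> bool:
        """Flag entries where what was wanted differs from what occurred."""
        return self.intended.strip().lower() != self.got.strip().lower()

entry = LogEntry(
    number=12,
    time_and_duration="7:30 pm, 10 min",
    conditions="Home, kitchen, mom present",
    intended="A nice interaction with mom",
    got="Mom nagged me about homework",
    comments="Irritated, but want closeness",
)
```

The want/got discrepancy is exactly the signal a constructional analysis starts from, so flagging it mechanically is the natural first automation step.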
But how can an app facilitate this type of analysis and interaction? From the constructional assessment and big data analysis, possible activities and “wants” related to possible circumstances (audience, place, conditions) could be determined through the application of machine learning techniques not unlike those described in Part 2 of this series. The circumstances could be built from a menu of items proposed by the app. Another field would allow manual entry, which could later be incorporated into the menu structure. Once these were determined, possible activities intended would be proposed from which the client would select. Again, these would be generated by not only current inputs, but also past inputs and other big data sources, both individual and group, and by evaluating how close the proposed scenario was to the actual activity intended. Further, the Comments, Emotions field could be used to suggest the consequential contingencies operating. For example, after the entry on mom nagging, an emotional response might be “irritated, but want closeness.” “Want closeness” could be identified as indicating that there may be a positive reinforcer operating and produce a clarifying question that would not otherwise occur.
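To make the last point concrete, here is a crude stand-in for the machine learning just described: a keyword table mapping phrases in the Comments, Emotions field to a candidate reinforcer class and a clarifying question the app might ask. The `CONTINGENCY_CUES` table and both example cues are hypothetical, chosen only to match the nagging example above:

```python
# Illustrative heuristic only; a real system would learn these
# mappings rather than hard-code them.
CONTINGENCY_CUES = {
    "want closeness": (
        "possible positive reinforcer: close social contact",
        "When do you and your mom spend relaxed time together?",
    ),
    "relieved": (
        "possible negative reinforcer: escape from an aversive demand",
        "What were you able to stop doing, or avoid, just before this?",
    ),
}

def analyze_comment(comment: str):
    """Return (contingency hint, clarifying question) pairs for cues found."""
    lowered = comment.lower()
    return [pair for cue, pair in CONTINGENCY_CUES.items() if cue in lowered]

hits = analyze_comment("Irritated, but want closeness")
```

Here `hits` contains the single positive-reinforcer hint, which would trigger the clarifying question that would not otherwise occur.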
As the software is used over time, variations in activities based on the hourly entries, and their commonalities and differences, would evolve until the process of offering log entries for each cell became more accurate and more closely linked to what the client is after. The emotional comments would serve as selections indicating the consequential contingencies operating; machine learning algorithms would increasingly relate those comments to the log entries and use them to suggest subgoals. The subgoals would in turn go through a rating system, the results of which would be fed back to the selectionist algorithms operating throughout the program.
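A minimal sketch of such a selectionist rating loop, assuming a simple multiplicative weight update in place of the commercial machine learning software (the `SubgoalSelector` class, the 1–5 rating scale, and the 0.25 step size are all my assumptions):

```python
import random

# Candidate subgoals carry weights; user ratings act as selecting
# consequences that shift which suggestions survive and recur.
class SubgoalSelector:
    def __init__(self, subgoals):
        self.weights = {s: 1.0 for s in subgoals}

    def suggest(self, rng=random):
        """Sample a subgoal in proportion to its current weight."""
        subgoals = list(self.weights)
        return rng.choices(subgoals,
                           weights=[self.weights[s] for s in subgoals])[0]

    def rate(self, subgoal, rating):
        """Feed a 1-5 user rating back into the selection weights."""
        self.weights[subgoal] *= 1.0 + (rating - 3) * 0.25

selector = SubgoalSelector(["Brief chat with mom", "Ask mom for advice"])
selector.rate("Ask mom for advice", 5)   # highly rated: weight rises to 1.5
selector.rate("Brief chat with mom", 1)  # poorly rated: weight falls to 0.5
```

Highly rated subgoals are then proposed more often, which is the selectionist point: the consequences of past suggestions shape future ones.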
Mini scenario evaluations, including contingency observations and suggestions, would be presented to the client, who would rate or rank them. In essence, the selectionist algorithms and the proposed suggestions would take on the role of a human therapist. If the program worked properly, it would be a therapist that would continually get better and better. No confirmation bias or other response bias would enter into the therapeutic intervention.
The results from the continual evaluation would also inform the scenarios of the original Constructional Questionnaire and be included in the big data analysis both individually and for all users in order to evolve better assessments and planning. The goal was to build a system that continually improves itself the more it is used and the greater the number of people who used it.
In summary, the app would not only provide highly valued client goals and the initial and subsequent subgoals based upon an analysis of the logs, it would suggest steps or strategies to get there that would have their initial origin in the constructional interview, big data analysis, and log entries. The outcome of each highly ranked (by the user) step or strategy would enter the selectionist (machine learning) big data database and be used to further refine the program.
To accomplish this, we would use machine learning software to produce highly accurate recommendations, big data analysis to surface actionable insights that increase enrollment and engagement of large populations, and personalized plans that incorporate health behaviors into the consequentially important patterns of an individual’s daily life.
If, after a time, progress stalled, or app usage began to drop off, that would occasion a reassessment. Something important to the client may have been missed or benefits of the disturbing pattern not entirely identified. Given the patterns we were targeting and the types of possible interventions required as described in Part 1, we assumed many interventions would require a more topical approach. Often, however, a systemic rather than topical intervention might be required.
In a systemic intervention, a matrix of contingencies is identified that typically does not contain the presenting complaint, yet when addressed the disturbing pattern may drop out. An example would be a person seeking help with weight reduction who leaves their desk at work to eat a candy bar when “stress” and “anxiety” build. It may be determined that leaving the desk and eating the candy provides a needed break from the demands of coworkers and gives the client time to gather his thoughts. Stress implies increasing work requirements with falling reinforcement rates, and anxiety suggests behavioral requirements that the client may be unprepared to meet. The program would scan the daily log for indications that this is happening. In particular, the daily logs would be examined to determine how the client controls work requirements and how the client evaluates and prioritizes them. The app might then recommend a program of assertiveness training and another focusing on organizational skills as part of the subgoals. No action in regard to the candy eating would be taken. Once the new assertive and organizational patterns were established and greater control over the work environment occurred, the candy eating should no longer be required and should drop out. The app would continually scan the logs for evidence that the frequency of candy eating changed. Such a reduction in candy eating frequency would suggest the systemic procedures worked. As the program made recommendations and adjusted to user success (or lack thereof), its recommendations should get better and better. Further, individual user data would be continually aggregated with others to look for procedures that, given certain circumstances, would be the most successful. These data would also be fed back to the big data analysis and assessment algorithms in order to improve them.
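The log-scanning step above can be sketched very simply, assuming weekly-bucketed entries and a keyword match standing in for real pattern detection (the `weekly_frequency` function and the sample log are hypothetical):

```python
from collections import Counter

# Count how often a target pattern appears in each week's
# "Activity, What I Got" entries, to see whether the systemic
# intervention coincides with a drop in the presenting complaint.
def weekly_frequency(entries, keyword):
    """entries: iterable of (week_number, what_i_got) pairs."""
    counts = Counter()
    for week, got in entries:
        if keyword in got.lower():
            counts[week] += 1
    return counts

log = [
    (1, "Left desk, ate a candy bar"),
    (1, "Ate candy after the staff meeting"),
    (2, "Asked coworker to reschedule the request"),  # post-intervention
    (2, "Ate a candy bar"),
    (3, "Prioritized tasks, took a short walk"),
]
freq = weekly_frequency(log, "candy")
dropped = freq[3] < freq[1]  # candy eating fell after assertiveness work
```

Note that no intervention ever targets the candy eating itself; its frequency is only monitored as evidence that the systemic procedures are working.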
We were confident we could, within one or two years, build such a system. And, behavior analytic principles could potentially be the most important contributor to a successful automated clinical intervention system. As I said at the outset of this series, we didn’t do it. But, I firmly believe it could be done. The technology and tools exist. The implications for treatment and service delivery are immense. Quality behavioral interventions can conceivably be delivered to millions, with very few actual therapists involved. And further, as the user database grows and the data are fully integrated and related, the automated therapist may grow to be much more effective than a human therapist. I believe that sooner than many realize, the future of clinical behavior analysis likely lies with the machines.
Goldiamond, I. (1974). Toward a constructional approach to social problems: Ethical and constitutional issues raised by applied behavior analysis. Behaviorism, 2, 1–84. (Reprinted 2002 in Behavior and Social Issues, 11, 108–197.)
Goldiamond, I. (1975). A constructional approach to self control. In A. Schwartz & I. Goldiamond (Eds.), Social casework: A behavioral approach (pp. 67–130). New York: Columbia University Press.
T. V. Joe Layng has over 40 years of experience in the experimental and applied analysis of behavior with a particular focus on the design of teaching/learning environments. In 1999, he co-founded Headsprout. At Headsprout, Joe led the scientific team that developed the technology that forms the basis of the company’s patented Early Reading and Reading Comprehension online reading programs, for which he was the chief architect. Joe earned a Ph.D. in Behavioral Science (biopsychology) at the University of Chicago, where Israel Goldiamond was his advisor. At Chicago, working with pigeons, he investigated animal models of psychopathology, specifically the recurrence of pathological patterns (head-banging) as a function of normal behavioral processes. Joe also has extensive clinical behavior analysis experience with a focus on ambulatory schizophrenia, especially the systemic as well as topical treatment of delusional speech and hallucinatory behavior. Joe is a fellow of the Association for Behavior Analysis International, and Chairman of the Board of Trustees, The Chicago School of Professional Psychology.