Jethro's Dissertation - How Principals who Use AI for Innovation Create Cognitive Equity
Welcome to Transformative Principal, where I help you stop putting out fires and start leading.
I'm your host, Jethro Jones.
You can follow me on Twitter at Jethro Jones.
Okay.
Welcome to Transformative Principal.
Last week I did my handoff episode with Mike, and then the next day I defended my dissertation, and then I recorded it, of course, because I'm a nerd and I thought it'd be great to put out here on my podcast.
So I hope you enjoy this.
Thank you so much for listening to Transformative Principal for all these years.
What you're gonna hear first is Tom Hur, the dissertation committee chair.
He's going to share a little bit about the process and what it's gonna look like, and then I'm gonna do the presentation, and then there's gonna be some questions and answers, some comments, and conversation at the end.
So I just wanna say thank you for listening.
Thank you for being part of Transformative Principal and what a huge accomplishment to finally finish this dissertation.
And I so appreciate everybody who's listened and been a guest on the show who's helped me get to this point.
So I'll turn it over here to Tom.
Thank you so much.
So let me go ahead and officially welcome everybody and kind of give you the, the plan for what's gonna happen today.
As you know, you're here for Jethro's dissertation defense on his EdD.
And we're excited to have all of you here.
What will happen is, in a minute, I will ask Jethro to give you a paragraph about himself, and then the committee members, and then he will go ahead and proceed with a PowerPoint he's prepared.
That does two things.
One is it, it depicts his study, what he is researching, what he's learned, and it also shows scholastically from where he drew his data.
So it's a good academic piece.
After he finishes that, the committee will ask questions or make comments, and you should know right now we may not do a whole lot of that because we've done time with Jethro before.
Sounds like a prison sentence.
We've done time with Jethro before, we met with him last week for a practice run.
So we're all comfortable.
So it may be that we don't have anything new.
In any case, after we are finished, then I'm simply gonna say, does anybody in the audience have any questions or comments?
And we'll throw it open.
Once that all happens, then Jethro will thank everybody who's a guest for attending, and you all will leave.
And then the three of us, Dr. Burrick, Dr. Beer, and I, will meet in a little room and we'll talk about what happens next.
There are a few options.
One would be: who is this guy?
This is terrible.
What is this?
It doesn't work.
That's not gonna happen.
The other one could be: this is perfect, we'll see him in May, it's finished.
That's not gonna happen either.
What's likely to happen is, Jethro, this is really good work.
We know that already, and here's some things we want you to play with and refine and move forward.
But we're really excited about it.
And then we will bring him back in, tell him that, and then we'll be finished.
So, before I go any further and ask the committee to identify themselves, Belinda, Mindy, did I say that correctly?
Anything you'd like to add?
No, I think that was a good intro.
Good?
Yep.
Okay.
So let's go ahead, since these folks don't know us, most of them.
Linda, would you give us a paragraph or two about your background and why you're here?
Sure.
I'm Dr. Linda Burrick.
I am a behavioral psychologist specializing in artificial intelligence and machine learning.
I met Jethro, oh, a year and a half ago, a while ago.
Anyway, we had a lot of common interests.
He had mentioned to me that he was doing his dissertation and asked me to be on his committee as an outside member.
So I'm here doing that.
I got my PhD, I finished it in 94, defended it in 2000, on generative instruction, so a specific type of machine learning.
And I've been working in the industry ever since, well, even before that, right?
So I've got over 40 years' experience working in high tech, and I'm really excited to see Jethro defend, because he is doing really good work and he's adding a significant contribution to the field.
Hi, my name's Mindy Beer.
I'm sorry.
Am I getting feedback?
Okay.
Is it, I don't know if it was on my end or someone else's.
I'm the Theresa M. Fisher, endowed chair for Citizenship Education at the University of Missouri St. Louis, and I'm also the co-director for the Center for Character and Citizenship.
I've been, on this journey with Jethro for three years and also very excited about it.
I got my PhD in technology in education and computer science way back when, in '97.
And boy, it's come a long way, although some of the concepts that I've seen Jethro use are things I also used.
But, this is just something that's been really exciting as we have gone through three years, from not hearing the words AI at all at the beginning of this journey to having AI everywhere in education.
So I'm really excited about, about this.
And Jethro is also the first of our cohort of 13 students to go through the defense.
So, with that, I'll turn it over to Tom.
Sorry, Tom, you were causing the feedback so I muted you briefly.
I'm so glad it wasn't.
Okay.
All right, good.
I'm Tom Hur and despite my youthful appearance, I received my PhD before many of you were born.
I've known Jethro for quite a few years, from before he was a student at UMSL, and I've always been impressed by him, and I'm excited to be on this journey.
I learned a great deal from him, and I think you probably will too.
So, Jethro, give us a paragraph about you and then jump into your presentation please.
Sure thing.
First of all, I wanna thank everybody for taking time out of their day to be here.
That means a lot to me, and I could not have gotten here without other people helping me out, because all this work takes a lot, and I have very truly stood on the shoulders of giants as I've been able to learn so much through my podcast and working with so many different people.
And there's certainly not time enough to share all the things that I have learned and all the people, and thank them all.
But I just want you to know, if you're here and you think that I've impacted you, you've impacted me as well.
So I was a principal for a long time, and then about six years ago I started on my own doing full-time consulting work, training principals.
And a lot of that created the seed for this dissertation today, which is helping people know how to use technology, how to learn, and how to apply their learnings to what they're doing.
So, with that, I'm going to get started.
I'm going to mute everybody, so you all should be muted.
And then Stacy, my wife, who is wonderful, is going to mute anybody who comes off mute during the time that I'm presenting.
At the end, I have room for questions, and you're welcome to come off mute and ask questions at that point.
I'm gonna share today what I've learned about how school principals can use artificial intelligence, not just to save time, but to actually solve hard problems.
And I want to start with a story, which is that on Sunday, August 14th, 2022, I signed up for a new tool called DALL·E, which was an image generation tool from OpenAI.
And the reason I did that is because I've never had much of an artistic skill in my body, and it seemed like a way for me to be able to create something that, I couldn't do on my own.
And my early attempts at that were not very good and left a lot to be desired, but the tool enabled me to do something that I just hadn't been able to do before.
And I thought that was really amazing, that I could unlock something where I could describe something and then make an image related to it.
A few months later, OpenAI unveiled ChatGPT and made it so that large language model technology was accessible to anybody who had the internet.
And immediately marketing slogans just started focusing on saving time.
They would say save hours a week on menial tasks.
And ed tech companies especially told teachers to teach smarter, not harder.
And I thought, is that really the best we can do?
That approach of just doing the same old things faster is not really going to change anything meaningful or address the bigger issues in our schools, period.
So I started this dissertation in practice with a simple question, is this AI tool destined to just make us faster doing the same old things or can it help us actually innovate?
And when ChatGPT launched in 2022, it reached a hundred million users in just two months, which made it the fastest-growing consumer application in history.
But almost immediately, the messaging from ed tech companies was all about things like "co-teacher helps you run your classroom better in fewer hours" and "save time on lesson planning."
And I think that saving time is really valuable and good, but it misses a more powerful aspect of these tools, which is to enable actions and make ideas come to life that have not been possible before.
I've argued in my previous work that being a great principal is really about designing your school for the people that are right there in front of you to meet their needs.
This requires adaptations that go beyond just instructional leadership or efficiency.
It requires principals to be innovative, vision-focused, mission-oriented, and to operate from a moral purpose.
So to me, that tension between efficiency and innovation became the heart of this study.
So where I wanna start is with the literature review.
I reviewed research on AI in education, principal effectiveness, and innovation frameworks.
And what I found is a lot about AI for teachers and students, and almost nothing about AI for principals.
And this matters because the peer-reviewed literature simply hasn't kept pace with the technology: ChatGPT launched in late 2022, and academic publishing cycles mean that we're only now seeing rigorous research emerge.
Mariano and Neuro Leaf conducted a literature review of 66 articles published between 2020 and 2024.
Note that ChatGPT was released halfway through that window; they focused on AI's role in transforming learning environments, and none of the sources they used related directly to principal leadership.
So much of the literature has focused on student and teacher use of AI, and very little is focused on principal use.
Furthermore, there's a dearth of peer-reviewed literature on principal leadership and AI, and it's especially lagging for leadership applications because everybody is focused on how teachers and students use it.
That being said, there are some extrapolations we can make.
Research consistently shows that principals matter.
Grissom and colleagues conducted a comprehensive synthesis for the Wallace Foundation, reviewing two decades of research on principal effectiveness, and their finding is clear.
Principals are second only to classroom instruction in their impact on student learning.
They break down a large body of quantitative evidence into four mutually reinforcing domains of practice: instructionally focused interactions with teachers, building a productive school climate, facilitating collaboration and professional learning communities, and managing personnel and resources.
But here's a gap I kept running into.
None of the AI literature addresses how principals can use AI across these domains.
Specifically, as I wrote in my book, SchoolX, the role of the school principal may be one of the most unique positions in any organization.
There aren't many other roles that require a leader to interface with so many stakeholders with such drastic and diverse expectations for success in different areas.
Bixler and SBIs published a paper in 2025 trying to build a conceptual model for principals using AI.
And it's one of the only peer-reviewed pieces specifically about principals and AI.
But even they had to rely mostly on teacher-focused studies.
They argued that principals have the potential to lead AI to maintain and enhance instructional effectiveness in schools, but they're missing a key piece: how AI might be used to approach the role of the principal differently, through innovation.
And there's a second gap even when we do talk about AI in education.
Most of the research out there is talking about efficiency, time savings, and automation.
We rarely talk about innovation, about using AI to do things that weren't possible before.
And that's why I wanted to explore what actually happened with participants in my study.
So what is an innovation?
The Clayton Christensen Institute identifies four types of innovation: sustaining innovation, disruptive innovation, hybrid innovation, and efficiency innovation.
Pretty much whenever somebody's talking about innovation, they're talking about Clayton Christensen's definition of disruptive innovation.
And these distinctions are typically found in business, but they do relate to education as well.
Christensen's work on disruptive innovation shows that real disruption often starts at the low end, which is simple, cheap and accessible, and then moves up market.
And that's exactly what ChatGPT did.
Christensen himself applied disruptive innovation to higher education, asking, quote, "if there is a novel technology or business model that allows entrants in higher education to follow a disruptive path.
That answer seems to be yes.
And the enabling innovation is online learning," end quote.
Now, this was back many years ago, when he first made that statement.
And, you know, online learning has certainly changed the educational landscape, considering that this whole program that I did was all online, and Clayton Christensen didn't even live to see this day and what it has become.
But there's definitely some innovation happening there.
Arizona State University's Mary Lou Fulton Teachers College defines principled innovation as the ability to imagine new concepts, catalyze new ideas, and form new solutions, guided by principles that create positive change for humanity.
And that framing is one that I really love because it helps us understand that innovation without ethics is just disruption for its own sake.
And I'm not about disruption for its own sake.
I'm about improving things that create positive change for humanity.
James Besson's research on automation shows that technology typically doesn't eliminate jobs, but it transforms them.
And that's the kind of innovation that I'm interested in for principals.
Let me talk a little bit more about Besson.
This is one of my favorite examples from the research, and I think it's directly applicable to education.
There's a lot of hand-wringing about AI taking over jobs, and I have even said that if a computer can teach students how to read, for example, better than a human can, why wouldn't we turn that over to it?
Well, the obvious answer is that humans provide something that machines don't.
Besson talked about the banking industry and the innovation of ATMs, which decreased the need for tellers per branch but increased their reach as banks opened more branches.
He says, quote, "thanks to the ATM, the number of tellers required to operate a branch office in the average urban market fell from 20 to 13 between 1988 and 2004," end quote.
But that cost reduction allowed banks to open more branches, and the number of banks in urban areas increased by 43%.
The nature of their work changed.
Tellers went from being cash dispensers, a task the ATM could do to being part of a relationship banking team who could sell financial services.
We can and should expect the role of a teacher and principal to change in a similar way, just as a principal used to be viewed as a disciplinarian manager, but is now seen as an instructional leader.
Their role will evolve again.
The question is whether we're intentional about that change and whether or not it brings about positive change for humanity.
Here's the core tension of this dissertation.
In June 2025, researchers published a paper called "Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task."
They studied 54 participants in three groups who authored essays: one group using just their brains, one using brains plus search engines, and one relying on an LLM.
The group who relied on the LLM showed decreased cognitive load, as shown in their alpha band connectivity.
The researchers define cognitive debt as a condition in which repeated reliance on external systems like LLMs replaces the effortful cognitive processes required for independent thinking.
But I saw something different.
I created a new term called cognitive equity, which is the opposite of that debt, not the other kind of equity we often talk of.
To clarify with a different perspective: much like using assistive communication devices for a nonverbal person expands their ability to communicate, cognitive equity is the situation where someone who is burdened by a cognitive load offloads that to an AI that will then help them perform tasks they couldn't do otherwise.
Here's another way to think of it.
If you are reliant upon ai, when you don't need it, then it creates debt.
But when you rely on it, when you do need it, then it can create equity.
So what is this cognitive equity?
In a paper I wrote last year, I defined it as using AI as a tool that expands a user's cognitive capacity to address complex problems and lead adaptive change.
Let me illustrate with an example.
If someone is capable of a task and they don't use the muscles required for the task, it can be damaging.
That's cognitive debt.
But if someone doesn't have the skills necessary for a task and they use the tools to bring their skills up to that level, then the use of that tool is meaningful and worthwhile.
That's cognitive equity.
AI can serve as what Liz Wiseman calls a multiplier, giving people skills they did not previously have and enabling them to create solutions to problems they never thought possible.
In her framework, multiplier leaders, quote, "liberate people from the oppressive forces," end quote, in bureaucratic settings, and help people be their best.
This connects to the framing of assistive versus replacement technology.
When we use AI for efficiency, we're often replacing human cognition, letting the machine think for us.
When we use AI for innovation, we are assisting human cognition, using the machine to extend what we can do.
Cognitive equity is explicitly about assistance, not about replacement.
So let's get on the same page about what AI actually is.
Artificial intelligence, which right now is mostly a marketing term, is broadly defined as the development of computer systems that can perform tasks, which would typically require human intelligence.
While it seems that AI is new, it actually originated in the sixties with the development of the computer.
Machine learning is a subfield of AI focusing on pattern recognition in data.
Deep learning is a subset of machine learning that utilizes artificial neural networks, loosely inspired by the way the human brain processes data.
And large language models are a specific area within deep learning that focuses on text specifically.
These are powerful machine learning models that use neural networks to model complex relationships at a massive scale.
Here's the key.
LLMs essentially predict the next best word in a text string.
Think of this process as putting together a very large all-white puzzle.
What the LLM does is predict what the next best piece is going to look like.
Understanding that AI in its current form is a prediction machine is crucial to understanding what it can and cannot do.
It may appear to be magical, but it is simply trained on a massive amount of data and can produce better results faster than many other tools.
But it is still solidly within the box of computer programming.
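To make that next-word idea concrete, here is a deliberately tiny sketch using simple bigram counts over a made-up sentence. A real LLM replaces the counting with a neural network trained on massive amounts of text, but the core task, predicting the most likely next word, is the same.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): predict the next word by counting
# which word most often follows the current one in a tiny training text.
corpus = ("the principal leads the school and "
          "the principal supports the teachers").split()

follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1  # count each observed word pair

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "principal" follows "the" most often here
```

A real model also assigns probabilities across its entire vocabulary rather than picking a single counted winner, which is why its output can vary from one run to the next.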
This diagram is adapted from Bower, a data scientist who wrote a clear explanation of how these technologies relate to each other.
I definitely suggest you check it out if you are interested in learning more; going into more detail is beyond the scope of this dissertation in practice, which I will probably say about a hundred more times throughout this conversation.
In 2024, Michael Fullan and colleagues published a paper, which is again one of the few peer-reviewed pieces specifically about AI and school leadership.
Their assessment is very direct.
AI has crossed a threshold, and it's not a novelty anymore.
It's not something we can wait and see about.
They write that AI has, quote, "transitioned from a mere toy tool to a disruptive innovation," end quote, and that language matters.
A toy is something you play with; a disruptive innovation is something that changes industries.
They also emphasize that quote, school leaders need to create a long-term vision for integrating this technology into their schools in a careful but principled way.
Despite the allure and promise of this brave new gen AI world, school leaders must always put the learning needs of children and young people first.
And I quoted that because that is an important piece that we have to remember, that educating our students is the priority in our jobs.
The question for principals is not whether to engage with AI, but how: will we use that disruption for efficiency, doing old things faster, or for innovation, doing new things that matter?
I would say that we should not just do the wrong things, the old things, faster just because we've always done them.
Education has a pattern, and I don't think that it's a good one.
We adopt new technology, use it to do the old things faster, and then wonder why nothing changes.
AI startup companies quickly seized on the monetary potential in promising to make life easier for principals and teachers.
While saving time is great and necessary, it misses the larger point of these tools, which is that they can enable actions and enliven ideas that have never before been possible.
It can be a disruptive technology in schools, but many school leaders are treating it solely as an efficiency innovation.
This prevents leaders from embracing the change that can come when educators are able to create something that they never thought they could before.
AI gives us a chance to break that pattern if we're intentional about it.
The innovation imperative is this.
Don't just ask, how can AI make us faster?
Ask what can AI help us do that we couldn't do before?
Everett Rogers created the diffusion of innovation theory in the early seventies, if I remember my dates correctly.
The modern understanding shows a bell curve like this one here, with different groups occupying different roles: innovators, early adopters, early majority, late majority, laggards, and non-adopters.
There is a chasm between the early adopters and early majority, and a similar chasm between the late majority and the laggards.
These chasms are particularly important because they constitute opportunities for disruptive innovation.
We can assume that about 2.5% of the school leader population was using AI before ChatGPT was released.
These were the innovators, they were the people who jumped on the beta version as soon as it came out.
And as an innovator myself, I saw enormous potential in ChatGPT and other AI tools, and I had actually been using them before they were broadly available to the public.
The first book I wrote, in 2012, was called Paperless Principal, about using these types of tools to automate the work that I was doing as a principal so I could have more time to focus on the needs of the students I was working with.
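Where does that 2.5% figure come from? Rogers defined his adopter categories by slicing a normal curve at standard-deviation boundaries, with innovators being the tail more than two standard deviations ahead of the mean. Here is a quick sketch of that calculation (the exact normal tail is about 2.3%, which Rogers rounded to 2.5%):

```python
import math

def normal_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Rogers' adopter categories as slices of a standard normal curve:
innovators     = 1 - normal_cdf(2)              # beyond +2 sd, ~2.3%
early_adopters = normal_cdf(2) - normal_cdf(1)  # +1 to +2 sd, ~13.6%
early_majority = normal_cdf(1) - normal_cdf(0)  # mean to +1 sd, ~34.1%

print(f"innovators ~ {innovators:.1%}")
```

The early and late majorities mirror each other around the mean, which is what gives the curve its familiar bell shape.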
This orange part of the diffusion of innovation curve is where I wanted to focus, and I added a design thinking layer for the innovators and early adopters.
The goal of my intervention was to help principals move from early awareness of AI to actually using it to solve real problems in their schools: not just understanding, but application.
As the 11 participants attending my presentation were all innovators in Wyoming, I wanted them using AI for what I would call the right things: being innovative, and not just saving time.
These are the three questions that guided my study and they're intentionally sequenced.
The first question asks, as a result of the AI for innovation training, do principals report understanding how AI works better than they did before?
The second question asks, as a result of the training, do Principals report being able to apply AI strategies to use AI for innovation in their schools?
And the third question asks, what supports are needed and what barriers arise when leaders employ AI as a change agent to create innovative solutions?
The first two are really about learning, is the message hitting?
And the third is about reality.
What actually gets in the way and what actually helps?
So let me describe the intervention that I did, which was a full-day workshop for principals in Wyoming, in connection with their principal conference for the association that year.
So I officially started teaching how to use automations and machine learning in a school setting in 2007, as a teacher, with what I called Techno Thursdays, for staff in my building who were there to learn how to use technology better.
Then in 2012 I wrote my first book, Paperless Principal, and I learned something very important back then.
Educators love to collect tools and shortcuts, thinking they'll use them someday.
I once ran an experiment at a principals' conference and offered a session called 50 Tools in 50 Minutes for Busy Administrators.
It was my best-attended session that year, but it had the smallest amount of impact, because people just collected tools like they were badges and didn't do anything with them.
In 2018 as a principal, I started a new strategy where I focused on giving people time to think deeply about a problem before giving them any tools to solve it.
This helped tremendously.
So I designed this AI for innovation presentation around asking attendees to first identify a problem before using AI to generate innovative solutions.
This approach is key, because when presentations are tool-focused, you use that tool; but when the presentation is problem- or solution-focused, you use whatever tool makes sense in the moment.
So I created this visual to illustrate what that looks like.
The key is finding the sweet spot: problems that are hard enough to matter but tractable enough to make progress on in a day, or whatever the length of the training is.
As complexity and meaning rises, a person becomes hungrier for a solution and is willing to do more to achieve it.
But if the problem is too big, like how do we end homelessness for our students?
People quickly become disheartened because the problem really is huge and multifaceted, and they feel like they can't solve it on their own.
On the other hand, solutions should be simple.
There's a sweet spot where people are willing to take a second and third stab at something, and this is where AI solutions can be both good enough and simple enough that principals could actually implement them for meaningful change.
My guidance was that the chosen problem could not be solved in 30 seconds with AI, as that would indicate it wasn't a wicked enough problem, but it also couldn't require multiple stakeholders for any workable solution, as that would be too wicked.
The way to find the sweet spot is to think about it, talk about it, think some more, and talk some more.
This diagram is my original work, based on years of facilitating this kind of design work.
Ten principals and one teacher gave up a Saturday in Wyoming to learn about AI.
The participants self-selected to attend this AI for Innovation training, conducted in partnership with the Wyoming Association of Secondary School Principals at their annual event on November 1st, 2025, in Casper, Wyoming.
That they gave up personal time matters.
These were people already leaning into change.
They're innovators in Rogers' framework, and their willingness to give up personal time and lean into a topic suggests they were more open to change and experimentation than the average principal, which of course is a limitation on transferability, but also shows what's possible with motivated participants.
They were also a perfect group of people to see AI as a disruptive innovation rather than just an efficiency innovation, and I really wanted to help them see how to use it in this new way that was not just about saving time.
I used a retrospective pretest design to assess their growth.
In a traditional pretest administered before an experience, people often overestimate their knowledge.
They don't know what they don't know yet.
Also, learning something new changes how they perceive their level of understanding.
This is called a response shift bias.
A retrospective pretest solves both of these problems by having participants rate both before and after at the same time after the training, and they're using the same frame of reference for both ratings.
The retrospective pretest has been shown, across many studies, to be an effective way to measure self-reported learning in educational environments.
There are risks, however: demand characteristics, where participants want to please the facilitator; implicit theories of change; and metamemory-related biases like hindsight bias.
I mitigated these as best I could by keeping responses anonymous, establishing a culture of non-judgment, and keeping the time period limited to just a single day, so that they couldn't think they knew things better or worse before, as can happen when the "before" was days or weeks in the past.
For qualitative coding, I used AI-assisted coding in ATLAS.ti as a support for my own analysis.
I also engaged a peer researcher, a doctoral candidate with qualitative research experience, to check and refine the codes.
She reviewed all the codes and categories and confirmed alignment or suggested refinements, which I ended up with and will present to you here today.
So I wanna go through some of the findings.
Excuse me.
There we go.
In the retrospective pretest, I asked participants to indicate their understanding of how they could use AI generally in schools and for innovation.
For understanding how the tools work, the average before rating was 3.2, which is about average on a five-point scale, showing most principals believed they were moderately proficient.
After the training, it was 4.1, above average.
That's a gain of 0.9 points.
For the question of using AI effectively in school settings, they went from a 3.1 to a 3.9, a gain of 0.8 points, and for using AI to solve problems and innovate, which was the heart of this study, they went from a 2.9 to a 4.0, the largest gain at 1.1 points.
They started below average on innovation and ended above average.
Even those who rated themselves highly initially still showed growth.
In the end, there was not one participant in any of the responses for any of the questions that did not self-report some growth.
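The arithmetic behind those gains is simple, but worth making explicit. This sketch just recomputes the three mean gains from the reported before and after averages (the means above, not the raw participant data):

```python
# Retrospective-pretest means reported in the findings: (before, after)
ratings = {
    "understanding how AI tools work": (3.2, 4.1),
    "using AI effectively in schools": (3.1, 3.9),
    "using AI to solve problems and innovate": (2.9, 4.0),
}

# Gain is simply the after mean minus the before mean for each item.
for item, (before, after) in ratings.items():
    gain = round(after - before, 1)
    print(f"{item}: {before} -> {after} (gain {gain})")
```

The innovation item shows the largest gain, 1.1 points, even though it had the lowest starting point.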
The second finding was that they could apply it.
They weren't just learning theory; they were building.
My question was, can principals apply this learning?
Yes, they could apply it.
Let me share what the principals actually created.
One principal wanted to help high school students reflect on their interests.
They created a one-month curiosity tracker.
Another, to address students' emotional and mental state, proposed an app that prompts students to complete check-ins about their mental health.
Multiple participants wanted to improve digital citizenship.
They designed games where students could progress through levels to learn these concepts.
One participant captured it perfectly.
You used to have to depend on whatever a vendor can sell you in an app, and you have to live with compromises.
Well, if you know what you want, you can build it yourself.
The tools that we used were varied, and because we were focused on a problem, they were able to find the tool that matched the solution that they were trying to build.
Whereas if I went into this training and said, we're gonna learn how to use ChatGPT, for example, then they would learn how to use ChatGPT specifically, and then have to figure out the problems.
This was an intentional design decision to help them use whatever tools would be necessary.
So we used Replit, Zapier, Google Forms, Claude Code, Manus, and ChatGPT, and probably a couple others that I don't even know about, because they were able to find solutions on their own as well.
The third finding was about barriers.
When I asked what would get in the way of continued AI use, three themes dominated. I call them the three Ts: time, training, and treasure.
Time was the biggest barrier, which I'm sure is not a surprise to anybody who works with educators.
Respondents noted lack of time for learning, implementing, and practicing with AI.
In my experience, this is the number one reason people are excited about using AI for efficiency because they get to save such a scarce resource, which I totally understand.
Educators are caught in a catch-22, though: their daily work consists of doing things they don't find purposeful, so they want AI to help reduce that time.
One participant wrote that thinking more holistically about underlying issues rather than surface level problems was empowering.
Others said that in their day jobs, they were just trying to put out fires every day.
Training was the second theme: one workshop isn't enough.
And treasure, meaning resources, subscriptions, and infrastructure, was third.
If principals truly use AI in innovative ways, they must start by identifying and abandoning unimportant tasks in favor of meaningful efforts.
The fourth finding is about supports needed.
The supports participants said they needed are almost the mirror image of their barriers.
What they needed most was professional development, training, or hands on time.
But here's what strikes me.
There's a perceived need for someone to teach them.
Schools have created an attitude of dependency on teachers to teach us things.
One participant, Beth, said, I'm good creatively.
I'm not good technologically, so let's sit together and we're better.
I see a problem with this.
Principals mistakenly believe the only way to learn is by an expert teaching them.
I was brought in from Washington to Wyoming as the expert, but I didn't gain my knowledge from experts myself.
I learned by experimenting and trying new things myself.
AI has been trained on the content of the internet where there are many tutorials and all kinds of things.
AI can eliminate the need for a teacher to be the fountain of knowledge as we currently see them.
Instead, we should work towards systems that make experts a compass, constantly reminding learners of their North Star and helping them find their way back to it.
This was my role in the training.
Whenever they wanted to just be more efficient, I had to remind them that our purpose was to create cognitive equity for ourselves, which means doing things that were not possible previously.
One principal initially wanted to solve student misuse of technology by composing an email about why it was problematic.
Sure, AI can do that, but would that actually solve the problem?
Of course not.
Nobody even reads emails.
With coaching, he ended up creating a game for students to learn about digital citizenship in an interesting way.
In the old days, he wouldn't have had the time, expertise, money or knowledge to make any kind of app, but now he can do something that was not possible before.
Here's something else that I observed that the data seems to have confirmed.
Learning to use AI for complex tasks involves predictable fluctuations in demeanor.
I call it the affective roller coaster.
AI enthusiasts began with guarded optimism as they encountered compelling demonstrations.
Their demeanor rose; excitement and curiosity showed on their faces.
During the ugly portion of my presentation, when limitations and ethical concerns came up, they dipped as well.
Participants said things like, we should just ban it if this is what happens.
AI skeptics started at a more negative place.
Early demonstrations elicited concerns about job displacement and the black-box nature of AI.
Their line remained lower, but followed a similar oscillating pattern.
Again, this was by design.
The lowest point for both groups occurred around the first failed solution, and although that was a different time for each person, pretty much everybody got there when they recognized that their initial prompts didn't yield what they had hoped.
This moment of productive struggle is central to the training design.
It forced participants to refine their problem framing and engage more thoughtfully with the solutions as they iterated and eventually produced more useful output.
The output was still often imperfect, but noticeably better, and their trajectory turned sharply upward.
One participant joked about designing a beautiful black screen when their prototype wasn't working.
Later, they said, It's incredible. It really is.
Every participant hit a wall, and every participant pushed through.
Another thing that I asked about was the culture impacts.
When I asked how innovation work would affect school culture, the most common responses were greater collaboration and buy-in from teachers.
These are essential for healthy functioning schools.
Grissom's research that I mentioned earlier identifies the importance of principals building a productive school climate and facilitating collaboration for student outcomes.
When school professionals have time to collaborate, because they're not wasting time on superfluous things, they can foster collaboration and teacher buy-in through regular professional development, collaborative planning, and peer observation.
Participants also mentioned that innovation could help them improve.
Success breeds success, innovation breeds innovation, and principals who model innovation teach others how to do the things they're asking them to do.
Principals who model behaviors they expect get the same behaviors from their teachers.
As one participant said, if you know what you want, you can build it yourself.
Imagine hearing that from your principal and thinking, I can build it myself.
That's an empowering message to share with your staff.
So there's some implications from this research.
The first implication is that we need to fundamentally reframe how we talk about AI in schools.
There are two aspects to repositioning AI from a time saving tool to an innovation partner.
First, reframing AI as a problem solving tool shifts the focus to address the problem first.
This involves employing design thinking and recognizing that teaching a specific tool may not always be beneficial.
When focusing on tools, individuals learn how to use the tools; when focusing on problems, they learn how to solve problems.
Second, presenting AI through cognitive equity and design thinking allows principals to perceive alternative methods of operation and underscores using AI for human-centered purposes, enabling more time for individual interactions, relationship building, and providing feedback.
AI should automate processes to allow more time for interpersonal engagements rather than merely increasing productivity.
I love the term innovation partner because it captures this shift.
A partner works alongside you, bringing capabilities you don't have, and helps you do the things you couldn't do alone.
Implication two is about time.
Principals need real time to learn and experiment, not 45-minute sessions crammed in between other obligations.
Cal Newport defines deep work as professional activities performed in a state of distraction-free concentration that push your cognitive capabilities to their limit.
These efforts create new value, improve your skill, and are hard to replicate.
That's from his book Deep Work.
When principals say they need time, this is what they really need.
The benefit of this six hour workshop was that principals had the time to use the tools away from their campus and their everyday distractions.
During the afternoon, they had most of the time just to work on their problem.
They talked about it, took breaks, came back to it, worked with AI tools, took another break and came back to it again.
This can be described as working on the school rather than merely working in it.
Lecture only or tool focused workshops are unlikely to produce substantial changes because participants lack the time to practice during these sessions.
The experience of deep work and time to do it was the most valuable part for principals in this workshop.
The third implication comes directly from observing that affective roller coaster.
We need to normalize this journey.
Frustration isn't a failure.
It's a sign that you're doing something that's hard enough to matter.
The concept of productive struggle is often discussed in education, but seldom applied to adult learning.
This workshop provided such a struggle resulting in participants feeling more adept at using AI for innovation.
This has practical implications for how we design professional development.
We should tell participants upfront, you will get frustrated.
Your first prototype will probably break.
That's normal.
That's how this works.
And then we should build in support for those low moments.
Peer collaboration, facilitator check-ins, permission to take breaks, and open-ended work time with a deadline, even for a knowingly unfinished product.
If we don't prepare people for the struggle, they'll interpret their frustration as evidence that they're not cut out for this.
But if we frame it as an expected part of the process, they're much more likely to push through to the breakthrough they're seeking.
Let me be honest about limitations, because the study has several of them.
The sample was small, just 10 responding participants, 11 total.
And they represent a self-selected group of innovators willing to attend a six-hour workshop on a Saturday.
Their willingness to give up personal time suggests they were open to change more than the average principal, which also limits transferability.
I, Jethro Jones, conducted the full-day presentation with my own biases embedded in the design.
I have a clear bias toward innovation and just creating stuff instead of efficiency because I believe certain practices shouldn't even be done in schools, even if they've been done for years.
I also emphasize what I view as the hypocrisy of AI for me, but not for thee: when teachers use AI but prohibit students from using it.
There's no control group in this study.
The measures were self-reported and I didn't directly observe subsequent school level changes.
All of that was beyond the scope.
And finally, the study did not address the environmental, political, or societal costs of AI infrastructure or financial costs of ongoing subscriptions, as well as many other things, which are also important considerations, but are beyond the scope of this project.
These findings are exploratory and not definitive, and to me, they're proof of a concept and not a final answer.
Despite those limitations, though, I contend this work matters.
The purpose was not to produce generalizable claims, but to closely examine how a small group of innovator principals experienced a deliberately designed professional learning environment that positioned AI as a resource for innovation rather than mere efficiency.
The combination of quantitative shifts in self-reported understanding, qualitative themes about supports and barriers, and a rich description of the training offers a grounded, practice-oriented account of what it could look like when principals begin to use AI to tackle wicked problems.
For a dissertation in practice, that situated, experience-near knowledge is precisely the contribution.
It provides a concrete, replicable model, surfaces real constraints and possibilities, and points the way for future studies that can extend, refine, and test these ideas with larger samples over longer periods.
As I've done this work for the past eight years in my own consulting, I've seen how much more valuable this type of approach is for any kind of problem based training that is happening in schools.
Based on this, I do have five recommendations.
First, we should design professional development that is sustained, not one-shot workshops.
Single PD sessions are insufficient and follow-up is crucial.
A subsequent session with this group is scheduled for one year after the initial training.
Second, focus professional development on authentic problems, not on AI tools in the abstract.
When focusing on problems, principals learn how to solve their problems.
This was evident in participants' projects, which included various tools, some of which were not initially suggested by me.
That's a lot more scary for the presenter, by the way, when you go in without knowing the exact answer. I didn't include that in here, but that's a good point to make that I keep thinking of.
Third, we should integrate AI leadership into principal preparation programs.
Now, our university programs are probably behind.
We are preparing principals for a world that's changing faster than our curriculum is.
Fourth, provide protected time for experimentation.
Deep work requires protected space.
And fifth, again, this is typically out of the scope of professional development, but we should conduct cost-benefit analyses that include environmental and other implications.
AI isn't free, not financially, and not in many other ways.
Future research should also explore longer timeframes and diverse contexts.
Conducting similar studies with larger and more diverse samples across various stages and types of schools would be advantageous.
A longitudinal follow-up within six to 12 months would determine which practices persist and whether the workshop's impact endured beyond the day of training.
Although this dissertation did not include such follow-up, I plan to engage with these principals at a future conference.
We would also benefit from systematic measurement of affective responses toward AI, including attitudes, anxiety, and confidence, before, during, and after PD sessions.
And we need to compare different professional development designs to reveal which methods yield more durable changes.
However, one of my goals for this training with innovators was to help them see how they could use AI differently.
And they would hopefully take some of those lessons back to their schools and communities and suggest using AI for innovation instead of just efficiency when the conversation comes up.
The purpose of this dissertation and practice is to change practice.
For me, that practice is changing how professional development works for principals.
There's a plethora of research that says PD should be job embedded and focused on the real challenges principals face.
If I had to distill this entire dissertation in practice to two sentences, here's what I'd say:
Focus on problems, not tools, and give time for deep work.
The tool is secondary.
The problem is primary.
Start with what you're trying to solve, and then find the tool that helps.
Real learning, especially learning that involves creativity and problem solving, requires sustained attention, and we can't shortcut this.
AI won't do that for us either.
I believe two critical issues are often overlooked in professional development, focusing on solving problems rather than merely teaching tools and providing sufficient time during workshops for human-centered approaches to real issues.
AI will keep evolving faster than we can track.
In the time since I started this project, OpenAI has released ChatGPT 5, 5.1, and 5.2, and after my practice presentation last week, they released ChatGPT 5.3.
The pace of change is remarkable.
Every day I wrote this, there is more to add.
By the time you read this dissertation, when it is published, it will be obsolete, and that is part of what we're dealing with.
But the core principle holds: individuals are leveraging AI primarily for efficiency rather than innovation, focusing on expediting tasks without questioning the necessity of those tasks.
Simply asking principals to reflect on their practice and consider whether they should even take a certain action can catalyze innovation.
Our commitment should remain focused on using AI to enhance human capacity rather than diminish it.
Principals play a vital role in stewarding this objective within their schools.
I'm betting on innovation, and I hope you will too.
I wanna thank my committee, Tom, Mindy, and Linda for being excellent supporters and guiding me through this.
Tom, you've been a mentor for years and I'm grateful you invited me into this program.
Mindy, your methodological guidance was invaluable, and your gargantuan behind-the-scenes efforts to help me and my cohort through this process have been amazing.
And Linda, you pushed my thinking in ways that I didn't expect and helped me to see the technology with clearer eyes.
I also want to thank the Wyoming principals who gave up their Saturday to participate in this study.
And thank you to my wife, Stacy, and my kids.
I know this has been tough and I'm so grateful to you.
Anything good that has happened in my life has come through my savior, Jesus Christ, and I don't do anything without consulting him.
And he was very clear this doctoral program would be worthwhile even when it didn't make a lot of sense.
These are the key references I've cited in this presentation.
The full reference list is in my dissertation document and on my website, DrJethro.com.
I've presented a proof of concept for innovation-focused AI professional development for principals.
I've shared what worked, what got in the way and what we might do next.
Not a single person left my presentation with a fully completed solution, and none of them were disappointed by that.
On the contrary, they were eager to continue working on it and try new things that they had not yet considered.
During the showcase portion, each person said something similar to, I know what else I need to do to finish this.
That's exactly where I want people to be.
Now, I welcome your questions, challenges, and feedback.
Thank you.
Can you close your PowerPoint so we can see everybody?
I sure can.
And then let me start off.
That was wonderful.
Yeah, really nice.
So, Linda, Mindy, do you have any questions or comments before I open it to the group?
Sure, I'll go ahead and kick us off.
Jethro, really well done.
A significant improvement from our practice session, with a much more academically rigorous focus, which is what we asked of you, and I appreciate that.
I will maintain what I said at the beginning, what you've produced here is a significant contribution to the field because yeah, innovation is definitely a way we could be using this technology and more people need to think of it that way.
Need to approach from this perspective of what's the problem you're trying to solve and for whom, as opposed to what can this tool do.
Because the tech companies that I've worked for, they build the tech, right?
And it's just like, because they can.
But the real issue for humanity is how are we gonna use this tool for the betterment of humanity?
Right.
And I love that you included the distinction between replacement technology versus assistive technology, because that is totally my job.
So I wanted to thank you for including that.
Yes, you have inspired me greatly in that arena.
Thank you.
Thank you, Linda.
Mindy,
I just thought it was a wonderful presentation, and gosh, I wanna have a whole conversation with you about so many of these things.
But the key, I think, is that what you discovered and what you found and the implications go beyond the technology.
so I really do believe that even though AI is progressing faster than any of us can, certainly than we could publish it, that your findings and your insights are, are really valuable to education.
And the way that you have boiled it down to your two major insights really contributes to the field.
And I'd like to know, just knowing how good you are at dissemination, how you are planning on beating that publication bias that we have that takes so long.
What are you gonna do with this next?
Well, this recording is gonna go out on my podcast, Transformative Principal, probably this Sunday.
And so that will be out.
And then my dissertation will be published, as soon as I get it back from my editor, on my website for people to be able to see and access as soon as possible,
because, you know, that's what I've been doing the whole entire program: all of my assignments, I publish them on the web as I turn them in.
So, I believe in that stuff, getting out there and not being stuck behind a long process.
And, and so I'm gonna take a bias towards that as much as possible.
Well, Jethro, I'd like to invite you to present to my class of 22 two weeks from now.
So,
got it.
great.
Go ahead Tom.
I'm sorry, Mindy, go ahead.
No, I was just saying go ahead, Tom.
So, so let me, make one comment before I open it to the, to the crowd.
And, I'm gonna take a different tack.
I too am sitting here just delighted.
Really, really pleased.
But what I wanna talk about is Jethro's grit.
I write a lot about social emotional learning, and I gotta tell you, I've been with him through this journey and there were lots of times when he could have said, Hey, it's not worth it.
I don't need it.
But he didn't do that.
He plugged away, he worked.
And so what you see today, which is a thoughtful, insightful, really well done presentation, it's not only due to his intellect, it's due to the fact that he had grit.
He hung in and he pushed forward.
So, good for you, Jethro.
So, anybody else, feel free to, I guess, raise your hand.
We, we would welcome comments or questions from anybody.
Go ahead, Gina.
First, thank you for, for, for having me.
This was really a treat.
This is the first UMSL dissertation that I've seen.
and I, I, I too was very, very impressed.
I don't know much at all about AI, so you really have educated me on that.
as a, a retired high school principal, I would've loved to have had the innovative bug to use the tools to solve problems, but what I really was impressed with in, in the dissertation was how you kind of uncovered the, the trap of professional development for principals.
And I think my question, or what I'd like to have maybe a follow-up conversation with you on, is how do we change the mindset from productive struggle is great for kids, but it's not okay for me as the adult leader of the building.
You know, how do we make professional development that growth opportunity that you described in your dissertation, and really change that paradigm from sit-and-get to that exploration mode, putting adult leaders back into that learning seat?
yeah.
My 700 episodes on Transformative Principal and the masterminds that I've run for the last eight years have been focused solely on that.
And when we take ego out of the equation and give people opportunities to be vulnerable, then, then that is what happens.
And the point that I mentioned in the dissertation is that when people are tasked with problems first, then they have the opportunity to do that.
And that really is key that you have to be open about the problems that people are facing.
And if you are, then people can drop those barriers and have that kind of a professional development experience.
Great.
Thank you.
I agree with you a hundred percent.
I just have never experienced a professional development like that, you know, in my career, which is unfortunate.
Yeah, that is unfortunate.
And that is what a lot of principals have.
And that is why I created my mastermind approach to do just that and give people opportunities for that.
So, I've seen that, all too clearly in my own professional development provided to me.
And the reason that I started my podcast in 2013 was because I was an assistant principal and didn't think that I was getting the kind of professional development I wanted.
And so I decided to make my own.
And I've heard too many principals say that they haven't had that kind of PD, and that really is a tragedy.
So I, I definitely want to do my part to help make that happen.
it is possible.
Thank you.
There are questions, comments.
Go ahead Trish.
Thank you.
I just wanted to say, so I've never been a principal, but I was a teacher for 23 years and I've taught all the grades from preschool all the way through middle school.
I've had some wonderful principals, but I will tell you in the same school when a principal changes and someone else comes in that is not as strong or that is stronger, I mean the whole school changes.
I mean, I think people don't understand just how important the principal is on campus and sometimes it's this us, them mentality instead of, we're all on the same team here.
This is our, this is our school, this is our community, this is our safe space for students.
And I feel like this is what you're creating, kind of creating this vision for people that this is possible.
And, and I really appreciate that.
And a lot of these concepts that you've put into your training and that I've listened to over a while, you know, and I've been a guest on your podcast, and I realized more and more as you were going through a lot of the things.
in the, in your presentation, I was like, yeah, I'm doing this.
Like in my presentation, I actually have them put in the prompt for helping kids define the purpose for their learning.
I spent hours putting together a really solid prompt, and we usually use Gemini, sometimes ChatGPT, but we put in the prompt and then it comes up with 10 to 15 ways that a student can actually use whatever they're gonna be learning about in their lives.
And they get to choose either something on the list or something they come up with themselves.
And I thought, you know, I got these kinds of concepts, this idea to do this from you.
'cause I was just having kids brainstorm and they were, you know, getting confused.
They weren't really sure.
They didn't have enough background information.
And I thought, you know, I've been worried about ai, like writing papers for kids and that kind of thing.
'cause that's what was in the news.
But what I really got from you was, wait a minute, turn that on its head.
How about if you have AI come up with, here's a whole bunch of ways.
I mean, that was you.
So I just wanted to acknowledge what you're doing, because I don't know if you've listened to his episodes on Transformative Principal, but they're really,
really helpful, and now I am consulting with principals and working with teachers on this project thing that I'm doing, and it's just been invaluable.
So thank you for what you're doing.
Thank you.
Anybody else?
Thank you, Dan.
Go ahead.
And then Aaron,
Congratulations.
Well done.
as someone who's studying AI and school leadership, I just wanted to thank you.
There is a huge gap in the literature around AI and how school principals are tackling it.
we're looking at teachers, we're looking at students.
and I've seen that in my research and I appreciate you, adding to the literature here, and, wish you well.
Congratulations, and you've been a great friend and mentor over the years, and, really proud of you.
Good job.
Thank you.
Appreciate it.
And I wanted to finish early enough that you'd be able to cite me, Dan. So, all right, go ahead, Eric.
Thanks, Jethro.
Yeah.
I just wanna say very well done.
I'm impressed at how this has come together.
I've seen different parts of it throughout the stages, but it's a really wonderful experience to see this version today, and I wanted to comment on the marketing part.
We all know that's true, but I mean, there was literally a Super Bowl commercial that somebody paid millions of dollars for yesterday that said, use our AI tool, take the day off.
And as someone who's been working with AI in education, that's what people always say: it could just write my emails. And it's turning what I already did into an automated thing.
And I have met very few people who approach it like you, who say, but should you even be doing that?
Like, you're applying a thing with amazing potential to just do more of what you already did.
And I know that can be hard to get people who want to hear that message.
'cause they always say, you know, well, what would that even look like?
How do I reimagine things?
But to encourage you, as both a friend, someone who's learned a lot from you, and someone who's also in education: we need more people who have that framework of reimagining things.
Don't just spin the hamster wheel you're already on faster.
'cause that's 99% of the messaging and the products and the application of it that I see.
Yeah, I appreciate that.
Thank you.
Go ahead, Linda.
I just wanna throw one more thing in there too.
I mean, the approach that you've taken was actually the same approach that I took, but with children aged 14 to 17, back in 2019.
So this whole focus on what is the problem you're trying to solve.
What I love is that you're introducing it to principals, because, again, as I believe Gina had said, the principal changes the whole dynamic and tone of the school.
And if you start there, then it can encourage the trickle down.
'cause kids are already, especially, you know, digital natives are already gonna be wanting to get in and use the tools, how they want to use them.
It's the adults that have lost that, that feel that they need to be taught by someone.
I mean, Jethro, you went out and just played around.
I go out and just play around, but a lot of teachers, because of their profession and time and other things, just, you know, say tell me, show me.
Whereas now you're opening up this whole new field of possibilities because the assistive aspect of technology is the thing.
And if you think this is cool, wait till you see what's coming out in my newsletter tomorrow in terms of how you can expand your senses.
'cause that's where I'm focusing now.
People who can't see, people who can't hear, and even people who can, because you don't see like other animals can see; we don't even think about that, right?
So, I love that Erin touched on it, and your emphasis is completely here.
How do we extend the capabilities that we already have as opposed to just automate things that we can easily do?
Because, especially if you consider the environmental, ramifications of using this technology, do you really wanna, you know, waste all that water just to write an email for you?
Or is it an, can you justify the use because of what you're creating?
And that's the question that we need to have, again, beyond the scope of your dissertation.
But we do need to ask that question.
And that's the research that I like to see going ahead.
For those of you who are doing more research, I wanna see systematic replications.
I wanna see the application and transfer of training.
Like you had me, and you mentioned this already, Jethro, so I'm just repeating things that you'd already said.
But I also want people to imagine how assistive technology extends the facilities and abilities of the user.
Instead of just relying on, you know, this is the tech that the company pushed out to me.
Now you are empowered to envision and build your own prototypes.
And if you don't have the technical skills to take it from prototype to MVP, then work with people who can, because that's what I see as the great equity piece here: you know, I learned how to code when I was 14, but now anybody can do these things as long as you have the imagination for it.
So that's the big piece.
And I wanna challenge all of you to start thinking that way about tech.
And there are, there are two people I know who have expanded or completely created their businesses based on being able to tell an AI what they want, something to be able to do, and then having it create it for them at a prototype stage and then hiring an expert to take it to the next level.
And that is, that is very powerful.
And these two people could not have done this four years ago; in fact, they had the idea then, but couldn't bring it to pass because it was too expensive, too prohibitive.
And now they have created whole businesses around it that are flourishing because they're serving people in a way they knew would work, but they just didn't have the resources to make it happen.
Now they do, and it's an amazing thing to watch.
I love it.
All right.
Any other comments?
Okay, Jessica, why don't you close everybody else out, and you can put Linda, Mindy, and me in a room, and we'll talk, and then we will leave the room and we're ready to talk with you.
Okay, sounds good.
I'll open your room.
Thank you, everybody who was here. I appreciate it.
And thank you for inviting us.
You're welcome.
Thank you for being here.