Generative Instruction with Linda Berberich, PhD.
Welcome to Transformative Principal, where I help you stop putting out fires and start leading.
I'm your host, Jethro Jones.
You can follow me on Twitter at Jethro Jones.
Alright.
Welcome everybody to Transformative Principal.
I am very excited to have on the podcast today, Linda Berberich.
She is a behavioral scientist, and she does all kinds of amazing, cool things.
She runs a consultancy called Linda B. Learning out of Seattle, and we started talking last week, before we decided to do the interview, and I was like, we definitely gotta talk, because Linda's been in the learning space for a long time and has a ton of really fascinating perspectives, information, and experience about artificial intelligence, before it was the marketing word.
So Linda, welcome to Transformative Principal.
Great to have you
Thanks so much for inviting me and having the opportunity to chat.
Yeah, I'm, I'm really excited to talk with you.
Our conversation last week really inspired me and made me think about some things.
And I wanna start with this idea of artificial intelligence and how, uh, you mentioned that it's just a marketing term and that you've been doing this for a long time.
So can you just give like a brief idea of some of the things that you've worked on, uh, already in your career?
Yeah.
Yeah.
So when I talk about artificial intelligence, I use the term machine learning, and then sometimes I refer to the specific kind of statistical learning category that it is.
So that's typically how I talk about it.
But because AI is such a buzzword, I, I will use those terms, but please know, that's what I'm talking about.
And so from that perspective, really, when you're building anything with a computer that helps you memorize or learn things, you are building a machine learning peer.
And I don't know that people think about it that way, but that's how I think about it.
And that was literally the first experience I had with machine learning when I was 14 years old.
And then, you know, as I got into graduate school, I was studying behavior analysis, so that's mostly human and animal learning, but we also worked with plants, because plants actually can learn.
And then for me, I'm like, well, machine learning, obviously, duh.
So I mean, I worked on some of the very early voice recognition and voice synthesis technologies, because people thought that would be a way better way of interacting or interfacing with a computer, not realizing how difficult it actually is to match a voice when you think about how that actually works.
Um, and then just progressed on from there.
You know, I assisted Paul Allen in building one of the very first online schools ever, before that was a thing.
Built my own company and sold it when I was 28.
And then continued to work with different companies in Silicon Valley, mostly Sun Microsystems and Oracle for a substantial part of my career.
But lots of other, uh, places as well.
And really, my role was to get closer to the technology, because I wanted to not just be a learning person who develops training. You know, that was something I always used to get: well, for a training person, you really don't want to do training.
I'm like, oh, I do training.
I just put it into the software.
I'm not about, like, everything has to be in a classroom. That ship had sailed decades before, you know, even when I first started as a teenager.
So where we are now, with virtual reality and all these other kinds of immersive technologies, the Internet of Things, cloud computing, it's opened up so much possibility.
But the challenge, I think, for people, especially in the K-12 space and higher ed space, is just, you know, sifting the hype from what you can actually do, and how it can be beneficial for the populations that we're serving.
Well, what I think is so interesting is that you and I took a similar path: we said, how can we make this happen more automatically, so that we don't have to spend the human cycles doing the things that machines can do better than us? So talk about that idea a little bit, of outsourcing repetition, repeated learning, repeated practice kinds of things to technology, and why that makes sense and has made sense for decades now.
Yeah, well.
Think about what we call these things that we're interfacing with.
We call them computers.
So that alone should suggest to you what they're really good at doing.
That's the underpinning of what the technology is really good at.
And that is computing, right?
Being able to put things together.
So I remember back in the day, and you probably remember this too, when you were told that you can't rely on the calculator, you're not gonna have a calculator with you, and then you look at your smartphone and you're like, oh, really?
Yeah.
So there are so many things that are just in your hand, just in your pocket, that you can do.
But the challenge is that we're kind of doing the wrong things with the devices. So yeah, computing, calculating, sorting, memorization, those kinds of repetitive skills where there's a specific response that you're after and it's a correct response.
Like I just wrote a newsletter about psychomotor learning and the whole underpinning of psychomotor learning is this notion of there's good form.
And that's what you coach to: what is that good form?
You know, that Olympic-level performance, or that surgeon who's now ready to operate, or that potter who's working at the wheel shaping that bowl.
How do they get to the point where their motor skills are performed in a way where they're exhibiting good form?
So for psychomotor skills, it becomes really, really obvious when we're training up physical skills, and you even see it when you watch children develop, right? They get feedback from their natural environment on how to walk, but as the repertoires get more complex, they also get feedback from a skilled observer.
And sometimes the computer can be a better skilled observer depending on what it is that you're training up.
But if we're trying to train up things that are creative in nature... oh, we could have a whole conversation about generative AI and GPTs and that whole thing, because, again, I was very early at the front of that as well, and actually had the opportunity to work for one of the bigger vendors and, after reviewing their model, declined.
But um, but yeah, there's definitely things that computers are very good at and can be excellent coaches for.
It comes down to this notion of: what's the required response?
And we already get a bad rap in K-12 and higher ed around too much focus on memorizing bodies of curriculum as opposed to thinking critically about them.
Right?
And it's not to say that you can't design computer programs that can challenge and help augment the development of creative behavior, or, you know, problem solving and those kinds of more complex cognitive skills.
It's just the way you get there isn't the way we're doing it right now.
You don't apply this mechanistic, step-by-step procedural methodology to train a far transfer complex task.
And quite honestly, sometimes it's like, well, do we even want to? There are so many things where, if you consider the emotional underpinning of learning, which is present, and we could have a whole conversation again about what that means, that's where the human element really comes in.
And so there are definitely ways that feedback can be reinforcing just in terms of building skill development.
And that's motivating, but there's also the human touch.
And especially in the digital world that we're in, coming outta COVID, we're almost forgetting how to be human, you know?
'Cause we're not spending as much time, even now, in actual in-person, in-the-room kinds of experiences.
Yeah.
Yeah, that's true.
I mean, it's been five years since COVID, and yet it is still fresh on everybody's mind, so it's not like it's that far in the rearview mirror.
But you, you started talking about these near transfer versus far transfer and organic and mechanistic skills.
And so I want to talk about that a little bit, because that's something that I've been thinking a lot about: these mechanistic skills, which I'm defining as skills where you can easily say, this is how to do it.
This is the right way, there's a wrong way, and I can transfer this knowledge to you.
And by transferring the knowledge and you practicing, you can show that you've got it.
And, and that is sufficient.
And, you know, that is the memorization, the process skills, the things that we know need to happen in schools. A lot of times that's also considered biologically secondary knowledge, I think is another term they use for it. But those are things where it's like, how do you live in this world and do the things that you need to do, where there's a yes and a no, a right and a wrong way, binary skills. Whereas organic skills are things where there's a right way, there's a better way, there's a really good way, there's an okay way, and there's a really bad way, and there's this gradient of how to deal with it.
Things like empathy, grit, perseverance, patience, all kinds of these what we typically call soft skills, which I'm now calling organic skills.
And then you, uh, brought up near transfer and far transfer skills.
And how do those relate?
Let's talk about that a little bit.
Yeah.
Yeah.
So I, I touched on it a little bit, but let's go a little deeper on that.
So, a near transfer skill is procedural in nature.
So to your point, there's a right way to do it and everything else, right?
And so, think about making a Starbucks latte.
There is a step one, step two, step three, step four on how you do it.
And yes, there are variations on it, but for the most part, this is the way you build it.
And that's not to say that that's representative of all lattes, that's representative of Starbucks lattes for standardization purposes.
Right?
So that's the idea of near transfer tasks and near transfer training: the way that you teach it is the way that you perform it on the job, in real life. There's no variation to it; it's always done the same kind of way. So that's what near transfer usually refers to.
And I should say, too, that comes from Ruth Clark, who is very prominent in technical training and corporate training. It comes from her nomenclature; that's where the terms near transfer and far transfer come from. And she also talks about them in terms of procedures being near transfer tasks and principle-based learning being far transfer tasks.
So if a near transfer task is always done the same way every single time, under every condition, step one, step two, step three, far transfer tasks are those tasks that are more strategic in nature.
So you form a strategy about how you're gonna approach the task at hand, meaning that depending on the context, what you're gonna do varies.
And so, like a classic example that people would often give is making a sales call.
It's not just step one, step two, step three, because how the customer responds, or how the potential prospect responds, matters.
And you're gonna, uh, start pivoting and do things differently.
So far transfer tasks are also oftentimes referred to as complex cognitive tasks, whereas what you would call a procedural task is sometimes referred to as a simple cognitive task. Those are different theorists; that latter term comes from Tiemann and Markle, which is very top of mind since the psychomotor work I was just writing about comes from them as well.
Right.
And the way that they propose it is that everything builds on everything: there's a psychomotor response, a physical response, using your musculature and your anatomy and physiology to perform. That's a precursor for the simple cognitive tasks, which are precursors for the complex cognitive ones, and they are hardcore programmed instruction behaviorists.
So that's where that particular theory comes from.
And quite honestly, it's worth revisiting programmed instruction considering the computer age we're in now, because so much of that was dependent on our understanding of how computers and information processing and memory and those pieces work. And now if you look at it in the context of, like I said, cloud computing, the Internet of Things, generative AI, then it starts to take on, oh, so what can we actually program?
And again, you can program things that have a definite right and a definite wrong.
This really comes to bear when we start thinking about subjects like history, or even geography, right? Some people have questioned: are maps really representative of what our world actually looks like? Or is that a skewed presentation based on, you know, the ruling power's, the victor's, presentation of what geography is or what history is? So.
Yeah, so you talked about these standardizable skills,
mm-hmm.
and much of what we do in schools is really standardized information, and that's what we're testing on.
That's what we're assessing.
That's why we use multiple choice tests and things like that.
And so there are things where it makes sense to use these mechanistic approaches to learning.
So, for example, we know how to teach reading with phonemic segmentation, oral reading fluency, those kinds of things. And so it makes sense for that to be something that we use a computer to teach.
Whereas other things, like how to understand what an author is saying in a book, are probably not going to be taught the same way that we teach someone how to read.
How do you determine what kinds of things are worthwhile to use machine learning technology for, specifically, and what kinds of things are not?
I think, for me, it is definitely the context in which you're gonna use it.
So I actually love using computers for peer practice partners on those tasks that you were describing.
Oral reading fluency, math fact memorization, those things where there are definitely right and definitely wrong answers. Because, one, the computer is always accurate.
It always gives you correct feedback.
It's not like a human, who has human foibles, right? They will look away. They will not listen, or they will nod and say uh-huh even though they weren't really listening. So they're not as reliable as a computer is in terms of giving feedback when there's a correct response.
But when there could be variation, you run into challenges, right?
And so.
Mm-hmm.
What I've seen happen with GPT models recently is it falls into a couple categories.
So early GPT, when you would ask questions of it, it kind of made you feel bad about asking, because it came back with this very, you know, just a tone to it. Even though, you know, it's basically a word processor on steroids, right? It's putting together the probability of this word coming next; that's how they're constructing these answers.
And I mean, that's just a very, very simplified version of talking about how GPTs work, but that's essentially it.
The fact that it's able to produce the kinds of responses that it does and make you feel some kind of way about it, that's your own anthropomorphism that you're introducing into what you're seeing.
But it's, it's true.
Like, people would comment on the, for lack of a better term, mansplaining tone of the original versions of ChatGPT 3.5 and on.
But now, and we were talking about empathy, we're seeing this emphasis on making the computer interaction appear more empathetic, and it's this whole thing where it's almost pandering to you, telling you how wonderful you are and how special you are and how that was a really insightful question, even when you've given it some garbage response. You know? You could have put in any other series of words, even ones that make no sense, and now you're praising me on how well I did?
And the challenge with that is that it's not actually helpful to be told that. It's like, oh, that was a great response, good catch, when, no, don't patronize me. Don't tell me that this was good when it actually wasn't. Because if I'm not familiar with what it's doing, then I might believe that, and that's another big problem.
So.
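To make the "word processor on steroids" description concrete, here is a minimal, hypothetical sketch of next-word sampling. Everything in it, the tiny vocabulary, the hand-made probabilities, the temperature setting, is invented for illustration; real GPT models compute these probabilities with large neural networks over huge vocabularies, but the sampling step is why the same prompt can come back differently on different runs.

```python
import random

# Toy "model": given the last word, a hand-made probability distribution
# over possible next words. Real models learn these from data.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.4, "dog": 0.35, "answer": 0.25},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "sat": 0.3},
}

def sample_next(word, temperature=1.0):
    """Sample the next word; higher temperature means more randomness."""
    probs = NEXT_WORD_PROBS.get(word, {"the": 1.0})
    words = list(probs)
    # Re-weight by temperature: as temperature approaches 0 this approaches
    # always picking the single most likely word.
    weights = [p ** (1.0 / max(temperature, 1e-6)) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# The same prompt can produce different continuations on different runs,
# which is one reason GPT-style feedback is not consistent run to run.
for _ in range(3):
    print("the", sample_next("the"))
```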
So before we move into that arena, what you're saying really is that things like, the terrible phrase we used to use was drill and kill, for specific things, that's what the technology is really good at.
It's excellent for that.
Yep.
And that's what we should be using it for, because it can answer correctly on those things millions of times faster than we can.
So
And consistently.
Oh, and that's,
consistently
That's super important too, because it's an important thing we can come back to when talking about GPTs and why they're also problematic. It's not just this pandering way of giving feedback. It's that you can run a trial of exactly the same interaction and you're gonna get different feedback.
So GPTs aren't consistent, and you're not giving a consistent response either, so they're not gonna give you consistent feedback, because what they're giving you feedback on depends on your response. Versus in the drill and kill, you're always gonna get correct feedback. Now, that's not to say that you can't train computers to give better feedback for more complex cognitive skills, and to better help the human come to their own conclusions. But it's not done with GPT technology, absolutely not. But you can code it, so it can be done.
I've done small language models, as compared to large language models, that actually work very well to shape behavior into a behavioral repertoire or behavioral class.
So when we're talking about training up things like concepts or, you know, more strategic learning, we have to have some frame of reference, right?
It's the same thing when you're teaching them in school.
You have your rubric of what you're looking for, but the way that you apply that rubric is gonna vary based on the circumstances that come to bear and what's being presented to you in the case at hand.
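By contrast, the drill-and-practice case described above can be coded deterministically: each item has one known correct answer, so the same response always gets the same feedback. A minimal sketch, with invented items and messages rather than anything from a real product:

```python
# Each drill item has exactly one correct answer, so feedback never varies.
MATH_FACTS = {
    "7 x 8": "56",
    "9 x 6": "54",
    "12 x 12": "144",
}

def check_answer(item, response):
    """Return feedback that is identical every time for the same response."""
    correct = MATH_FACTS[item]
    if response.strip() == correct:
        return "Correct!"
    # The same wrong answer always gets the same feedback; no sampling involved.
    return f"Not quite. {item} = {correct}. Try it again."

print(check_answer("7 x 8", "56"))  # Correct!
print(check_answer("7 x 8", "54"))  # Not quite. 7 x 8 = 56. Try it again.
```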
Yeah, well, and that hits on my big problem with any kind of grading in school, which is that it is all subjective, and we try to make it objective, but it is not.
And what I constantly tell my own children all the time is grades are made up.
And once you understand that grades are made up, then, one, they can stop having power over you.
And two, you can understand that you are playing a game.
And you need to figure out the rules that the teacher is, is going by as she or he grades your work because it is all made up.
And anybody who argues that it's not and that it is actually objective, has no basis in reality, they're just absolutely wrong because every teacher makes up their own grading system.
Even if they have a rubric, even if they have standards and guides from the district or the state or whatever it is, they still make that ultimate decision about whether or not it is a certain thing, and that makes it subjective.
And so understanding that and being okay with it is one thing and like saying, all right, I get what's going on here and I, and I can react in a different way.
But the problem is that we pretend like they are actually objective and that that is part of a, a bigger problem that we're not gonna go all into here.
You brought up this example of GPTs reacting differently to the same prompt.
And I actually did this.
I'm in a doctoral program right now, and I had an assignment where we had to basically write a paper on what we've learned this semester in class. And to me, this assignment was just really tedious, and I did not want to do it because I didn't see the value in it. So I decided to make it meaningful to me, which is often how I approach my assignments in school: I figure out a way to make it meaningful.
So what I did is I put it into two different GPTs to see how they would write it, based on my notes, what my dissertation topic is, and whatever other things I could feed it, all my assignments and stuff.
So I put all my assignments in there.
I put my notes that were very like basic chicken scratch notes that I had just typed into my note taking system.
And then a few different documents also to like give context to it.
And then, with one that I had already trained to have my voice and sound like me as best it could, and to know what next word it should predict for me,
Mm-hmm.
I used that to write this paper. And then I used another one that I hadn't trained on all my information, and I just said, here's the stuff, now make it how it makes sense.
The second one made it very academic. It included the citations for my other assignments and said, this is where I talked about this.
And it sounded very much like an academic thing.
The one that I had trained to sound like me, I had to do way more editing on, because, one, it didn't sound like me, and two, it got a bunch of things wrong, had more hallucinations. But that other one, the very academic one, I had to do very little editing on, because it already didn't sound like me; it wasn't trying to sound like me.
Yeah.
The one that was trying to sound like me made mistakes and hallucinated and put things in that weren't there.
And it was so fascinating because I did the same exact prompt.
I just copied it from one to the other and let it do its thing. And then, as I was giving it feedback, going paragraph by paragraph, which I also did, it would give different responses based on what I said, and then it would add stuff that wasn't there, and things like that. And so you can't think that this is a perfect technology.
However, it was pretty amazing that it could do what it could do and that it could generate this stuff from what I had said.
And, and so that part was pretty cool.
But if you don't understand what it's really doing, it's so easy to think that it's either magic or that it's always right. And those are the two big problems in my mind with it: it's not always right.
It's basically just guessing and doing a good job of it most of the time.
But it's really just guessing and it doesn't actually know the right answer.
Yeah.
But it does.
But yeah.
But again, don't look behind the curtain, you know? And it all seems like magic.
Yeah, I mean, don't get me wrong, I've grown up with the evolution of this, so I think it's super cool that we can do this. And you were mentioning training up your own voice.
That was literally some of the first work that I did in this area. I left Oracle in 2019, so I'd taken on a few gigs working for some social media influencers, and they were just telling me their big problem: ah, I can't respond to everybody, and I'm losing followers because I'm not responding.
I'm all like, cool story, bro. I could totally make a bot that could respond in a way that sounds just like how you respond.
And I did. But again, the kinds of responses were very much tailored to whatever they would have said. We had tons and tons of data to build with, and it was just a small language model.
I didn't need to scrape the entire internet.
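As a rough illustration of that small, narrow approach (a hypothetical sketch, not the actual system described here), a responder could be as simple as indexing one person's past replies and answering a new comment with the reply whose original comment is most similar. This assumes scikit-learn is available, and the sample data is invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented sample data: comments the person has answered before,
# paired with the replies they actually wrote.
past_comments = [
    "love your workout videos, any tips for beginners?",
    "what camera do you use for your vlogs?",
    "can you share your meal plan?",
]
past_replies = [
    "Start small and stay consistent, two short sessions a week beats none!",
    "Honestly, I film everything on my phone. Lighting matters more than gear.",
    "I keep it simple: lots of veggies, protein at every meal, and water.",
]

# Index only this one person's history; no need to scrape the internet.
vectorizer = TfidfVectorizer()
comment_vectors = vectorizer.fit_transform(past_comments)

def respond(new_comment):
    """Reply with the stored answer whose original comment is most similar."""
    sims = cosine_similarity(vectorizer.transform([new_comment]), comment_vectors)
    return past_replies[sims.argmax()]

print(respond("any advice for someone just starting to work out?"))
```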
The reason that your paper sounded so good and academic was all of the academic material it had to draw from. As opposed to, you know, there's a time when you wanna narrow the field in terms of what data you wanna draw from, and then there's a time when you wanna widen the field.
And so that's the difference.
Like with search, you wanna widen the field, you know, you wanna be able to find the thing that you're looking for. It's sort of like the concept in sports of having wide focus and then narrowing your focus: you're out on the field and you're sort of taking everything in, and then there's something that you need to beeline on.
That's how search should work.
It doesn't, because it's not built for that.
It's not built to really search and be helpful.
I mean, I think that was the intention originally, but we're so far from that where we've gone with what we can do with the compute capability that we have now and, and, and the connectivity that we have now.
I mean, it's frustrating, having spent, you know, 15 years working in Silicon Valley. I know why we are where we are, but I just wanna shake all the tech bros I've worked with and say, why though?
You're not the only one who has ideas.
And guess what?
Lots of people think way more creatively than you because you think you're the default.
And, you know, I run into it constantly, and I'm like, oh my God.
Do you know how, how much hubris you have?
Do you, do you recognize like how much you just center yourself?
Like just by default.
I'm like, oh my.
Like really, dude?
And now you've built this megaphone that just amplifies all this out and said, this represents the world.
And I'm like, no.
Oh, no.
Talk about making people feel not seen, I mean,
Yeah, exactly.
So, so with what we have access to now, focusing on schools,
mm-hmm.
where should we be putting our time, energy, and money when investing in these tools?
And what should that look like in your opinion?
So, I loved your take on grades. The challenge with grades, of course, is that, yes, we can agree that they're subjective and inconsistent.
However, they have a massive impact on people's lives.
And for that reason, you can't just go, oh yeah, I don't care about whether I get the A. But I have noticed, being that I wasn't formally educated in the United States until I got to graduate school, big cultural differences with that too.
So I think when you're thinking about how to implement technology in a classroom, it's gonna be completely based on what is the culture that's already there and how does the technology support the culture.
I promise I did not pay her to say that.
I'm just saying. Yeah.
For anybody to say, like, this is the answer to schools' problems with technology, and claim they have that solution, they just don't.
It has to be focused on what your culture is.
And I don't have my book right here, but that is literally what I talk about in both the books that I've written, How to Be a Transformative Principal and SchoolX: you have to design your school for the people that are there, not for who you hope would be there, not for who was there 20 years ago, but for who is actually there right now. Which means it has to be a continuous, ongoing design process for you to make it work for those people.
And it's going to change, because that is the nature of schools, and it has to. So, I like that you said that, but go deeper. What are the things that you have to be looking for and paying attention to? Maybe give a few examples of what that looks like in a school.
So, there's already a predisposition in society to use technology as a babysitter. So it's about recognizing what your learners are coming in with and what their experience is with technology, versus what you can actually support.
So, for example, I moved to the Seattle area in 1994 to do my one-year doctoral internship at a school for children with learning differences. And a big part of the way they did their generative instruction approach included drill and practice.
For me, seeing all these reams of paper being wasted hurt my heart, you know, just from a conservationist perspective. I thought it was cool that they did peer learning, that they were coaching each other, and that there was a substantial amount of their day where they functioned as peer coaches. Love that, for a lot of reasons. What I didn't love was that that drill piece could have been done more effectively with a computer, with technology. Even in 1994, that was entirely possible.
And then used that time for kids to be together more prosocially.
'Cause that was the other reason why some of the children with learning differences were in there. It wasn't just the learning differences; it's sort of a chicken-and-egg thing for me, because I worked with the older kids.
It's like, what happened first?
Did they have the learning differences first, and that led to social issues, or vice versa, right? And so that was a lot of what I saw.
And while I worked there for five years after I finished my internship, that was my contention all the time.
Plus, I wanted to do more complex things with the component-composite building that you can do with generative instruction, and how that really can produce these huge leaps, so that you can do some really fun work in person, together with humans.
Right?
So, problem solving in groups. Like, so much of what I loved about Morningside, I really loved, but the stuff that sat in my craw, I just designed a better version of, and that was the first company, which I sold to a German company.
So,
Yeah.
yeah.
So I think it's really about learning your culture. I love that you said that your culture is not a static thing. It's not this one-and-done: you design it, you're done.
No, of course not.
You're gonna see different things happening based on the demographics that come into your school, but then also world events and just you as an individual, human evolving as well.
Like when teachers don't consider their own impact, like what they're bringing in and reflecting back, like that student in front of you is actually a reflection of you.
And how much are you actually putting on that student instead of realizing, no, that's you putting it on them.
You know?
So being able to, to pivot as well.
But I think, really, teachers know this stuff. I mean, I guess sometimes folks get jaded, but I have one amazing client who is a 28-year veteran in the classroom, grades two through four.
And she has written the most amazing curriculum, which I just adore, that illustrates this use beautifully and also capitalizes on lots of other technology that we have available now.
That's the other thing too: it's not just, what are some past really good uses? It's imagining what else can come.
Like, I also love using technology as a springboard, getting kids interested in STEM from things they're already interested in. There's this STEM-to-dance group, right? It focuses on dance, which a lot of children love organically, and then what are the technology pieces that you can attach to that?
So that's really more of this Deweyesque progressive education perspective, right? Rather than the standardized curriculum.
It's like, what are the things that really grab the children's interest?
What do they love to do?
And then how can we teach them those curriculum standards based on their interests, their unique passions?
That's individualized instruction.
Prescriptive, diagnostic, and you can absolutely do that with computers too.
So I mean, the potential to personalize instruction is amazing, but
Yeah.
but you have to really understand all the different layers of learning, including that emotional layer, to do that well.
So,
Yeah.
Yep.
And I didn't pay her to say that either.
I mean, that's why, when we were talking last week and you started hinting at some of these things that you're now saying,
mm-hmm.
I was like, yes, we are kindred spirits.
We totally get this.
And, and, and those are things that people who have listened to the show have heard me say a hundred times.
And that's where we should take advantage of the technology, to give ourselves and students more opportunities to be human with each other, and to be the thing that the technology can't be.
And the things
Mm-hmm.
that the technology can do better than us, we should have it do as soon as possible.
But there are so many things that the technology will never be able to do, because it is not human and there's not value there, things that we can do and that we should pursue and continue doing, because that's what makes us unique and what makes us human, and not machine.
So, this was awesome, and I would love to interview that teacher you were talking about, the one who created that curriculum for kids. That would be interesting.
I'll talk to you more about that afterward.
Yeah.
Linda, this was awesome. I wanna make sure people know they can check your stuff out at lindablearning.com, that's lindablearning.com, for Linda Berberich. Any parting words or anything you'd like to say before we sign off?
I just wanna encourage teachers, especially those who are maybe a little more reticent when it comes to technology: don't be afraid of it, and let the kids lead too, because that can take you in some really, really interesting directions.
So rather than thinking about fighting against the technology and that it's something that we have to overcome, or like the whole cheating aspect, nah, lean into it.
Get a little more creative about what we can do instead.
Yes.
Oh, preach.
Alright, Linda, this has been awesome having you on Transformative Principal.
Thank you so much.
Again, lindablearning.com for her newsletter, and it's been great chatting with you.
Thanks so much for your time.
You bet.
Thanks.