AI Standards and Cybersecurity Education for Kids with Sam Bourgeois
Jethro Jones: Welcome to Transformative Principal, where I help you stop putting out fires and start leading.
I'm your host, Jethro Jones.
You can follow me on Twitter at Jethro Jones.
Okay.
Welcome to the podcast today.
I am your host, Jethro Jones.
Today is a special day because we're doing one of these fun simulcasts, where we post this on Cybertraps and on Transformative Principal.
And the reason is because we're talking about something important today, which is AI and cybersecurity, and how we are protecting our kids online.
Super important and something that is even more important with AI now.
And I have a great friend of mine, Sam Bourgeois, who has been an IT director.
He's worked overseas in China.
He has, uh, worked in private industry as well, and he's now creating something really awesome to teach kids about cybersecurity.
So Sam, welcome to the program.
Great to have you.
sam-bourgeois: Thank you.
No, thank you and welcome, welcome back.
It's been years; it's been since 2020, I think, the last time
Jethro Jones: Yes.
sam-bourgeois: we did something like this, so thank you for
Jethro Jones: Yeah.
So the last time you were on Cybertraps was episode 67, which was August of 2021.
So it has been a while and we've talked a few times between then and now.
So you're doing some cool stuff.
So let's start by talking about this idea of AI standards, and we'll get into the cybersecurity in a little bit.
What's your stance on that and why do you think that's important?
sam-bourgeois: Well, I mean, you know, my perspective when we have these kinds of conversations is formed from all those different experiences that I have.
And one of those experiences, my current experience, I guess if you want, is working in the private sector.
And so my job today, as you mentioned, I'm the cybersecurity and IT leader for a large company that has, you know, an international footprint.
And so we do everything from governance and compliance, security, managed services, et cetera.
But really the important thing that I'm seeing not talked about nearly enough is AI.
Now, again, not directly in education, we have education customers, but in my day to day I am approaching AI, and I have to ask myself those questions.
So I get a new request for a new tool.
Somebody says they want to use Teams Pro and they wanna record the calls and do transcription, and somebody else brings in Fireflies.
And if I don't have some kind of governance foundation to help determine the risk and to help understand what's the right process to say yes or no to some of these things, I get myself in the situation where I'm the jerk who kind of stands in the way of innovation.
I don't wanna be the blocker.
What I really wanna be is the guardrails.
I wanna do it safely.
I want to innovate.
I wanna move fast, but I wanna do it safely.
And for me, that means guardrails.
Think bumpers on the bowling lane, right?
Where it's okay to kind of bounce off things safely.
And so with that in mind, that is my perspective.
I have to ask myself, you know, I'm not the smartest guy in the world, much less the smartest guy in the room, or even on this call, Jethro, with you.
But,
Jethro Jones: That's, that's not true.
sam-bourgeois: But, you know, I have to ask myself, somebody's gotta have done this before.
Like somebody has to have asked these questions before.
And so I look to the experts.
I look to the experts like NIST, which is the National Institute of Standards and Technology.
They have an AI Risk Management Framework, the RMF.
And I look at ISO; ISO's got an annex specifically that talks about, you know, applying AI safely in an environment.
So I couldn't possibly be smarter than those guys, right?
Like they've got entire think tanks directed, you know, at solving those problems and thinking through those things.
GDPR, same thing.
We've got GDPR regulations around AI as well for Europe.
My approach is, you know, is there a standard that's globally accepted, and how can I do my best to align to those standards?
If there's something out of line or out of band, or maybe something that's more important to me or that doesn't necessarily fit the use case or whatever, I'm obviously gonna add to that.
But what I don't want to do is shortchange myself, and I don't wanna put my organization at risk.
So I wanna follow, I'm not looking for a checklist, but I wanna follow some standards that I know people have thought long and hard about.
And if it's my prerogative to pick and choose which ones I like and don't like, at least I have them.
And that gives me kind of that thoroughness in the way that I approach those problems.
That's just my perspective.
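Sam's intake process, checking each requested tool against a governance foundation before saying yes or no, maps loosely onto the four NIST AI RMF functions (Govern, Map, Measure, Manage). A minimal sketch of what that kind of tool-request review could look like; the questions, the "all four must pass" rule, and the Fireflies example request are illustrative assumptions, not part of the framework itself:

```python
# Toy intake review for a new AI tool request, loosely organized around
# the four NIST AI RMF functions (Govern, Map, Measure, Manage).
# The questions and the approval rule are invented for illustration;
# they are not part of the framework itself.
QUESTIONS = {
    "govern": "Is there an approved policy covering this class of tool?",
    "map": "Do we know what data the tool touches and where it goes?",
    "measure": "Can we monitor and audit how the tool is actually used?",
    "manage": "Is there a plan to revoke access if something goes wrong?",
}

def review_tool(name: str, answers: dict) -> str:
    """Approve a tool only when every governance question is answered yes."""
    unmet = [fn for fn in QUESTIONS if not answers.get(fn, False)]
    if not unmet:
        return f"{name}: approved within guardrails"
    return f"{name}: needs work on {', '.join(unmet)}"

# A hypothetical request like the Fireflies example from the conversation:
print(review_tool("Fireflies", {"govern": True, "map": False,
                                "measure": True, "manage": True}))
```

The point is the guardrail shape, a yes-with-conditions rather than a flat no, not the specific questions.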
Jethro Jones: Yeah.
So when it comes to those standards, what are the things that you think are essential and important, for you, thinking of yourself in a company now, but also for educators?
Because AI is changing really fast and progressing very fast, and schools traditionally take a very long time to change.
And so, you know, a lot of them are still today talking about just blocking AI wholesale.
And personally, I don't think that's a good idea, but then, I've never thought that just blocking everything wholesale is a good idea.
Even back when the internet first came out, in my first year of teaching, I had kids blogging on the internet, and that was a very different thing, but I got in trouble for it.
But I still think it was a worthwhile thing to do.
And I put parameters and safeguards in place 'cause they were under 18.
But them having a real audience matters a ton.
So what are the thoughts you have about which standards are essential?
sam-bourgeois: Yeah.
So yeah, and I'm glad you brought that up.
I mean, you know, we go way back in education, and as a classroom teacher, same thing.
I was always getting myself in trouble 'cause I was a risk taker and I wanted to expand.
And here's the reason, the answer to your question, I guess, for me.
It's an admission that in a school setting, we're probably not gonna do everything that's required for this kid to be successful outside of this classroom.
Full stop.
Right?
Like, certainly not in the context of IT.
Certainly not in the context of security.
'Cause we sure as heck aren't protecting our kids, you know?
And worst of all is AI, because people don't get it; in a regular context, people don't really get it.
I mean, you can ask a hundred Americans, you know, in your audience, probably beyond that, but like, you ask a hundred Americans, what is AI?
I think they'd have a real hard time explaining it to you, and it's actually quite simple, but they would have a hard time explaining that to you.
So I guess what I would say is, for me, it's back to the governance; it's that context that there is a right way and a safe way to do things.
And if your priorities as a community, as an organization, a school district or a school or whatever, if your priorities are not such that you're encouraging, empowering, and enabling innovation, that's your problem.
Like, you know, the AI adoption, that's not the issue.
If you're in a culture of innovation, then you should also be in a culture of governance, of putting up guardrails and thinking through things and being safe.
And, you know, someone like you, knowing your background and where we first met, I know that was your MO: finding places where we could innovate and do things interesting and fun at the boundaries of normal education, but in a safe way.
And so to me, that's the biggie for education: the NIST frameworks and the industry standards are not appropriate for a classroom teacher.
A hundred percent, no question.
So what I've done, and what we've talked about, is: can we translate that?
How could we do that?
How would we communicate those things that are most important?
So, I don't know the number off the top of my head, but what I did is I correlated the NIST framework, the Risk Management Framework, to the ISO standards.
I'm in the US; some of your listeners might be in the GDPR-governed EU regions, but I didn't correlate any other standards.
I'm sure they'd be easy to do.
And then I just basically asked myself, what would I do if I was an IT director, which is my past life?
How would I convert this standard to a K-12 context, operationalize it for implementation in a school setting?
And then when I wrote that, I was like, uh, this is, this is actually really good.
What about examples in the real world of why that's important?
So then I added in things like what could go wrong, and there's obviously a ton of examples of what can go wrong.
And then I thought, okay, but what do we actually do with this?
Like, where does the rubber meet the road?
So then I thought, which of these are actually appropriate for teachers to convey to the next generation, to their learners?
So I wrote those, you know, based roughly off of those things that were applicable.
And then I thought, is any of this worthwhile for a kid to learn?
And I think the answer is yeah.
This is off the top of my head, so forgive me, but when you adapt those standards, it might come out something like this.
A student should know and recognize that there are correct and incorrect ways to use AI in education and in their community, and that sometimes there are legal implications to the incorrect use of AI.
I would agree with that.
Like in, in all cases, I would say a student can comprehend that.
Of course.
Would I like to spiral that?
Would I like to grade-level that, to apply that to some specific outcomes?
Sure.
But in terms of a standard, that's kind of where I'm at.
Right.
And so I've got, I don't know, maybe 50 of those.
I didn't count 'em up, to be honest, but that's kind of where my mind's at: how would I apply that to the real world and to education.
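The correlation exercise Sam walks through, industry control to classroom-ready standard, is essentially a crosswalk table. A sketch of what one entry could look like; the content of every field below is invented for illustration and stands in for his actual correlations:

```python
# One invented entry in a NIST-to-ISO-to-K-12 crosswalk, in the spirit
# of the exercise described above. None of these strings are the actual
# correlations or standards from the platform.
crosswalk = [
    {
        "nist_ai_rmf": "Govern: legal and regulatory requirements are understood",
        "iso_ref": "ISO annex control on applying AI safely",
        "k12_standard": ("A student should recognize that there are correct and "
                         "incorrect ways to use AI, and that incorrect use can "
                         "have legal implications."),
        "what_could_go_wrong": "Using AI to impersonate a classmate online.",
    },
]

# Pull out just the student-facing standards, the part a teacher would see:
for entry in crosswalk:
    print(entry["k12_standard"])
```

Keeping the framework reference, the real-world failure example, and the student-facing wording side by side in each entry is what lets a teacher trace a classroom standard back to the industry control it came from.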
Jethro Jones: For me, the real question comes down to things that are more ethical in nature rather than, like, correct or incorrect.
Because there are situations in which it is correct to do it, but it's unethical to do it.
And so that gets into this question about ethics in using AI.
It may be fine to take a picture that you find or that you took and use it to generate an image, but if that is a picture of someone else, it's your picture according to our current copyright law in the United States, because you took the picture.
But is it ethical for you to use that picture to create something with AI, without that other person's knowledge, that could put them in a different light?
And those are the kinds of things where, you know, our laws haven't caught up with this stuff either.
And so we need to be able to talk about it in such a way that we can say, whether or not it is legal, correct or incorrect, or whether or not there could be ramifications later, we need to be able to have the conversation about: is this an ethical thing to do?
And, you know, even "is it ethical" is a loaded question, right?
And so how do we really teach these things and give kids exposure to them without making it yet another lesson, yet another standard that kids need to learn, when really there's an underlying set of things such that, whether it's AI or something else, we need to understand that that's unethical to do?
Does that make sense, Sam?
sam-bourgeois: No, it absolutely does.
It absolutely does.
Because, yeah, there's something that people often overlook when they think about, you know, technical skills and things like that.
Like, with great power comes great responsibility, right?
Uncle Ben.
Like, when you're in my field, in my profession, the certifications you get are all contingent on this idea that I know just as much as a hacker knows.
But I choose to use my skills in a different way.
So I completely agree; I'm totally in line with you there.
And that was a great example.
You know, I don't know if you wanna jump off from ethics to issues with AI, but I think, fundamentally, understanding that you have the ability to do things that you probably shouldn't do, and that you should be training your brain to think critically about making those decisions, that's the key.
I completely agree with you.
Jethro Jones: Yeah.
sam-bourgeois: I guess your hypothesis was that we shouldn't think about a rigid checklist, but more like a growth mindset, just kind of continuously learning, thinking, and inner focus.
Is that your
Jethro Jones: Yeah, that's my perspective on a lot of things: it's not so much about the technical understanding, yes or no, of X, Y, or Z. It's more about, can I analyze the decision I'm making in the context of the situation that I'm in?
And I don't want to get into, like, moral relativism, where, yeah, it's fine as long as I feel like it's okay to do right now.
Like, that's not what I'm talking about.
There needs to be something deeper, like this deeper respect for other people and their image and likeness and how we should treat them, and that kind of thing.
Those things matter.
And other people's work.
Is it unethical to use AI because it's trained on other people's stuff that they didn't give permission for?
You know, there was a tool a while ago where you could go see if your web content was included in the training data; you could essentially put in a website and see if it was included in that.
And I did use JethroJones.com and TransformativePrincipal.org, and they were included in the training data, so it does know about me and about my work, and so somebody else could go look it up and ask ChatGPT about Jethro Jones and Transformative Principal, and it would know something about it.
But, like, is that now unethical because of that?
And those are questions I don't know the answer to, but they're questions that I think we need to spend time discussing and talking about in schools so that kids have an idea and they can understand how it actually works and not just think that it is the source of all knowledge.
I wanna move just a little bit over to the cybersecurity side, because the real challenge is that the AI developments have made it so much easier for our cybersecurity to be less secure than it has been in the past.
There are so many more vectors, so many more spoofs; you can use AI to basically have this thing spam people for hours and hours on end without any end in sight, using new email addresses, new approaches, new impersonations, all kinds of things.
And those are the things that I am really concerned about, because those will have real lasting damage and impact on kids and their future and all that if they get into a bad situation.
sam-bourgeois: Yeah.
Yeah, I agree.
My learning management platform, Make It Secure Academy, has a signup form, so you can click the button, and to your point, I think I've gone 14 pages.
I've gotta figure out a solution to this problem.
I've got 14 pages, ten to a page, of unique but junk emails.
I don't know if you have that same problem with your contact form.
So I've gotta do something better with my contact form.
It's like,
Jethro Jones: Yeah.
sam-bourgeois: they're just bots and they're just, they're just putting out junk.
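For what it's worth, one common low-effort defense against this kind of form-bot junk is a honeypot: a field hidden from humans by CSS that naive bots fill in anyway. A minimal sketch, assuming submissions arrive as a plain dict; the field name "website" is an arbitrary choice for illustration:

```python
def looks_like_bot(form: dict) -> bool:
    """Flag a form submission when the hidden honeypot field is non-empty.

    Humans never see the hidden "website" field, so anything typed
    there came from a bot. This only catches naive bots; real
    deployments pair it with rate limiting and email verification.
    """
    return bool(form.get("website", "").strip())

print(looks_like_bot({"email": "kid@example.com", "website": ""}))       # human-looking
print(looks_like_bot({"email": "spam@junk.io", "website": "http://x"}))  # bot-looking
```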
Um, yeah, from the security perspective, you know, I don't wanna get too much in the weeds on security specifically.
That's where I can really kind of geek out, but
Jethro Jones: Yeah.
sam-bourgeois: I, I, I totally agree.
Um.
What I think is terrifying in the education context specifically is that, by and large, you and I know how to kind of mitigate risk to our children and protect them in the future, but most people do not.
And the exposure of, fill in the blank; it's not about a particular vendor.
It's not about PowerSchool, it's not about Blackbaud.
Fill in the blank, right?
There's gonna be failures across the board.
There's gonna continue to be failures to protect that data.
And once that's exposed, like, my biggest fear is for these young people.
Not only are they being watched constantly, which is another creepy thing; data brokers are watching everything we do.
This is not an over-exaggeration.
This is not a 4chan rant.
This is the truth.
If your kid's got a mobile device in their hands, the device knows where they go, what they click on, how long they watch it, when they scroll, what they like, what they don't like.
So not only are they gonna have these digital footprints, which we've been talking about for 20 years, right?
Not only are they gonna be born into a digital footprint, they're also gonna be turning 18 with their identity exposed for the last 17, 18 years.
Which is just horrifying to think about, you know?
And that's what really kind of burns me up: I feel like there's always been that conversation.
When you and I were young men, there was a conversation around why are we doing geography and not, you know, woodworking, or changing a tire, or balancing a checkbook; we're having the same conversations, right?
Like, everybody says, why are we doing this and not that?
I mean, fundamentally speaking, we are absolutely setting these kids up for failure by not teaching this stuff, by not living it in the classroom.
And that's what, that's what really worries me.
Jethro Jones: Well, and you said something real key there, that we're not living it in the classroom.
And one of the things that I've been frustrated about in the past is when IT departments set up their own people for failure by sending internal phishing attempts, and then it becomes a gotcha.
And when you and I talked about your Make It Secure Academy, you talked about how you can send these phishing attacks, and then kids can see how they did, whether they responded to phishing attacks or not, and see what the message looked like, and then do some evaluation on how they fell for it or didn't.
Can you talk a little bit about that?
Because that's the part where I think it becomes a learning experience, and they get a score that says, hey, this is how you're doing with your cybersecurity, which is really powerful.
And that kind of stuff needs to be embedded.
So talk a little bit about that aspect.
sam-bourgeois: Yeah, yeah, yeah.
You nailed it.
I mean, I've always had this approach; professionally, I've had the same approach, where I'm not a gotcha guy.
But I run security programs, and I have run security programs for a while.
When it comes to young people, I think it's an interesting approach: I'm phishing kids.
And the reason I'm doing it, well, it's a heck of a lot of work.
I've gotta go, you know, fake all these domains and everything.
But to your point, the outcome that I'm looking for is the change in behavior, not statistics.
I'm not looking for, you know, reporting to the teacher and all this other stuff.
I'm looking for an individualized approach to enhancing my security posture.
And why can't that be a young person's?
I don't see any reason why it shouldn't be.
I don't have time to visit your daughter.
I don't have time to sit with my son and build him a safe space to work, a safe Gmail, and review every email with him and hover over things.
I do try to do that, for the record.
That is my job as a father.
But it's difficult, and it's hard to manage.
So what I wanted to do was build a safe space that had specifically aligned activities and outcomes.
And one of those activities is, again, I think, quite innovative: it's phishing kids.
And so, yeah, to your point, when the students interact with a phishing email that I might, so to speak, attack them with, simulated, they get a splash page, and it says, oops, this was not a real malicious email or a bad email, whatever.
Here's what we can do in the future.
And then we also have training specific to that behavior, and a follow-up that says this is why that one was unsafe, specifically: here's what you could have looked for, or you could have done this, or how this could have been done better.
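The flow Sam describes, a simulated phish whose click lands on a teaching splash page tied to the specific lure, might look roughly like this on the back end; the lure names, lesson messages, and class shape are all assumptions for illustration, not his actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PhishSimulation:
    """Toy model of a phishing simulation: each click is logged and
    answered with a splash message plus a lesson tied to the lure used.
    Lure names and lessons are invented for this sketch."""
    lessons: dict = field(default_factory=lambda: {
        "urgent-prize": "Real organizations don't demand action in ten minutes.",
        "fake-login": "Check the domain before you ever type a password.",
    })
    clicks: list = field(default_factory=list)

    def handle_click(self, student_id: str, lure: str) -> str:
        self.clicks.append((student_id, lure))  # behavior data, not a gotcha score
        tip = self.lessons.get(lure, "When in doubt, ask a trusted adult.")
        return f"Oops! That was a simulation, not a real email. {tip}"

sim = PhishSimulation()
print(sim.handle_click("student-42", "fake-login"))
```

Keying the lesson off the specific lure is what turns the click from a gotcha into the teachable moment described above.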
More than that, to your point about kind of changing the way we think and our mindset, I've created videos that preload and also, you know, follow up and reinforce those skills, that are tailor-made for young people.
So I've hired child actors to act out scenarios, and we're using AI to build videos that kind of speak directly to kids in the way they wanna be spoken to, animated short clips.
And I wish they could stay attentive for more than a TikTok video.
But I digress.
We gotta meet 'em where they are, right?
Unfortunately.
That's my approach.
And, you know, I think the gamification you mentioned, I think that's really important too, because frequently, when we have, like, asynchronous learning opportunities, we miss out on that ability to kind of gamify the experience.
And so I really wanted that to be a part of what we do.
And so, again, we kind of invented, whatever you wanna call it, like a credit score, you know,
Jethro Jones: Yeah.
sam-bourgeois: like a score for kids to measure their security posture, appropriate to their age group, by the way, so they can level up.
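A security "credit score" like the one Sam sketches could be as simple as a clamped weighted sum of observed behaviors. The behaviors, weights, baseline, and level names below are invented for illustration; a real product would tune them per age group, as he notes:

```python
# Invented behavior weights for a kid-friendly security score.
WEIGHTS = {
    "reported_phish": 25,
    "clicked_phish": -20,
    "enabled_2fa": 30,
    "strong_password": 15,
}

def security_score(events: list, base: int = 50) -> int:
    """Start everyone at 50 and clamp the weighted result to 0-100."""
    score = base + sum(WEIGHTS.get(e, 0) for e in events)
    return max(0, min(100, score))

def level(score: int) -> str:
    """Map a score to a named level so kids can visibly level up."""
    return "Guardian" if score >= 80 else "Scout" if score >= 50 else "Rookie"

s = security_score(["enabled_2fa", "reported_phish", "clicked_phish"])
print(s, level(s))  # 85 Guardian
```

Rewarding reporting more than punishing a click keeps the score aligned with the change-in-behavior goal rather than the gotcha.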
Jethro Jones: Well, and here's the other thing.
What I don't want is an additional AI class.
What I don't want is an additional cybersecurity class.
And I don't want an additional SEL class either, by the way, and I don't want an additional character class.
I want these things to be built into what we're doing.
And there are so many things in the world that we live in, which is information dense, that we need to be aware of, and that is essential to the work that we're doing.
And, you know, we have to understand that we've got to educate more than just the math and reading and writing.
It's got to be the whole child.
And it can't just be focused on the things that get tested, because that's not serving our kids well.
Because those things, honestly, you can learn at any time, and it's no good if you learn those things and then grow up to cheat people outta their money and their hard-earned things by being a hacker or some unethical criminal of some sort.
You know, that's not the kind of society that anybody wants to create, and so we need to be teaching these things early on.
So tell us how people can get started with Make It Secure Academy, what that looks like for themselves, their district.
sam-bourgeois: Sure, sure, sure.
Yeah, like I said, we're on a mission to protect every kid in the world, one kid at a time.
And what you can do to get ahold of us: the easy thing is, you know, you can find us on all the socials, Make It Secure LLC, so M-A-K-E-I-T-S-E-C-U-R-E-L-L-C.
And you can find us on Instagram, Facebook, LinkedIn.
MakeItSecure.Academy is the actual website.
MakeItSecureLLC.com, or makeit-secure.org.
We also have a nonprofit wing of what we do and how we serve around the world: everything from Haiti and Kenya to low socioeconomic status schools here in the US.
We're just passionate about helping kids and helping educators.
Jethro Jones: What I would encourage anybody to do is to go check that out and get a demo from Sam, so he can show you what it looks like, and you can tool around in it.
And really, the idea is for this to not be an extra class,
sam-bourgeois: Correct.
Jethro Jones: but something that kids are doing on the regular, which I so appreciate and think is essential.
So, Sam, thank you so much.
Those links are available at TransformativePrincipal.org and Cybertraps.com.
Come and check it out; I appreciate you being here, and appreciate your friendship for all these years, Sam.
sam-bourgeois: Thank you.