
    Responsible AI in Education: Release of the Department of Education’s New Guide for Developers


    Introduction

    Good afternoon everyone, and good morning to those of you joining us from the West Coast. Thanks to all of you for joining us for today's webinar. We're so pleased to host the launch of "Designing for Education with Artificial Intelligence: An Essential Guide for Developers."

    My name is Roberto Rodriguez, and I proudly serve here at the U.S. Department of Education as the Assistant Secretary for Policy and Planning and as the Director of our Office of Educational Technology (OET). For those of you new to our work, let me briefly introduce our mission and priorities, which include promoting digital inclusion in education: digital equity, connectivity, and digital literacy. We support vibrant, innovative learning ecosystems that use technology effectively to meet the needs of all learners, and we explore emergent trends and technologies, particularly artificial intelligence (AI).

    I suspect many of you are already familiar with our report issued last May, titled "AI and the Future of Teaching and Learning." That foundational document traces many of the themes you'll hear over the next hour. You'll also hear from Deputy Secretary Cindy Marten, who will introduce today's discussion, followed by remarks from Jeremy Roschelle, a key partner who has brought expertise, thought partnership, and leadership to this project. Our Deputy Director in the Office of Educational Technology, Anil Hurley, will then host a panel exploring how this guide came to be and why it matters.

    I'm especially grateful to our stellar team at the Department of Education's Office of Educational Technology and the team at Digital Promise for their collaboration on this project. Now, without further ado, I welcome Deputy Secretary of Education Cindy Marten.


    Thank you so much, Roberto. I used to teach kindergarten, and we used to say, "Teamwork makes the dream work." Your introduction just now showcased how many people collaborated to make this happen today. Thank you to everyone joining us.

    The importance of responsibly developing AI in education cannot be overstated, and that is the focus of the new developers' guide we are launching today. Many of you are familiar with the President's 2023 Executive Order on the safe, secure, and trustworthy development and use of AI, which takes a whole-of-government approach to AI.

    Our role at the Department of Education involves developing resources, policies, and guidance. Today, we shift focus to the design aspect, releasing our developers' guide. Since our AI report was released, we've heard that developers want to strengthen trust and collaborate with each other and educational partners regarding the responsible use of AI. This guide builds on those 2023 recommendations by establishing a foundation to increase trust across the field.

    We have three key themes in the guide:

    1. Trust requires managing risks to fully realize opportunities.
    2. Trust can be grounded in existing policy.
    3. Developers must coordinate innovation with responsibility.

    Jeremy Roschelle, the Executive Director of Learning Science Research at Digital Promise, will give us more insights into the guide. Dr. Roschelle is an esteemed researcher with a notable track record and experience. Over to you, Dr. Roschelle.


    Thank you, Deputy Secretary Marten, for that kind introduction. I am honored to discuss the subject matter of this project.

    Let me walk through how the guide was built, who it is for, and how it can be used, and describe key concepts such as the dual stack, which coordinates innovation with responsibility. The guide is grounded in public discussions with stakeholders, including educators, students, parents, and developers from various sectors, and collaboration across government entities strengthened its practical utility for developers.

    The guide is intended for teams developing AI for education and aims to help them build trust through transparency and ethical practices. Trust requires managing AI's risks in order to realize its opportunities, which is why the guide takes a dual-stack approach: each stage of the innovation stack is coordinated with a corresponding stage of the responsibility stack.

    Here’s what the dual stack framework looks like:

    • Assemble data with representative samples.
    • Train models to identify and mitigate bias.
    • Adapt models to educational use with transparency.
    • Deploy them while ensuring safety, security, and privacy.
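    To make the pairing concrete, here is a minimal sketch in Python of how a development team might encode the dual stack as a release checklist. The stage names, the check descriptions, and the `release_checklist` helper are illustrative assumptions, not content from the guide itself.

```python
# Hypothetical pairing of innovation-stack stages with responsibility-stack
# checks, loosely following the four stages listed above.
DUAL_STACK = [
    ("assemble data", "verify samples are representative of the learner population"),
    ("train model", "audit for bias and document mitigations"),
    ("adapt to education", "disclose model limitations and intended use"),
    ("deploy", "confirm safety, security, and privacy safeguards"),
]

def release_checklist(completed_stages: set[str]) -> list[str]:
    """Return the responsibility checks owed for the stages completed so far."""
    return [check for stage, check in DUAL_STACK if stage in completed_stages]

# Example: a team that has assembled data and trained a model still owes the
# first two responsibility checks before it can responsibly move forward.
outstanding = release_checklist({"assemble data", "train model"})
```

    The point of the sketch is that responsibility is not a final gate: every innovation stage accrues its own check, so the two stacks advance together.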

    We emphasize the importance of evidence, safety, security, equity, and protecting civil rights. For example, bias must be addressed through civil rights frameworks to prevent discrimination.

    Developers should pursue responsible innovation within existing federal and educational policies, enhancing trust through transparency, listening, demystifying AI, working collaboratively, and engaging in forums like today's webinar.

    Now, let's shift to the valuable contributions from our distinguished panel, which will take a deep dive into the practical applications and experiences of integrating responsible AI into education.


    Hello, everyone. I am Anil Hurley, Interim Director for the Office of EdTech at the Department of Education. Beyond the President's executive order, our 2023 AI report laid the foundation for responsible AI in education, emphasizing human oversight, centering educational goals, and managing risks.

    Today's panel includes Dr. Nity Megee, former Chief Academic Officer from San Benito CISD, Texas; Dr. Patrick Rengal, Assistant Superintendent for Technology at Lynwood Unified School District, California; Dr. Kristen DiCerbo, Chief Learning Officer at Khan Academy; and Carl Rick Tannis, Chair of the EdSAFE AI Industry Council.

    We'll examine several key themes, beginning with the importance of strengthening trust. How can we proactively address risks to realize AI opportunities? Let's begin with Nity, Kristen, Patrick, and Carl.

    Opportunities and Risks in AI

    Nity Megee: My current focus is AI for young learners and professional development for teachers. Ensuring age-appropriateness and involving educators early can mitigate risks associated with AI for children under 13.

    Kristen DiCerbo: At Khan Academy, our AI-powered tutor, Khanmigo, goes through risk assessments via various audits. For example, we rated the likelihood and impact of inaccurate information and misuse of AI tools, then focused on mitigating those risks through practical measures such as careful prompt engineering.
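    The likelihood-and-impact rating Khan Academy describes can be sketched as a simple risk matrix. This is a hypothetical illustration: the risk names, the 1-to-5 scales, and the `prioritize` helper are assumptions, not Khan Academy's actual rubric.

```python
# Hypothetical risk matrix: rate each risk's likelihood and impact on a
# 1-5 scale, then rank by their product so mitigation targets the worst first.
RISKS = {
    "inaccurate information": {"likelihood": 3, "impact": 4},
    "misuse of the tool": {"likelihood": 2, "impact": 5},
    "over-reliance by students": {"likelihood": 4, "impact": 2},
}

def prioritize(risks: dict) -> list[tuple[str, int]]:
    """Return (risk, likelihood * impact) pairs, highest score first."""
    scored = [(name, r["likelihood"] * r["impact"]) for name, r in risks.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)

ranked = prioritize(RISKS)
```

    Ranking by the product rather than by either factor alone keeps a low-probability, high-impact risk from being overlooked.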

    Patrick Rengal: In Lynwood Unified, AI literacy in understanding, using, and evaluating AI is paramount. We ensure our AI tools are unbiased, inclusive, and privacy-focused. Education about AI addresses misinformation and bias.

    Carl Rick Tannis: The EdSAFE AI Alliance focuses on a multi-stakeholder approach to creating safe, accountable, fair, and effective AI systems. Collaboration and alignment with standards enhance the development and implementation of safe AI in education.

    Trust and Transparency

    Achieving and maintaining trust between developers and educational institutions is crucial for successful AI integration. Trust is earned through transparency, ethical practices, and user engagement throughout the AI product lifecycle.

    Kristen DiCerbo: Trust involves understanding AI tools' purpose and development process, ensuring ethical considerations are built into AI prototypes from the initial design stages.

    Carl Rick Tannis: The SAFE framework—Safety, Accountability, Fairness, and Effectiveness—guides developers in creating trustworthy AI systems. Ongoing education and transparent communication are fundamental.

    Nity Megee: Including teachers in the development process builds trust. Ensuring privacy and data security and clearly communicating with families and communities are crucial for gaining their trust in AI implementations.

    Looking Forward

    The guide aims to support developers by integrating responsibility into every stage of AI development, grounded in existing policies and practices. By building trust through transparency, collaboration, and evidence-based approaches, the guide aims to make AI tools beneficial, safe, and equitable for all learners.

    Our hope is that both developers and educators will use this guide to inform their AI development and procurement processes, centering equity and addressing biases.

    Thank you all for your participation and insights today. We look forward to your continued engagement and efforts in integrating responsible AI in education.


    Keywords

    • AI in education
    • Digital equity
    • Trust and transparency
    • Responsible innovation
    • Civil rights
    • Data privacy
    • Ethical AI
    • Developer guide

    FAQ

    Q: What is the primary objective of the new AI guide for developers? A: The primary objective is to foster responsible AI development in education by addressing safety, security, privacy, and building trust through transparency and ethical practices.

    Q: What are the key themes of the guide? A: The key themes include managing risks to realize opportunities, grounding trust in existing policy, and coordinating innovation with responsibility.

    Q: How can developers integrate responsibility into their AI development process? A: Developers can integrate responsibility by aligning each stage of their innovation stack with corresponding ethical responsibilities, ensuring they address bias, transparency, evidence, safety, and privacy throughout the development lifecycle.

    Q: Why is trust important in the development and use of AI in education? A: Trust ensures the adoption and effective use of AI tools in educational settings, which can enhance learning outcomes and mitigate risks associated with AI technologies.

    Q: What role do educators and developers play in fostering digital equity? A: Both educators and developers must ensure AI models are inclusive and representative of diverse populations. They should collaborate to address biases and ensure accessibility for all students, including those with special needs and multilingual learners.
