AI Goes to College podcast

AI Goes to College

Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need ways to understand and keep up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights about how to leverage GAI, and about the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.


#17

AI detectors, amazing slides with Beautiful AI and Gemini as an AI gateway

Generative AI is reshaping the landscape of higher education, but the rise of AI detectors has raised significant concerns among educators. Craig Van Slyke and Robert E. Crossler delve into the limitations and biases of these tools, arguing that they can unfairly penalize innocent students, particularly non-native English speakers. Drawing on their own experiences, they assert that relying solely on AI detection tools is misguided, and they encourage educators to focus on the quality of student work rather than on whether generative AI might have been used. The conversation also highlights the need for context and understanding in assignment design, suggesting that assignments be tied to class discussions so that students engage meaningfully with the material. As generative AI becomes embedded in everyday writing aids like Grammarly, the line between acceptable assistance and academic dishonesty blurs, making it crucial for educators to adapt their approaches to assessment and feedback.

In addition to discussing the challenges posed by AI detectors, the hosts introduce Beautiful AI, a slide deck creation tool that leverages generative AI to produce visually stunning presentations. Craig shares his experiences with Beautiful AI, noting its ability to generate compelling slides that enhance the quality of presentations without requiring extensive editing. This tool represents a shift in how educators can approach presentations, allowing for a more design-focused experience that can save significant time. The episode encourages educators to explore tools that can streamline their workflows and improve the quality of their output, ultimately promoting a more effective use of technology in educational settings. The discussion culminates with a call for educators to embrace generative AI not as a threat but as a resource that can enhance learning and teaching practices.

Takeaways:
--- AI detectors are currently unreliable and can unfairly penalize innocent students. It's essential to critically evaluate their results rather than accept them blindly.
--- The biases in AI detectors often target non-native English speakers, leading to unfair accusations of cheating.
--- Generative AI tools can enhance the quality of writing and presentations, making them more visually appealing and easier to create.
--- Beautiful AI can generate visually stunning slide decks quickly, saving time while maintaining quality.
--- Using tools like Gemini can significantly streamline the process of finding accurate information online, offering a more efficient alternative to traditional searches.
--- Educators should contextualize assignments to encourage originality and understanding, rather than relying solely on AI detection tools.

Links referenced in this episode:
--- [gemini.google.com](https://gemini.google.com)
--- [beautiful.ai](https://beautiful.ai)

Companies mentioned in this episode:
--- Grammarly
--- Shutterstock
--- Beautiful AI
--- Google
--- Washington State University
--- WSU
--- Gemini

Mentioned in this episode: AI Goes to College Newsletter ... Read more

18 Nov 2024

28:44


#16

Google NotebookLM and Our AI Toolkits

Craig and Rob dig into the innovative features of Google's NotebookLM, a tool that allows users to upload documents and generate responses based on that content. They discuss how this tool has been particularly beneficial in an academic setting, enhancing students' confidence in their understanding of course materials. The conversation also highlights the importance of using generative AI as a supplement to learning rather than a replacement, emphasizing the need for critical engagement with the technology. Additionally, they share their personal AI toolkits, exploring various tools like Copilot, ChatGPT, and Claude, each with unique strengths for different tasks. The episode wraps up with a look at specialized tools such as Lex, Consensus, and Perplexity AI, encouraging listeners to experiment with these technologies to improve their efficiency and effectiveness in academic and professional environments.

Highlights:
--- 00:17 - Exploring Google's NotebookLM
--- 01:25 - Rob's Experience with NotebookLM in Education
--- 02:05 - The Impact of NotebookLM on Student Learning
--- 04:00 - Creating Podcasts with NotebookLM
--- 05:35 - Generative AI and Student Engagement
--- 06:00 - The Unpredictability of AI Responses
--- 09:35 - Innovative Uses of Generative AI
--- 11:03 - Personal AI Toolkits: What's in Use?
--- 11:10 - Comparing Copilot and ChatGPT/Claude
--- 26:55 - Specialized AI Tools: Perplexity and Consensus
--- 37:22 - Conclusion and Encouragement to Explore AI Tools

Products and websites mentioned:
Google NotebookLM: [https://notebooklm.google.com/](https://notebooklm.google.com/)
Perplexity.ai: [https://www.perplexity.ai/](https://www.perplexity.ai/)
Consensus.app: [https://consensus.app/search/](https://consensus.app/search/)
Lex.page: [https://lex.page/](https://lex.page/)
Craig's AI Goes to College Substack: [https://aigoestocollege.substack.com/](https://aigoestocollege.substack.com/)

Mentioned in this episode: AI Goes to College Newsletter ... Read more

22 Oct 2024

39:23


#15

Leveraging Copilot and Claude to increase productivity in higher ed

This episode of AI Goes to College explores the transformative role of generative AI in higher education, with a particular focus on Microsoft's Copilot and its application in streamlining administrative tasks. Dr. Craig Van Slyke and Dr. Robert E. Crossler share their personal experiences, highlighting how AI tools like Copilot can significantly reduce the time spent on routine emails, agenda creation, and recommendation letters. They emphasize the importance of integrating AI tools into one's workflow to enhance productivity and the value of transparency when using AI-generated content. The episode also explores the broader implications of AI adoption in educational institutions, noting the challenges of choosing the right tools while considering privacy and intellectual property concerns. Additionally, the hosts discuss the innovative potential of AI in transforming pedagogical approaches and the importance of students showcasing their AI skills during job interviews to gain a competitive edge.

Drawing from their extensive experience, the hosts examine how Microsoft's Copilot can alleviate the administrative burdens faced by educators. Dr. Crossler shares his firsthand experience with Copilot's ability to draft emails and create meeting agendas, highlighting the significant time savings and productivity gains for academic professionals. This practical use of AI allows educators to redirect their efforts toward more meaningful tasks such as curriculum development and student engagement. The hosts also address the information overload surrounding AI advancements, advising educators to focus on tools that offer tangible benefits rather than getting caught up in the hype. They discuss the strategic decisions universities face in selecting AI technologies, emphasizing the need for thoughtful integration to maximize educational impact and underscoring the necessity for higher education institutions to remain agile and informed as they navigate the evolving landscape of AI technologies.

Further, the episode examines AI tools like Claude and Gemini, showcasing their potential to enhance both academic and personal productivity. Claude's artifact feature is highlighted for its ability to organize AI-generated content, providing a structured approach to integrating AI solutions in educational tasks. Meanwhile, Gemini's prowess in tech support and everyday problem-solving is noted as a testament to AI's versatility. The hosts conclude with advice for students entering the job market, encouraging them to leverage their AI skills to gain a competitive edge in their careers.

Takeaways:
--- Generative AI tools can substantially reduce the time spent on routine tasks like email writing.
--- Higher education professionals can leverage AI for tasks such as creating meeting agendas and recommendations.
--- Using AI requires a shift in how tasks are approached, focusing more on content creation.
--- Schools may need to decide which AI tools to support based on their specific needs.
--- AI tools like Microsoft Copilot can assist in writing by offering different styles and tones.
--- Experimentation with AI in professional settings can lead to significant productivity improvements.
The AI Goes to College podcast is a companion to the AI Goes to College newsletter ([https://aigoestocollege.substack.com/](https://aigoestocollege.substack.com/)). Both are available at [https://www.aigoestocollege.com/](https://www.aigoestocollege.com/). Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com. You can also leave a comment at [https://www.aigoestocollege.com/](https://www.aigoestocollege.com/). ... Read more

01 Oct 2024

53:38


#14

Is ChatGPT Bull ... and How to Improve Communication with AI

Is ChatGPT bull...? Maybe not. In this episode, Rob and Craig talk about how generative AI can be used to improve communication, give their opinions of a recent article claiming that ChatGPT is bull$hit, and discuss why you need an AI policy.

Key Takeaways:
--- AI can be used to improve written communication, but not if you just ask AI to crank out the message. You have to work WITH AI. Rob gives an interesting example of how AI was used to write a difficult message. The key is to co-produce with AI, which results in better outcomes than if either the human or the AI worked alone.
--- Is ChatGPT bull$hit? A recent article in Ethics and Information Technology claims that ChatGPT (and generative AI more generally) is bull$hit. Craig and Rob aren't so sure, although the authors make some reasonable points.
--- You need an AI policy, even if your college doesn't have one yet. Not only does a policy help you manage risk, a clear policy is necessary to help students understand what is, and is not, acceptable. Otherwise, students are flying blind.

Hicks, M.T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(38). [https://doi.org/10.1007/s10676-024-09775-5](https://doi.org/10.1007/s10676-024-09775-5) [https://link.springer.com/article/10.1007/s10676-024-09775-5](https://link.springer.com/article/10.1007/s10676-024-09775-5)

Mentioned in this episode: AI Goes to College Newsletter ... Read more

29 Jul 2024

34:12


#13

AI in higher ed: Is it time to rethink grading?

In this episode of AI Goes to College, Craig and Rob dig into the transformative impact of artificial intelligence on higher education. They explore three critical areas where AI is reshaping the academic landscape, offering valuable perspectives for educators, administrators, and students alike.

The episode kicks off with a thoughtful discussion on helping students embrace a long-term view of learning in an era where AI tools make short-term solutions readily available. Craig and Rob tackle the challenges of detecting AI-assisted cheating and propose innovative approaches to course design and assessment. They emphasize the importance of aligning learning objectives with real-world skills and knowledge retention, rather than focusing solely on grades or easily automated tasks. At the end of it all, they wonder if it's time to rethink grading.

Next, the hosts examine recent developments in language models, highlighting the remarkable advancements in speed and capabilities available in Anthropic's new model, Claude 3.5 Sonnet. They introduce listeners to new features like "artifacts" that enhance the user experience and discuss the potential impacts on various academic disciplines, particularly in programming education and research methodologies. This segment offers a balanced view of the exciting possibilities and the ethical considerations surrounding these powerful tools.

The final portion of the episode turns to the complex world of copyright issues surrounding AI-generated content. Craig and Rob break down the ongoing debate around web scraping practices for AI training data and explore the potential legal and ethical implications for AI users in academic settings. They stress the importance of critical thinking when utilizing AI tools and provide practical advice for educators and students on responsible AI use.

Throughout the episode, the hosts share personal insights, anecdotes from their teaching experiences, and references to current research and industry developments. They maintain a forward-thinking yet grounded approach, acknowledging the uncertainties in this rapidly evolving field while offering actionable strategies for navigating the AI revolution in higher education.

This episode is essential listening for anyone involved in or interested in the future of education. It equips listeners with the knowledge and perspectives needed to adapt to and thrive in an AI-enhanced academic environment. Craig and Rob's engaging dialogue not only informs but also inspires listeners to actively participate in shaping the future of education in the age of AI. Whether you're a seasoned educator, a curious student, or an education technology enthusiast, this episode of AI Goes to College provides valuable insights and sparks important conversations about the intersection of AI and higher education.

Mentioned in this episode: AI Goes to College Newsletter ... Read more

15 Jul 2024

45:24


#12

Encouraging ethical use, AI friction and why you might be the problem

We're in an odd situation with AI: many ethical students are afraid to use it, and unethical students use it ... unethically. Rob and Craig discuss this dilemma and what we can do about it. They also cover the concept of AI friction and how Apple's recent moves will address this underappreciated barrier to AI use. Other topics include:
--- Which AI chatbot is "best" at the moment
--- Using AI to supplement you, not replace you
--- Why you might be using AI wrong
--- Active learning with AI
--- and more!

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like us to cover? Email Craig at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/. ... Read more

01 Jul 2024

39:10


#11

The problem with prompt engineering, GPT-4o, and AI hysteria

In this episode of "AI Goes to College," Rob and Craig discuss ---the implications of OpenAI's GPT-4 Omni (GPT-4o) ---AI fatigue and hysteria, and ---why prompt design is better than prompt engineering. Craig and Rob explore the implications of GPT-4 Omni's enhanced capabilities, including faster processing, larger context windows, improved voice capabilities, and an expanded feature set available to all users for free. They emphasize the importance of exploring and experimenting with these new technologies, highlighting the transition from prompt engineering to prompt design for a more user-friendly approach. They discuss how prompt design allows for a more iterative and creative process, stressing the need for stakeholders to adapt and incorporate generative AI tools effectively, both in teaching and administrative roles within higher education. Through their conversation, Rob and Craig address the hype and hysteria surrounding generative AI, encouraging listeners to approach these tools with curiosity and a willingness to adapt. They advocate for a balanced perspective, acknowledging both the benefits and risks associated with integrating AI technologies in educational settings. Rob suggests creating a prompt library to capture successful prompts and outputs, facilitating efficiency and consistency in utilizing generative AI tools for various tasks. They also emphasize the importance of listening to stakeholders and gathering feedback to inform effective implementation strategies. Rob and Craig conclude the episode by underscoring the value of continuous exploration, experimentation, and playfulness with new technologies, encouraging listeners to share their experiences and creativity in utilizing generative AI effectively. To stay updated on the latest trends in generative AI and its impact on higher education, listeners are invited to subscribe to the "AI Goes to College" newsletter and watch informative videos on the AI Goes TO College YouTube channel. The hosts invite feedback and suggestions for future episodes, fostering a dynamic and interactive community interested in leveraging AI technologies for educational innovation. Overall, this episode provides valuable insights into navigating the evolving landscape of generative AI in higher education, empowering educators and administrators to adopt a proactive and adaptable approach towards leveraging AI tools for enhanced teaching and administrative practices. --- The AI Goes to College podcast is a companion to the AI Goes to College newsletter ( [https://aigoestocollege.substack.com/] (https://aigoestocollege.substack.com/) ). Both are available athttps://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like us to cover? Email Craig atcraig@AIGoesToCollege.com.You can also leave a comment at [https://www.aigoestocollege.com/] (https://www.aigoestocollege.com/) . ... Read more

28 May 2024

24:44


#10

The future of generative AI, a great Chrome extension, and using AI to examine an exam

In this episode, Craig discusses:
--- My vision of the future of generative AI
--- Harpa - a great AI Chrome extension
--- Using Claude to examine an exam
--- Should higher ed fear AI?

The highlights of the newsletter are available as a podcast, which is also called AI Goes to College. You can subscribe to the newsletter and the podcast at https://www.aigoestocollege.com/. The newsletter is also available on Substack: https://aigoestocollege.substack.com/. ... Read more

02 May 2024

15:58


#9

Ethics of Human-AI Co-Production (announcement)

On Tuesday, April 30 at 5 P.M. Eastern time, I'll be giving a talk on the ethics of human-AI co-production. This is part of an annual series called the Marbury Ethics Lectures. I'm quite honored to be the speaker; two years ago, the speaker was then-Louisiana Governor John Bel Edwards. Anyone in the area is welcome to attend in person, but the event will also be live streamed: [https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d](https://mediasite.latech.edu/Mediasite/Play/8aa374384ff541bc8d76dcf98be7aab91d) I'd love it if you could join us!

The AI Goes to College podcast is a companion to the AI Goes to College newsletter ([https://aigoestocollege.substack.com/](https://aigoestocollege.substack.com/)). Both are available at [https://www.aigoestocollege.com/](https://www.aigoestocollege.com/). Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at [https://www.aigoestocollege.com/](https://www.aigoestocollege.com/).

Mentioned in this episode: AI Goes to College Newsletter ... Read more

29 Apr 2024

02:22


#8

Does AI hurt critical thinking and new tools, good and bad

In this episode of AI Goes to College, Craig dives deep into the world of AI in education, exploring new tools and models that could revolutionize the way we approach learning and teaching. Join Craig as he shares insights from testing various AI models and introduces a groundbreaking tool called The Curricula.

In this episode, Craig talks about:
--- A terrible new anti-AI detection "tool"
--- Does AI hurt critical thinking and academic performance?
--- How not to talk about AI in education
--- Claude 3 takes the lead
--- Using Google Docs with Gemini
--- Claude 3 Haiku - the best combination of speed and performance?
--- The Curricula - a glimpse of what AI can be

Anti-AI detection tool
There's a terrible new tool that supposedly helps students get around AI detection systems (which don't work well, by the way). Faculty, you have nothing to worry about here. The tool is a joke.

Does AI hurt critical thinking and academic performance?
A recent article seems to provide evidence that AI is harmful to critical thinking and academic performance. But, as is often the case, online commenters get it wrong. The paper doesn't show this at all.

How not to talk about AI in education
An author affiliated with the London School of Economics wrote an interesting article about how NOT to talk about AI in education. Craig comments on what the article got wrong (in his view).

Using Google Docs with Gemini
There are some interesting integrations between some Google tools, including Docs and Gemini. It works ... OK, but it's a good start.

Claude 3 Haiku
If you haven't checked out Claude 3 Haiku, you should. It may offer the best combination of performance and speed on the market.

The Curricula
The Curricula is an amazing new tool that creates comprehensive learning guides for virtually any topic. Check it out at https://www.thecurricula.com/.

Listen to the full episode for the details. To see screenshots and more, check out Issue #6 of the AI Goes to College newsletter at https://aigoestocollege.substack.com/.

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/. ... Read more

10 Apr 2024

29:30


#7

Why AI needs a human in the loop and a useful slide generator

In this week's episode of AI Goes to College, Craig covers a range of topics related to generative AI and its impact on higher education. Here are the key highlights from the episode:
--- Importance of Human Review: Craig shares a humorous yet enlightening experience with generative AI that emphasizes the crucial role of human review in ensuring the appropriateness and accuracy of AI-generated content.
--- New Features for ChatGPT Teams: The latest developments in ChatGPT Teams, including improved chat sharing, GPT store functionality, and image generation options, offer exciting possibilities for collaborative AI use.
--- SlideSpeak: Craig explores SlideSpeak, a promising tool for quickly creating slide decks from documents using AI. While it's not yet perfect, it shows great potential for streamlining the presentation preparation process.

Now, here are the key takeaways for you:
1️⃣ Human Review is Crucial: Always ensure that AI-generated content goes through human review, especially for important and public-facing materials.
2️⃣ Collaborative AI: New features in ChatGPT Teams foster better collaboration and creativity in AI-powered conversations and content creation.
3️⃣ Streamlining Presentations: Tools like SlideSpeak show promise for simplifying and expediting the process of creating slide decks, though they may need some manual adjustments for perfection.

Tune in to the full episode for more insights and the latest developments in generative AI! And don't forget to subscribe to the AI Goes to College newsletter for detailed insights and practical tips. Let's keep embracing the future of AI in higher education together!

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/. ... Read more

05 Apr 2024

20:51


#6

Why AI doesn't follow length instructions, the best $40 you can spend, and more

This week's episode covers:
--- Generative AI's paywall problem
--- Anthropic releases new Claude models that beat GPT
--- Google has a bad week
--- Why generative AI doesn't follow length instructions (and what you can do about it)
--- The best $40 you can spend on generative AI
--- More Useful Things releases some interesting AI resources
--- Chain-of-thought versus few-shot prompting (illustrated with a brief example below)

--- AI-generated description ---

Welcome to AI Goes to College, where we navigate the ever-changing world of generative AI in higher education. In this thought-provoking episode, I, your host, Dr. Craig Van Slyke, delve into the latest developments in the realm of generative AI, from the paywall problem to Anthropic's groundbreaking Claude models that outperform GPT. This episode sheds light on the ethical considerations and challenges facing academic researchers when working with biased training data and the potential limitations in reflecting findings from behind-the-paywall academic journals.

But it's not all about the challenges. I also uncover the exceptional potential of Anthropic's new Claude models and the significance of competition in driving innovation and performance in the AI landscape. You'll be immersed in the intriguing discussion about Google's stumbling block in implementing ethical guardrails for generative AI, a pivotal reminder that human oversight remains crucial in the current stage of AI utilization. And let's not forget about practical tips: I share game-changing insights on prompting generative AI, covering the nuances between few-shot and chain-of-thought prompting, and reveal the best $40 investment for enhancing productivity in your AI endeavors.

The conversation doesn't end there. I invite you to explore the transformative applications of generative AI in education through a fascinating interview with an industry expert. This episode promises to reshape your perspective on the potential and challenges of generative AI in higher education and leave you equipped with valuable knowledge and practical strategies for navigating this dynamic landscape. Join us as we uncover the profound impact of generative AI on academic research, and gain invaluable insights that will shape your approach to utilizing AI effectively for success in the educational sphere. If you find this episode insightful, don't miss the chance to subscribe to the AI Goes to College newsletter for further invaluable resources and updates. Let's embark on the journey to embracing and leveraging generative AI's potential in higher education. ... Read more
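For readers who haven't met the two prompting styles contrasted in this episode, here is a generic illustration. The prompts are invented for this sketch and are not taken from the episode; few-shot prompting teaches by example, while chain-of-thought prompting asks the model to reason out loud before answering:

```python
# Two prompting styles, illustrated side by side. Both prompts are invented
# for this sketch; they are not examples from the episode.

# Few-shot prompting: show the model a couple of worked examples of the task,
# then leave the last one for it to complete in the same pattern.
few_shot_prompt = """Classify each course comment as POSITIVE or NEGATIVE.
Comment: "The weekly quizzes kept me on track." -> POSITIVE
Comment: "Instructions for the project were confusing." -> NEGATIVE
Comment: "Office hours were genuinely helpful." ->"""

# Chain-of-thought prompting: no examples; instead, ask the model to reason
# step by step before it commits to a final answer.
chain_of_thought_prompt = """A course has 3 exams worth 20% each and a project worth 40%.
A student scored 70, 80, and 90 on the exams and 85 on the project.
Work through the weighted average step by step, then state the final grade."""

print(few_shot_prompt)
print(chain_of_thought_prompt)
```

The design difference is the point: few-shot works well when the task has a clear pattern to imitate, while chain-of-thought tends to help on multi-step reasoning where the path to the answer matters as much as the answer itself.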

22 Mar 2024

25:49


#5

Empowering Students and Faculty with Generative AI: An Interview with Dr. Rob Crossler

Generative AI is transforming education, not just for learning, but also for performing administrative tasks. In this special episode of AI Goes to College, Craig and Dr. Rob Crossler of Washington State University talk about how generative AI can help students learn and help faculty streamline those pesky administrative tasks that most of us find so irritating. Rob and Craig dig into a wide array of topics, including the early adoption of technology and the risks it brings, the need to experiment and accept occasional failure, and our ethical obligation to help students learn to use generative AI effectively and ethically. We also discuss the AI digital divide and its potential impacts.

Here are just a few of the highlights:
--- Rob shares an example of how generative AI helped with a challenging administrative task.
--- Rob explains how some students avoid using AI due to fears of being accused of cheating.
--- Rob and Craig discuss the need to encourage experimentation and accept failure.
--- Craig questions whether students understand the boundaries around ethical generative AI use.
--- Rob emphasizes the need to help students gain expertise with generative AI in order to prepare them for the evolving job market.
--- Rob talks about how he uses generative AI to encourage critical thinking among his students.

The AI Goes to College podcast is a companion to the AI Goes to College newsletter (https://aigoestocollege.substack.com/). Both are available at https://www.aigoestocollege.com/. Do you have comments on this episode or topics that you'd like Craig to cover? Email him at craig@AIGoesToCollege.com. You can also leave a comment at https://www.aigoestocollege.com/. ... Read more

11 Mar 2024

40:00


#4

Detecting fake answers, Zoom meeting magic, and Gemini is pretty awesome

Welcome to AI Goes to College! In this episode, your host, Dr. Craig Van Slyke, invites you to explore the latest developments in generative AI and uncover practical insights to navigate the changing landscape of higher education. Discover key takeaways from Dr. Van Slyke's firsthand experiences with Google's Gemini and Zoom's AI Companion, as he shares how these innovative tools have enhanced his productivity and efficiency. Gain valuable insights into Google's Gemini, a powerful AI extension with the potential to revolutionize administrative tasks in higher education. He delves into the finer aspects of Gemini's performance, extensions, and its implications for the academic community. But that's not all: explore the fascinating potential of ChatGPT's new memory management features and get a sneak peek into OpenAI's impressive video generator, Sora. Dr. Van Slyke provides a candid overview of these cutting-edge AI advancements and their implications for educational content creation and engagement. Additionally, you'll receive expert guidance on recognizing AI-generated text, equipping you with the tools to discern authentic student responses from those generated by AI. Uncover valuable tips and strategies to detect and address inappropriate AI use in academic assignments, a crucial aspect of today's educational landscape. Join Dr. Craig Van Slyke in this enlightening episode as he navigates the dynamic intersection of generative AI and higher education, providing invaluable insights and actionable strategies for educators and professionals. Tune in to gain a deeper understanding of the transformative role of generative AI in higher education, and learn how to effectively leverage these innovative tools in your academic pursuits. Embrace the future of AI in education and stay ahead of the curve with AI Goes to College! To subscribe to the AI Goes to College newsletter, go to [AIGoesToCollege.com/newsletter](https://www.aigoestocollege.com).

--- Transcript ---

Craig [00:00:14]: Welcome to AI Goes to College, the podcast that helps higher education professionals navigate the changes brought on by generative AI. I'm your host, doctor Craig Van Slyke. The podcast is a companion to the AI Goes to College newsletter. You can sign up for the newsletter at ai goes to college.com/newsletter. This week's episode covers my impressions of Google's Gemini. Here's a spoiler. I really like it. An overview of an awesome Zoom feature that a lot of people don't know about.

Craig [00:00:47]: A new memory management feature that's coming to ChatGPT soon, I hope. OpenAI's scary good video generator, and I'll close with insights on how to recognize AI-generated text. Lately, I've found myself using Google's Gemini pretty frequently. I just gave a talk, actually, I'm about to give a talk. By the time you listen to this, I will have given a talk on the perils and promise of generative AI at the University of Louisiana System's For Our Future conference. I wanted to include some specific uses of generative AI for administrative tasks. I have a lot of use cases for academic tasks, but I wanted something more for the staff side of the house. Gemini was a huge help.

Craig [00:01:33]: It helped me brainstorm a lot of useful examples, and then I found one I wanted to dial in on, and it really helped quite a bit with that. I didn't do a side-by-side comparison, but Gemini's performance felt pretty similar to ChatGPT's.
By the way, I use Gemini Advanced, which is a subscription service, and it's kind of Google's equivalent to ChatGPT 4. One of the most promising aspects of Gemini is that it has some extensions that may prove really useful in the long run. The extensions will let you do a lot of things, for example, asking questions of Gmail messages and Google Drive documents. There's also a YouTube extension that looks interesting. My initial testing yielded kinda mixed results. It did well in one test, but not so well in another.

Craig [00:02:22]: I'll try to do a longer little blurb on this later. The Gmail extension did a pretty good job of summarizing recent conversations. So I don't know. I think it's something to keep an eye on. So currently, there are extensions for a lot of the Google Suite: Gmail, Google Docs, Google Drive, Google Flights, Google Hotels, Google Maps, and YouTube. So this is certainly worth keeping an eye on. And if you weren't aware, Gemini is Google's replacement for Bard. They rolled out some new underlying models and did a rebrand a few weeks ago.

Craig [00:02:54]: So, anyway, if you haven't checked it out in a while, I think it's probably worth doing. One of my favorite new-ish AI tools is Zoom's meeting summary. This thing is really awesome. It actually came out as part of their AI Companion, which they released last fall, but didn't really get a lot of press that I saw. And the AI Companion does a number of things. I may do a longer, deeper dive on that later. But the killer use for it right now for me is to summarize meetings. It is just awesome.

Craig [00:03:25]: If you're like me, you might get caught up in the discussions that go on during a meeting, and you forget to take some notes. And you go back a few days later to start working on whatever project you were involved with in that meeting, and you've forgotten some of the details. So this happens to me a lot. I'm kind of sad to admit, but Zoom's AI Companion can help with this tremendously. Just click on start summary, and AI Companion will do the rest. A little while after the meeting, if you're the host, you'll receive an email with the summary, and you can just send that to the other attendees. The summaries are also available in your Zoom dashboard, and they're easy to edit or share from there. I find the summaries to be surprisingly good.
One particularly interesting capability is the ability to ask questions during the meeting. So you go over to the AI Companion, ask questions, and the AI Companion will answer based on the transcript of what it's done so far, or what it's kinda transcribed so far in the meeting. So if you come in late and you wanna catch up, you don't have to say, hey, can somebody catch me up with what we've done so far? You can just go over to AI Companion and ask the same thing. Now I haven't tried this yet, but I'm going to.

Craig [00:05:40]: In the meantime, you can check out the features. There's a link in the show notes and in the newsletter. By the way, you can subscribe to the newsletter by going to aigoestocollege.com/newsletter. There were a couple of interesting news items, kind of developments over the last week or so that I wanted to bring to your attention. One that has a lot of potential is ChatGPT's new memory management features. So soon, at least I hope it's soon, ChatGPT will be able to remember things across conversations. You'll be able to ask ChatGPT to remember specific things, or it'll learn over time kind of on its own, and its memory will get better over time. Unfortunately, very few users have access to this new feature right now.

Craig [00:06:30]: I don't. I'm refreshing my browser frequently, hoping I get it soon. But OpenAI promised to share plans for a full rollout soon. I don't know when that is, but soon. So I'm basing a lot of what I'm gonna say right now on OpenAI's blog post that announced the feature, and I'll link to that in the show notes, or you can check out the newsletter. So, for example, in your conversations, you might have told ChatGPT that you like a certain tone, maybe a conversational casual tone, maybe a more formal or academic tone in your emails. In the future, ChatGPT will craft messages based on that tone. Let's say you teach management.

Craig [00:07:09]: Once ChatGPT knows that and remembers it, when you brainstorm assignment ideas, it'll give you recommendations for management topics, not, you know, English topics or philosophy topics. If you've explained to ChatGPT that you'd like to include something like reflection questions in your assignments, it'll do so automatically in the future. And I'm sure there are gonna be a lot of other really good use cases for this somewhere down the road. There's gonna be a personalization setting that allows you to turn memory on and off. That'll be useful when you're trying to do some task that's beyond what you normally do, and you don't wanna mess up ChatGPT's memory. Another cool feature is temporary chat. The temporary chat doesn't use memory, and it unfortunately also won't appear in your chat history. I think that might be a little bit of a problem.
Craig [00:07:57]: Seems to me that memory is the next logical step from custom instructions, which pro users are able to use. Custom instructions let you give ChatGPT persistent instructions that apply to all conversations. So, for example, one of my custom instructions is: respond as a very knowledgeable, trusted adviser and assistant. Responses should be fairly detailed. I would like ChatGPT to respond as a kind but honest colleague who is not afraid to provide useful critiques. Of course, I do have some privacy concerns about the memory feature. We're gonna need to figure some of those out. You need to be cautious about anything you put into generative AI regardless of this memory feature.

Craig [00:08:41]: Your school may have policies that restrict what you can put in. Best bet is just to assume that whatever you put into a generative AI tool is gonna be used for training the models down the road. So I don't know. Wanna keep an eye on that. I think it's gonna be a really interesting feature that could improve performance quite a bit. So I'll give updates. As soon as I get access to the memory feature, I'll give you a full review. The other thing that came out recently, kind of stole some of Gemini's thunder, was OpenAI's Sora.

Craig [00:09:10]: That's s o r a. It creates videos based on prompts. And there are a number of tools out there that will produce videos based on prompts, but, you know, they're okay at best. I've tried a couple of them and have abandoned them pretty quickly. Sora is scary. It is absolutely amazing what that tool will produce given some pretty simple prompts. So Sora creates these realistic and imaginative scenes from text instructions. That's from Sora's website.

Craig [00:09:42]: And the prompts can be pretty simple. So here's an example of a prompt that they had for a very professional looking video. Here's the whole prompt: historical footage of California during the gold rush. And there's a link to this video in the show notes, or you can go to Sora's website, which is just openai.com/sora, s o r a. And you can check out the video there. It's probably not as good as what somebody could do who is really a professional cinematographer, but it's really good. When it's released to the public, I can see it being used for a lot of kind of B-roll footage, maybe not the part of the main story.

Craig [00:10:22]: A lot of news organizations like to use B-roll. B-roll is just kind of this generic footage that really isn't part of an interview or the actual story that's being covered. It might be useful for spicing up online learning materials or for creating recruiting videos or some things like that. I don't know. We're gonna have to see. It's not available to the public yet. Although the videos are pretty fantastic, there are some odd things about a few of them. There's a great video of a woman walking down a Tokyo street.

Craig [00:10:54]: Every digital person in the video seems to have mostly the same cadence to their walk. It almost looks choreographed. They're not perfectly in sync, but it's close enough to make everything seem a little bit odd and a little bit artificial. And if you look closely enough, you can see little oddities in a lot of the videos, but you kinda have to look for them, at least I did. Right now, Sora videos are limited to 1 minute, but that'll probably change in the future. One of the things I really like about the Sora website is that OpenAI includes some failures as well. There's a video of a guy running the wrong way on a treadmill, and there's also a kind of disturbing video of gray wolf pups that seem to emerge from a single pup. It's a little odd.

Craig [00:11:42]: Fascinating, though. So I can see this being used a lot for training videos. I think it could maybe enhance the engagement capabilities of some online learning materials, but I can also see Sora and some similar tools that are likely to emerge as being a time sink. It's intriguing to create cool new images and videos to add to your website, your lectures, and presentations. But I can see myself wasting a lot of time on something that, at the end of the day, may not make much difference. I just did this.
Was trying to get DALL-E and some other AI tools to produce an image for the presentation I'm gonna give or just gave, depending upon when you're listening. And, you know, it got to where it was kind of okay.

Craig [00:12:30]: But, eventually, I took about 5 or 10 minutes and just went into a drawing tool and drew one that actually was better at making the point I wanted to make. So kind of beware of the rabbit hole of AI. It's real, and you can really waste a lot of time. Although, I do have to say it's kind of fun. And there's nothing wrong with wasting a little bit of time, not really wasting, but learning the capabilities of the tool. Alright. Here is my tip of the week. If you give any at-home assignments in your courses, you've probably received responses that were generated by a generative AI tool.

Craig [00:13:07]: And if you haven't, yeah, you probably will soon. It's just the way things are now. AI detectors do not work reliably, although I think they may be getting a little bit better. So what can you do? Well, the best approach, and this is my opinion and that of a lot of other experts, is to modify your assignments to make it harder for students just to cheat with generative AI. I'll write quite a bit about this in an upcoming newsletter edition, and I'll talk about that here on the podcast as well. But in the meantime, getting better about sniffing out generative AI written text is probably a good thing to do. The first thing I suggest is to run your assignments through 1 or 2 generative AI tools. Do this for several assignments, and you'll start to see some common characteristics of generative AI responses.

Craig [00:13:57]: Look. Let's face it. A lot of the students who use AI tools inappropriately are lazy, and they'll just copy and paste the answer into Word or into the response section of your learning management system or whatever. They're not gonna work very hard to try to mask the generative AI use. If they were willing to work that hard, maybe they'd use generative AI more appropriately. So if you know a little bit about the tells, the indicators of generative AI text, it can be useful in kind of correcting those students. I really encourage you to go to the newsletter, ai goes to college.com/newsletter. You can sign up there and subscribe, because a lot of what I'm gonna tell you is kind of hard to explain, but it's pretty clear once you see it.

Craig [00:14:44]: So go to the newsletter. The first thing is that generative AI has kind of a standard way that it formats longer responses. It goes back to the fact that a lot of these tools use something called markdown language to format the more complex responses. Markdown is a markup language, I know that's kind of confusing, that uses symbols to allow formatted text using a plain text editor rather than a normal word processor. I use markdown to create the newsletter. Because generative AI systems often use markdown, they tend to format text in kind of a limited number of ways. For example, generative AI tools love numbered or bulleted lists with bold-faced headings, often with details set off with a colon.

Craig [00:15:31]: So it'll be bullet point, bold face, colon. It doesn't always do that, but it's often something like that. Like I said, I put a couple of examples in the newsletter, so you might wanna check that out. So one of the first clues for me is if I see something that's formatted in that way, I start to get really suspicious.
Like, it's a reasonable way to format things, and if you use markdown language, it's a pretty good way to format things. But I'm guessing, unless you're maybe in computer science or information systems or something like that, not a lot of your students are using markdown language. So when I see this kind of formatting, I start to get a little bit suspicious. The next tell is the complexity of the answer.

Craig [00:16:19]: In my principles of information systems class, the online assignments are really simple. They're just intended to get students to think about the material before the lectures or to reinforce something from a lecture. So I expect 3 or 4 sentence answers. Maybe longer ones for some of the assignments, but usually they're pretty brief. Well, when I get a really long detailed response, for example, I've got an assignment where I just say, it's along the lines of, how did you use Web 2.0 technologies during COVID for your schoolwork? What technologies did you use? Which worked well and which were challenging? Well, if you put that into ChatGPT, you get this really nice numbered and bulleted list that's very extensive, and it's quite a good answer in a lot of ways. But it's way too long, it's way too detailed. And so if I saw an answer that was like that, I'm pretty sure, not just pretty sure, I'm sure the student was using generative AI. And if you look at the answer that's in the newsletter, you can see that the answer is very impersonal.

Craig [00:17:33]: The true answers say things like my school, or I really liked, or I hated this tool or that tool. Sometimes they'll crack on their teachers a little bit or on their schools. The generative AI response is very cold and very factual. And then generative AI likes to use kind of bigger words. In the answer that I put in the newsletter, it uses socioeconomic, which, yeah, students know that word maybe. But how many of them are gonna use it? Continuity of instruction. I've never had a student say, I'm concerned about continuity of instruction. That kind of language is a pretty huge indicator that somebody's using generative AI. Of course, clever students who are industrious can take what generative AI produces and put it in their own words. Craig... ... Read more
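The formatting "tell" Craig describes in the transcript, bulleted or numbered lists with bold headings set off by colons, is simple enough to check for mechanically. The Python sketch below illustrates that single heuristic; the function name and the scoring are our own invention, not a tool from the episode, and, in keeping with the episode's warning about AI detectors, a high score should only prompt a closer look, never serve as proof:

```python
# Illustration of the "bullet + bold heading" formatting tell described above.
# This is a heuristic sketch, not a reliable detector: the episode itself warns
# that automated AI detection should never be trusted blindly.
import re

# Matches lines like: "1. **Video Conferencing:** Zoom kept classes running..."
AI_STYLE_BULLET = re.compile(r"^\s*(?:[-*]|\d+\.)\s+\*\*.+?\*\*")

def markdown_tell_score(text: str) -> float:
    """Return the fraction of non-empty lines that look like bold-heading bullets."""
    lines = [line for line in text.splitlines() if line.strip()]
    if not lines:
        return 0.0
    hits = sum(1 for line in lines if AI_STYLE_BULLET.match(line))
    return hits / len(lines)

sample = """Here are the technologies I used during COVID:
1. **Video Conferencing:** Zoom enabled continuity of instruction.
2. **Collaboration Tools:** Google Docs supported group work.
"""
print(f"{markdown_tell_score(sample):.0%} of lines match the pattern")
```

Even then, the score only flags formatting; as the episode notes, tone, length, and vocabulary are just as telling, and none of it is conclusive.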

28 Feb 2024

23:36


#3

Perplexity.ai, a mini-rant, and a successful experiment

In this episode, Craig has a mini-rant about misleading click-bait headlines, discusses two recent generative AI surveys, gives the rundown on Google's rebrand from Bard to Gemini and on Perplexity.ai, and shares a modest experiment in redesigning an assignment to prevent generative AI academic dishonesty (which is a fancy way to say cheating). More details are available at https://www.aigoestocollege.com/p/newsletter/, where you can subscribe to the AI Goes to College newsletter. Contact Craig at https://www.aigoestocollege.com/ or craig@EthicalAIUse.com.

--- Transcript ---

Craig [00:00:10]: Welcome to episode number 2 of AI Goes to College, the podcast that helps higher ed professionals try to figure out what's going on with generative AI. I'm your host, doctor Craig Van Slyke. So this week, I give you a mini rant. It's not a full rant, but a mini rant about misleading headlines, talk about Google's release of a new model and its big rebrand from Bard to Gemini. My favorite part is gonna be when I talk about Perplexity dot AI, which is generating a lot of interest right now, and I think it's tailor-made for higher ed, even though I don't think that they're restricting the audience to higher ed, and some promising results from a little experiment I did in redesigning an assignment. I'm gonna hit the highlights in this episode of the podcast. But if you want the full details, go to AI goes to college.com and click on the newsletter link and subscribe to my newsletter. A lot more details, screenshots, that sort of thing there.

Craig [00:01:09]: So here's my rant. Cengage, and if you're in higher ed, you know who Cengage is, they call themselves course material publishers, just released its 2023 Digital Learning Pulse survey. As far as I can tell, this is the 1st time the survey gathered data about AI. The results are pretty interesting. It says only 23% of faculty at 4 year schools thought that their institutions were prepared for AI related changes, and that number was only 16% for faculty at 2 year schools. 41% of faculty across the 2 different types of institutions thought that generative AI would bring considerable or massive amounts of change to their institutions. What bothers me about this survey is really not the survey itself, but how it's being reported. So the headline of the article from which I kind of learned about this survey read, Survey reveals only 16% of faculty is ready for Gen AI in higher ed, which is not at all what the survey was about.

Craig [00:02:22]: The survey, at least the part of it I'm talking about, asked 2 generative AI related questions: do you think your institution is prepared for AI related changes, and how much will AI tools change your institution over the next 5 years? So first of all, that really isn't specific to generative AI, although I think that's what most people would interpret AI as. The title of the article that led me to the survey said that faculty aren't ready. Well, that's not what the survey asked about. It didn't ask if the faculty were ready, although that would have been a good thing to ask. It asked if they thought their institutions were ready. So I want to caution all of you to do something you already know you should be doing.

Craig [00:03:09]: Read these clickbait headlines, and there are a lot of them. Read the articles with a critical eye.
If it's something that's important, if it's something that you're going to try to rely on to make any sort of a decision or to form your attitudes, take the time to look at the underlying data. Don't just look at how that particular author is putting the data. Look at the data yourself. All of that being said, I think we're probably not especially well prepared collectively for generative AI, and that's not a big surprise. It's still relatively new, and it's changing very rapidly. So we'll see.

Craig [00:03:48]: Speaking of changes, Google Bard is now Google Gemini, and it's not just a rebrand. So Google also, as part of the rebrand, announced that they have some new models. So with Gemini, formerly Bard, which you can find at gemini.google.com, there are 2 versions at the moment, Gemini and Gemini Advanced, and this is kind of the same as ChatGPT and ChatGPT Pro. The nomenclature is a little bit confusing. Gemini is a family of models. Ultra is the big dog high-performance model. Pro is kind of the regular model, and Nano is a light version optimized for efficiency, which I think signals that Google is gonna make a push into AI on mobile devices.

Craig [00:04:36]: I was pretty confused about the names and what models there were and that sort of thing. So I asked Gemini to explain it to me. The details of that conversation are in the newsletter, which is available at AI goes to college.com/newsletter. Gemini is kind of like GPT 3.5. It's fine for most things. If Gemini isn't up to the task, try Gemini Advanced, which is kind of like GPT 4. So far, I've been pretty happy with my use of Gemini Advanced. It did a good job of helping me unravel the various names and models related to Gemini, and I've played with it for some course-related tasks, and it's performed pretty well.

Craig [00:05:17]: I'm not sure I'd give up ChatGPT Pro for Gemini Advanced, but it's a nice option, and I'm playing around with all of them. So there's no big surprise. You know, your experiences may vary, but I would suggest that you try it out for yourself. If you do, I'd love to hear your impressions. You can send those to me at craig. That's craig@ethicalaiuse.com. There was another generative AI education survey that was out in the news recently. A higher ed think tank, and I don't even know what that means, HEPI, released a policy note that included some results from a survey of 1250 undergraduate students in the UK. According to that survey, 53% of students used GAI to help with their studies, but only 5% said that they were likely to use AI to cheat.

Craig [00:06:13]: I don't doubt that the statistics accurately reflected the survey responses, but I'm pretty skeptical about both of these numbers. 53% seems pretty high for students that have actually used generative AI to help them with their studies, and 5% seems pretty low for those who said that they might use AI to cheat, but I don't know. I think there are a lot more students that are either using or going to use generative AI kind of at the edges of ethical use. So I still think that the uptake of generative AI among students is lower than some of us might think or some people might think, especially if we consider regular use. So they may have played around with it, but I just don't think many students are using it regularly yet. One of the reasons I like this particular little article, which is linked in the newsletter, is that it did discuss the digital divide problem.
The digital divide is real, and it has real consequences for a lot of aspects of society, including higher ed. We need to keep chipping away at the digital divide if we truly want a just society.

Craig [00:07:27]: And generative AI is just going to widen the digital divide. More details in the newsletter, which you can access at aigoestocollege.com/newsletter. It feels like it ought to be a drinking game: how many times will I say aigoestocollege.com/newsletter? So let's get to the resource of the week. There's been a lot of online chatter about Perplexity.ai. The gist of all of this talk is that Perplexity is becoming kind of a go-to generative AI tool when you want to uncover sources. There's a lot of hype that says this is going to be the new Google. I'm not so sure about that, but it is a very useful tool. Exactly what it is is a little unclear at first.

Craig [00:08:16]: First, I'm gonna read you verbatim what the About page says: "Perplexity was founded on the belief that searching for information should be a straightforward, efficient experience, free from the influence of advertising-driven models. We exist because there's a clear demand for a platform that cuts through the noise of information overload, delivering precise, user-focused answers in an era where time is at a premium." I couldn't argue with any of that, but I don't know what it means. Their overview page is a little bit better, and it talks about some of the use cases for Perplexity.ai: answering questions, exploring topics in depth, organizing your library, and interacting with your data. I can personally attest that Perplexity is pretty good with the first three, but I haven't really tried it with my own files yet. Here's a problem that we have with search: if you go into Google or some other search engine, you want information, but a search engine doesn't really give you the information you want.

Craig [00:09:21]: It gives you a list of websites that may or may not include, somewhere, the information that you want. Perplexity is much more about giving you the actual information you're trying to get and telling you where it got that information from. That's a fundamental difference, and I think it could ultimately reshape how we search for information on the web, and I think that's a good thing. There are a number of things that set Perplexity apart. It's got a Copilot mode that gives you a guided search experience, which can be really helpful. What it does is ask you first whether you want to focus on particular types of resources. Right now, it's got six different categories.

Craig [00:10:09]: Well, five and then an "all." There's All, where you search across the entire Internet; Academic, and this is a big one, where it searches only in published academic papers; Writing, which doesn't really search the web but helps you generate text or chat; Wolfram Alpha, a computational knowledge engine; YouTube; and Reddit, which I think is pretty interesting. So you can go broad or you can go really narrow.

Craig [00:10:41]: Another thing it does is ask clarifying questions when it feels like it's necessary. "Feels like." That's weird. When it, somehow in its large language model brain, thinks that it's necessary. I give an example in the newsletter where I say I want to explain generative AI to a nonexpert audience.
The audience will be intelligent but won't have any background in AI or computer science. What topics would you cover? Instead of just giving me an answer (I'm using Perplexity's Copilot here), it asks: what is the main purpose of your explanation? Basic understanding, applications, or risks and benefits? So you can specify basic understanding, applications of generative AI, or the risks and benefits, or you can choose all of those, or you can provide some other sort of clarifying information.

Craig [00:11:35]: That's really useful, so it doesn't take you down as many unproductive paths. Perplexity's Copilot will even kind of show you the steps it took in generating your response, or its response rather. You can take a look at that in the newsletter. I know I'm saying that a lot, but there's a nice little screenshot there that'll give you a better idea of what I'm talking about. You can also look at the underlying sources that Perplexity used to generate its responses. For example, for the little "explain generative AI" prompt I gave it, it came up with 24 different sources along with an answer, and I can dig into any one of those 24 sources to see exactly what it was talking about.

Craig [00:12:25]: And when Perplexity gives me its answer, it gives footnotes, little numbers that refer back to those sources, so you can dig in. You can also do the normal chat thing, like asking follow-up questions. So it's really quite good. Here's one of my favorite, favorite, favorite features of Perplexity: it allows you to create what it calls collections. Collections allow you to group together different conversations (Perplexity calls conversations "threads"). One of my biggest frustrations with ChatGPT's interface is that I'll have some conversation with it and then want to go back to that topic a couple of weeks later, and I can't find that conversation because I've had 200 other conversations in the meantime. A little pro tip: you can search your conversations in the mobile app, but as far as I know, you can't do it on the web interface yet. You can always go in, search whatever keyword you're gonna search on, find the conversation, and then add to it, and it'll pop up at the top of your list on the website.

Craig [00:13:37]: I know that was kind of a muddled description, but if it's unclear, just email me at craig@ethicalaiuse.com. So these collections can be really, really useful. I was working on something for my dean recently, and I was using Perplexity. I put all of those conversations in a collection. To me, Perplexity.ai is one of the more interesting tools I've seen come out recently. If you haven't checked it out, you should.

Craig [00:14:07]: They have a free tier that you can play around with. I used it and then almost immediately paid for an annual pro subscription. So, really, I encourage you to check it out. Alright, so here's my little experiment. I'm teaching Principles of Information Systems this term, and I include some pre-class online assignments. These are simple little things; all I'm trying to do is get students to engage with the material a little bit before class. They're easy, and I'm very lenient in the grading.

Craig [00:14:41]: Basically, if you put any effort into it at all, you get full credit as long as it's not late. But, unsurprisingly, I noticed that some submissions looked suspiciously like they were generated with generative AI. The first time, I let it slide and just commented on it in class.
The second time, I required students to resubmit. I just said: I'm giving you a zero for now. Put this in your own words, and I'll give you credit. I'm teaching the same class in the upcoming spring term. We're on the quarter system, by the way.

Craig [00:15:14]: And so I started thinking about how to modify these assignments to keep students from just copying and pasting in a generative AI response. I know this is gonna sound incredibly lazy of me, but I don't wanna spend my entire quarter dealing with academic honesty reports. I'd rather just prevent the problems in the first place, and we're gonna have to do this kind of thing. We're gonna have to rethink how we do evaluations, how we assess learning, how we create our class activities. So I decided to try to modify an upcoming assignment. This is a little activity I've used for years, and it's very simple. The original assignment was: compare and contrast supply chain management systems and customer relationship management systems. Give three ways they're similar and three ways they're different.

Craig [00:16:05]: Like I said, these are kind of lame-o little activities, but you can see where I'm going with this. I want students to have to look at the two different types of enterprise systems we're talking about, so they start to get some understanding of what they're all about. So I decided to change the assignment. I'm gonna give you an overview of what I did, but all the details are in the newsletter. I basically said: Hey, I'm gonna give you a task, and then I'm gonna give you the answer that was given by generative AI. You're then going to compare that answer to the information in the textbook and briefly describe how the information from generative AI and the textbook is similar and how it's different. And then I go through and say, alright.

Craig [00:16:56]: Here's what I put into generative AI, and here's what it spit back. And then the students had to do a little bit of work. I was pretty happy with the result. Some students absolutely missed what I was asking them to do, but that's okay, because I'm not sure it was entirely clear. But the ones who got what I wanted them to do did a pretty good job of going back into the textbook, seeing what the textbook said, and then comparing that to the answer generative AI gave. Some students even went so far as to say: on page 247 of the textbook, it said this, and generative AI said that. So I was pretty happy with the results, considering I put only about 15 minutes into revising the assignment. So I'm gonna do more of these.

Craig [00:17:47]: As I said, I'm teaching the class again in the spring, so I'm gonna spend part of the break redoing some of my assignments and online activities to make it so students can't just copy and paste the question into generative AI. And I'll report on those experiments as I go through them. Alright, that's all I have for you today. I'm out of breath, and you're probably tired of listening. So I will talk to you next time. Thanks for listening to AI Goes to College. If you found this episode useful, you'll love the AI Goes to College newsletter.

Craig [00:18:28]: Each edition brings you useful tips, news, and insights that you can use to help you figure out what in the world is going on with generative AI and how it's affecting higher ed. Just go to aigoestocollege.com to sign up.
I won't try to sell you anything, and I won't spam you or share your information with anybody else. As an incentive for subscribing, I'll send you the Getting Started with Generative AI guide. Even if you're an expert with AI, you'll find the guide useful for helping your less knowledgeable colleagues.

15 Feb 2024


19:14



#2

Should you trust AI?

In the debut episode of AI Goes to College, join host Craig Van Slyke as he delves into the critical question: Should you trust AI? Drawing on his expertise in the field, Craig explores the nuanced answer to this question, shedding light on the capabilities and limitations of generative AI in various contexts. Listeners will gain valuable insights into when it's appropriate to trust AI, and how to navigate the consequences of relying on its output. Additionally, Craig reviews Consensus, a promising AI research app, sharing his firsthand experience and recommendations for its use. The episode also covers recent news items, including Arizona State University's partnership with OpenAI and EdTech firm Anthology's AI policy framework for higher education. To wrap up, Craig shares his top choice for a paid generative AI service, highlighting the unique advantages of Poe and why it stands out amidst other options in the field. He offers practical advice for leveraging generative AI tools and emphasizes the importance of thoroughly understanding their capabilities before integration. Tune in to gain a comprehensive understanding of trusting, utilizing, and verifying generative AI, and discover valuable resources for effectively incorporating AI in the higher education landscape. Embrace the potential of AI as a powerful ally, but with a discerning eye. Don't miss the chance to expand your knowledge and make informed decisions in the ever-evolving world of generative AI.

07 Feb 2024


17:44



#1

AI Goes to College Trailer

In this episode, Craig provides an insightful overview of what to anticipate from the AI Goes to College podcast. He talks about his inspiration for launching the podcast and emphasizes how it can help higher education faculty and staff navigate the transformative impact of generative AI. Tune in to gain a clear understanding of the podcast's purpose and how it can support you in staying abreast of developments within the higher education landscape. Craig also tells you how you can get his new e-book, Getting Started with Generative AI: A Guide for Higher Ed Professionals. (It's free!) For more information, or to sign up for the AI Goes to College newsletter, go to https://www.aigoestocollege.com/

29 Jan 2024


05:42
