Thank you so much for coming along to our AI for Charities event at Microsoft. We had so many questions for the Q&A that we couldn't cover them all in time, so here are the ones we didn't manage to answer in the room, along with our responses.
In true "put your money where your mouth is" style, we've used Microsoft Copilot to help with these responses.
If you have further questions that you didn't manage to ask, or that have sprung to mind since, please drop us a line!
Google's Bard and Gemini are related, but they are not the same thing. Bard is Google's conversational AI chatbot, similar to Copilot. It was released as part of Google's effort to compete with AI tools like ChatGPT.
Gemini, on the other hand, is Google's next-generation AI model, which builds upon Bard's foundation but is designed to be more advanced. Gemini was developed with the aim of better reasoning, better understanding, and more accurate responses. Think of Gemini as the engine behind Google's future AI tools, including upgrades to Bard.
So, while Bard is the chatbot product, Gemini is the underlying technology that powers or enhances the next versions of AI models at Google. (Note that in early 2024 Google folded Bard into the Gemini brand, so the chatbot itself is now also called Gemini.)
Supporting staff to avoid "AI hallucinations" — instances where an AI model confidently produces incorrect or nonsensical information — involves a combination of training, process refinement, and careful oversight. Here are some key strategies to help mitigate these risks:
1. Understand AI's Limitations
• Training and Awareness: Staff should be made aware that AI models, like Bard, GPT, or any other, can generate confident-sounding but inaccurate responses. This awareness encourages critical thinking and double-checking.
• Recognize Hallucination Triggers: Teach staff about common situations where AI is more likely to "hallucinate," such as handling niche, ambiguous, or highly complex queries.
2. Verification and Validation Processes
• Human Oversight: Ensure AI-generated content, especially for critical or complex information, is always reviewed and verified by a knowledgeable human before it is used or published.
• Cross-checking with Reliable Sources: Encourage staff to fact-check AI outputs using trusted external sources or databases, especially when dealing with factual information.
• Use of Multiple AI Systems: Cross-reference results from different AI systems or models to identify discrepancies, which can indicate possible hallucinations.
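To make the cross-referencing idea concrete, here is a minimal sketch of comparing two answers with a simple word-overlap score and flagging low agreement for human review. The answers and the 0.6 threshold are made-up illustrations, not output from any real AI system; in practice you would paste in the actual responses from two tools.

```python
def agreement_score(answer_a: str, answer_b: str) -> float:
    """Return the fraction of shared words between two answers (0.0 to 1.0)."""
    words_a = set(answer_a.lower().split())
    words_b = set(answer_b.lower().split())
    if not words_a or not words_b:
        return 0.0
    return len(words_a & words_b) / len(words_a | words_b)

# Hypothetical answers from two different AI tools to the same question.
answer_from_tool_1 = "The charity was founded in 1998 in Manchester"
answer_from_tool_2 = "The charity was founded in 2003 in Leeds"

score = agreement_score(answer_from_tool_1, answer_from_tool_2)
if score < 0.6:  # illustrative threshold, tune for your own use
    print("Low agreement between tools - send this answer to a human reviewer")
```

A word-overlap check like this is deliberately crude: it won't catch every hallucination, but a disagreement between tools is a cheap, automatic signal that a human should take a closer look.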
3. Clear Role of AI in Workflow
• Define AI's Scope: Clearly define where AI should be used in a workflow. AI can assist with brainstorming, summarization, or initial drafts but should not be relied upon for final judgments in critical areas such as medical, legal, or high-stakes decisions.
• Guide Prompts Thoughtfully: Encourage staff to provide clear, well-structured prompts when using AI. Vague or complex queries can increase the likelihood of hallucinations.
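To make "well-structured prompt" concrete, here is a small sketch (our own illustration, not a feature of any particular AI tool) that assembles a prompt from explicit role, task, context, and format parts. Spelling out these four elements tends to get more reliable answers than a vague one-liner:

```python
def build_prompt(role: str, task: str, context: str, output_format: str) -> str:
    """Assemble a structured prompt from explicit parts."""
    return (
        f"You are {role}.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Respond as: {output_format}"
    )

prompt = build_prompt(
    role="a fundraising officer at a small UK charity",
    task="Draft a 100-word thank-you email to a first-time donor",
    context="The donor gave 25 pounds to our winter shelter appeal",
    output_format="a warm, plain-English email with a subject line",
)
print(prompt)
```

You don't need code to apply this, of course: simply typing the same four labelled lines into Copilot or ChatGPT gives the model the role, task, context, and desired format it needs to stay on track.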
4. Bias and Risk Mitigation Training
• Recognize and Mitigate Biases: Train staff to recognize that AI models can sometimes reflect biases in their training data, which may lead to hallucinations or skewed results.
• Promote Responsible Use: Implement guidelines that promote the responsible use of AI, especially in sensitive contexts, to reduce risks associated with inaccurate AI outputs.
5. Automated Guardrails and Tools
• Implement AI Fact-checking Tools: Use automated tools that analyse AI outputs for potential errors, particularly for factual claims. Google and other providers often have built-in features or third-party tools to highlight problematic areas in AI-generated content.
• Highlight Uncertainty: Encourage the use of features that flag uncertain AI responses, allowing staff to focus on validating information that AI might not be confident about.
By combining these approaches, staff can minimize the chances of falling victim to AI hallucinations while still benefiting from the efficiencies and insights AI can provide.
Here are some excellent resources tailored for beginners in charitable or non-profit organizations who want to learn how to use AI tools like Copilot, ChatGPT, and other LLM tools:
1. LinkedIn Learning Courses for Nonprofits: LinkedIn offers several free courses specifically designed for nonprofit professionals. These courses cover how to use generative AI for creating content, personalized outreach, and data analysis.
2. Microsoft Digital Skills Centre for Nonprofits: This platform provides instructor-led training on AI and generative AI, focusing on their implications for nonprofits. It includes hands-on experience with Microsoft AI tools.
3. AI Skills for Nonprofits Collection: Available on Microsoft Learn, this comprehensive learning path enhances the digital capabilities of nonprofit organizations. It covers the basics of AI and how to apply these skills in a nonprofit setting.
4. TechSoup’s AI Resources: TechSoup offers various resources and training sessions to help nonprofits understand and implement AI tools effectively. They focus on ethical and responsible use of AI.
5. DonorSearch’s Guide to AI Tools for Nonprofits: This guide explores the top AI tools available for nonprofits and provides tips for successfully implementing these tools into your workflow.
These resources should help your team get started with AI tools in a way that’s accessible and relevant to your organization’s needs.
AI tools like Copilot and ChatGPT do not have direct access to academic articles that are behind paywalls. If you need peer-reviewed material, you will still have to obtain it through the publisher, a library service, or an open-access repository; you can then ask the AI to help summarize or explain the content you legitimately have access to.
Microsoft Copilot respects SharePoint permissions and adheres to the same security and compliance standards as other Microsoft 365 services. Here’s how it works:
1. Permission-Based Access: Copilot only accesses content that the user has permission to view. If a document or file is restricted, Copilot will not be able to access or retrieve information from it.
2. Role-Based Access Control (RBAC): SharePoint uses RBAC to manage permissions. Copilot respects these roles and will only interact with content based on the user’s assigned permissions.
3. Compliance and Security: Copilot is built on the Microsoft 365 compliance framework, ensuring that all data interactions are secure and compliant with organizational policies.
4. Data Privacy: Microsoft states that Copilot prompts, responses, and data accessed through Microsoft Graph are not used to train the underlying foundation models, and that this data is processed within your organization’s Microsoft 365 service boundary.
Copilot will not look at other people’s work unless you have the appropriate permissions to access that content. This ensures that your organization’s data remains secure and private.
Building the capacity of young people in Tanzania to adapt AI for social impact involves a combination of education, practical training, and community engagement. Here are some strategies and resources that can help:
1. Local AI Communities and Labs:
o Tanzania AI Lab & Community: This initiative aims to empower individuals in Tanzania to harness AI for personal and national growth. They offer resources, networks, and events focused on responsible AI and social impact.
2. Digital Skills Training Programs:
o Camara Education Tanzania: Their Youth Digital Skills program provides underprivileged youth with valuable digital skills, including web development, digital marketing, and graphics design. This program prepares participants for internships and career opportunities in the tech industry.
3. Partnerships with International Organizations:
o UNDP Initiatives: The United Nations Development Programme (UNDP) supports youth businesses in Tanzania through training, access to finance, and resources targeted at specific industries like digital technology.
4. Online Learning Platforms:
o Coursera and edX: These platforms offer free and low-cost courses on AI and related technologies. Courses like “AI For Everyone” by Andrew Ng on Coursera can provide a solid foundation.
o Microsoft Learn: Offers free learning paths and modules on AI, which can be particularly useful for beginners.
5. Hackathons and Competitions:
o Organize or participate in AI hackathons and competitions. These events provide hands-on experience and foster innovation among young people.
6. Mentorship and Networking:
o Establish mentorship programs where experienced professionals guide young learners. Networking events can also help connect youth with industry experts and potential employers.
7. Government and NGO Support:
o Advocate for government and NGO support to fund and promote AI education and training programs. Policies that encourage tech education and innovation can create a conducive environment for learning.
By leveraging these resources and strategies, you can help young people in Tanzania develop the skills and knowledge needed to use AI for social impact effectively.
Microsoft’s policy for nonprofit licenses is quite clear: organizations cannot share nonprofit licenses or subscriptions with, or transfer, rent, or resell them to, other organizations or individuals, even if they are part of the same group. This means that if your charity partners are registered as separate companies, they would need to apply for their own nonprofit licenses directly through Microsoft.
Each nonprofit organization must individually meet the eligibility requirements and go through the validation process to receive the benefits of Microsoft’s nonprofit programs. This ensures that the licenses are used appropriately and in accordance with Microsoft’s guidelines.
If your charity partners are interested in applying for these licenses, they can start by visiting the Microsoft Nonprofits page to learn more about the eligibility criteria and application process.
It’s understandable to be concerned about cybersecurity, especially when using free versions of tools. However, using free versions of tools like Copilot doesn’t inherently make your information more vulnerable. Here are some key points to consider:
1. Security Measures: Microsoft, for example, implements robust security measures across all its products, including free versions. This includes encryption, regular security updates, and compliance with industry standards.
2. User Practices: The security of your data also heavily depends on user practices. Ensuring that your team follows best practices, such as using strong passwords, enabling multi-factor authentication, and regularly updating software, can significantly reduce the risk of cyber attacks.
3. Awareness and Training: Educating your team about phishing attacks, social engineering, and other common cyber threats is crucial. Regular training sessions can help your team recognize and respond to potential threats.
4. Data Sensitivity: Be mindful of the type of data you store and share using these tools. For highly sensitive information, consider using additional security measures or more secure platforms.
5. Backup and Recovery: Regularly back up your data and have a recovery plan in place. This ensures that you can quickly restore your information in case of a cyber incident.
While the risk of cyber attacks is always present, taking proactive steps to secure your data and educate your team can help mitigate these risks.
Company Registration Number: 06543233
Registered address:
1-Fix Limited
1F02 Arena Business Centre, 100 Berkshire Place, Winnersh Triangle, Wokingham, Berkshire RG41 5RD