Generative AI and ChatGPT | Institute of Analytics (IoA)
In this post, I’ll provide a primer on ChatGPT, large language models, and generative AI, and discuss how these revolutionary technologies are positively impacting the contact centre. The news cycle has moved so fast that it has been hard to keep up with all the terminology. Depending on which article you read, you might see the terms ChatGPT, GPT, GPT-3, GPT-4, large language models (LLMs), or generative AI used interchangeably. Legal cases against OpenAI (the maker of ChatGPT) are also frequently emerging; one class-action suit, for example, accuses the company of using books as training data.
It is therefore no surprise that organisations are looking to utilise generative AI within their businesses, e.g., for chatbots or research tools. A GPT-based LLM can serve as a powerful AI assistant for agents while they engage with a customer. The agent can be 100 percent focused on the needs of the customer, while the GPT-powered assistant automatically retrieves the right information from the knowledge base and provides scripts to improve the outcome.
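As a rough illustration of the retrieval step behind such an assistant, the sketch below matches a customer message against a tiny in-memory knowledge base by keyword overlap. The article keys and function names are invented for this example; a real agent-assist deployment would use embeddings and an LLM rather than word matching, but the shape of the pipeline is similar.

```python
# Minimal sketch of the "agent assist" retrieval step: given a customer
# utterance, pick the most relevant knowledge-base article by counting
# shared words. All keys and names here are illustrative.

KNOWLEDGE_BASE = {
    "refund-policy": "Refunds are issued within 14 days of purchase.",
    "password-reset": "Reset your password from the account settings page.",
    "delivery-times": "Standard delivery takes 3-5 working days.",
}

def retrieve_article(customer_message: str) -> str:
    """Return the key of the KB article sharing the most words with the message."""
    words = set(customer_message.lower().split())

    def overlap(item):
        _key, text = item
        return len(words & set(text.lower().split()))

    best_key, _ = max(KNOWLEDGE_BASE.items(), key=overlap)
    return best_key
```

In practice the retrieved article would then be passed to the LLM as context, so the suggested script is grounded in the organisation’s own documentation rather than the model’s training data.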
Organisations will need to consider the level of disclosure they are required to make regarding their use of generative AI, both internally to personnel and more publicly, depending on the AI use cases. A number of existing laws and regulatory requirements, as well as laws on the horizon, will require disclosure of certain types of AI use. Increasingly, customers and staff also expect a degree of transparency about AI use that may affect them.

Contracts for AI procurement, development or investment form part of the wider governance framework mitigating AI risk. Contracts for the procurement or use of a generative AI system require careful review to understand and, as far as possible, negotiate appropriate terms to address AI-specific risks in the allocation of rights, responsibilities and liability. Such contracts can look very different from a standard contract for a traditional piece of software.
- There is a clear lack of imagination in the development of ChatGPT use cases.
- What makes the AI-run bot compelling is its ability to mimic humans closely.
- There are loads of things we (in HE and education more broadly) need to think about and do when it comes to generative AI, both cognitively and practically.
Although generative AI is expected to help tackle a wide range of scientific and social problems, it has also come under fire for a range of ethical issues. The most engaging content tends to be the kind that provokes an emotional response or establishes a human connection. Readers are interested in the views, ideas and opinions of people with experience; they respond to stories and anecdotes, and to new takes on topics they care about. While ChatGPT is trained on a large amount of data, it doesn’t have in-depth knowledge of every topic. And while OpenAI updates its datasets regularly, the data GPT-3 was trained on is only accurate up to September 2021, so its responses relating to events after this date may be factually inaccurate.
Grade-free learning is practised around the world to encourage intellectual risk taking, most frequently in the format of covered transcripts for first-year students. Evergreen State College in the United States is an accredited four-year college with no grading system in place. A massive movement of “ungrading” occupies the chat rooms and libraries of pedagogically engaged university teachers around the world. Essentially, ungrading means minimising the use of points and weights on assignments, and instead providing feedback and focusing on student growth and learning. It takes a lot of time per student, time many institutions of higher education do not have, but that is part of the structural change that generative AI forces.
While AI has the potential to bring many benefits, it is important to also consider the ethical and social implications of its use. As AI continues to evolve, it will be important to ensure that it is used in ways that benefit society as a whole and that appropriate safeguards are in place to protect privacy and prevent discrimination. You may be wondering if ChatGPT will absorb your prompt and then offer that information back to other users who enquire about similar topics. The model continues to be trained by rating and monitoring its responses, but it is not adding information from user prompts to its training data.
A prolific businessman and investor, and the founder of several large companies in Israel, the USA and the UAE, Yakov’s corporation comprises over 2,000 employees all over the world. He graduated from the University of Oxford in the UK and Technion in Israel, before moving on to study complex systems science at NECSI in the USA. Yakov has a Masters in Software Development.
Voice generation models take a small sample of recorded voice conversation and create a simulated voice that can be used by software systems programmatically. These generative AI models don’t necessarily use LLMs, but some incorporate them in an effort to understand the meaning of a prompt. LLMs themselves are often referred to as text completion models: given all of the data the model has “read” previously, it can complete the next logical sentence, paragraph, or essay with a human-like quality.
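To make “text completion” concrete, here is a toy next-word predictor: a bigram model that, having “read” a small corpus, repeatedly appends the most frequent following word. Real LLMs operate over tokens with billions of parameters, but the underlying objective of predicting what comes next is the same in spirit. The corpus and function names are invented for this sketch.

```python
# Toy illustration of "text completion": count which word follows which,
# then complete a prompt by greedily picking the most frequent next word.

from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Map each word to a Counter of the words that follow it."""
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for a, b in zip(tokens, tokens[1:]):
        counts[a][b] += 1
    return counts

def complete(counts: dict, prompt: str, n: int = 3) -> str:
    """Extend the prompt by n words, always choosing the most common successor."""
    tokens = prompt.lower().split()
    for _ in range(n):
        successors = counts.get(tokens[-1])
        if not successors:
            break
        tokens.append(successors.most_common(1)[0][0])
    return " ".join(tokens)
```

For example, a model trained on “the cat sat on the mat the cat sat on the rug” completes the prompt “the” as “the cat sat on”, because those continuations are the most frequent in what it has seen.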
My background is in law, communications, marketing and government relations. In 2018 I realised that AI was going to change our world dramatically, so I completely changed my career by taking a year to study for an MSc in AI and Data Science. I learned to code and was inspired by the technical opportunities AI offers, as well as the ethical challenges. I also realised that not everyone can take a year out to study intensively, so I established AI Governance to inspire leaders and organisations to use AI with wisdom and integrity. Whether you are a leader today, or hope to become a leader tomorrow, it’s essential that you are able to use AI tools. In this course you’ll get hands-on experience, so leaders and managers who aren’t IT experts can learn about AI, particularly new generative AI tools like ChatGPT.
ChatGPT is so advanced that it can purportedly admit its mistakes, challenge incorrect premises and reject inappropriate requests. The launch of ChatGPT by OpenAI last November has firmly put generative artificial intelligence (AI) on the map. The head of Google, Sundar Pichai, recently commented that AI is more profound than fire or electricity or anything humans have done in the past. To use ChatGPT, all you need is a computer with internet access and an account.
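Beyond the web interface, developers reach the same models through an API. The sketch below builds the JSON request body in the shape of OpenAI’s chat completions format, with a system message framing the assistant’s role and a user message carrying the prompt. The model name and default system message are assumptions for illustration; check OpenAI’s current documentation before relying on either.

```python
# Build (but do not send) a chat-completion request body in the style of
# OpenAI's chat API: a model name plus a list of role-tagged messages.
# Model name and system prompt are illustrative assumptions.

import json

def build_chat_request(user_message: str, model: str = "gpt-3.5-turbo") -> str:
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_message},
        ],
    }
    return json.dumps(payload)
```

Sending this body to the API (with an API key in the request headers) returns the model’s reply; keeping request construction separate from transport, as here, also makes the code easy to test offline.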
Third, exceptionally, LGAIM developers should be subject to non-discrimination rules, including a version of Article 10 of the proposed AI Act. Finally, the content moderation rules of the DSA should be selectively extended to cover LGAIMs, including notification and action mechanisms, trusted flaggers, and comprehensive audits. Arguably, content moderation should take place at the AI generation stage, rather than ex post, when the effects of AI-generated hate speech and fake news may be difficult to stop. Updating regulations in this way will hopefully help maintain online civility and create a level playing field for the development and deployment of future AI models in the EU and beyond.
Some regulators may require designated senior management responsibility for oversight of AI technology in the context of wider senior management responsibility regimes. Appropriate governance is central to responsible AI use and procurement, and is an area of focus for lawmakers and regulators globally. Data must be processed in compliance with any ownership rights, legal requirements, contractual terms and company policies. Some of the key areas for legal risk management – privacy, intellectual property (IP) infringement, and other legal and commercial restrictions on data use – are discussed below.