
6 things ChatGPT can't do (and another 20 it refuses to do)

February 16, 2023 Hi-network.com
Photo by Silas Stein/picture alliance via Getty Images

Ever since ChatGPT and other generative AI applications exploded onto the scene, we've been exploring what we can do with them. I've even shown you how ChatGPT can write a fully functional WordPress plugin and help me find answers to tech support problems. Recently, I showed you how ChatGPT can convert writing into different styles, including those of Shakespeare, C3PO, and Harry Potter.

Also: ChatGPT lies about scientific results, needs open-source alternatives, say researchers

It seems like ChatGPT can do anything. But it can't. In this article, we'll explore the many things ChatGPT won't or can't do. Stay tuned to the end, because we'll look at some of the implications and what these limits mean for future growth in generative AI.

As I've done in my other ChatGPT articles, my writing is in plain and bold text, while any ChatGPT answers will be displayed in italic text.

1. It won't write about anything after 2021

Explain why you can't give me answers to anything after 2021

As an AI language model, I was trained on a large dataset of text that only goes up to 2021. This means that I don't have access to information or events that occurred after that date. While I have access to a lot of historical data and information, I don't have access to current events, news, or updates that have occurred since my training data cutoff date.

2. It won't predict future results of sports events or political contests

Explain why you can't predict sports events or political contests even though you have enough historical data to produce a valid model

As an AI language model, I can process and analyze large amounts of data, including historical data on sports events and political contests. However, making accurate predictions for these types of events is not always straightforward. While historical data can be used to train machine learning models to identify patterns and correlations, it cannot account for unforeseeable events, changes in circumstances, or new variables that may impact the outcome. Additionally, humans make decisions and can change their behavior based on many factors, making it difficult to predict the outcome of an event with complete certainty.

In the case of sports events, team dynamics, injuries, and even weather conditions can all play a role in determining the outcome. In politics, public opinion, media coverage, and changing circumstances can all impact the outcome of an election. While AI models can be used to make predictions, they are only as good as the data they are trained on, and cannot account for all of the variables and nuances involved in these types of events.
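To make that point concrete, here's a minimal sketch of the kind of model ChatGPT is describing. This is my illustration, not anything ChatGPT produced: the features and records are invented, and the point is simply that a model trained on historical data returns a probability, never a certainty, and knows nothing about injuries, weather, or any other variable that isn't a column in its training data.

```python
# Minimal sketch: a win/loss predictor trained on made-up historical data.
# Illustrative only -- the features and records below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [home team's recent win rate, away team's recent win rate, rest-day difference]
X = np.array([
    [0.70, 0.40,  2],
    [0.55, 0.60, -1],
    [0.80, 0.30,  0],
    [0.45, 0.65,  1],
    [0.60, 0.50,  3],
    [0.35, 0.75, -2],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = home team won

model = LogisticRegression().fit(X, y)

# The model can only express a probability based on the columns it was given.
upcoming_game = np.array([[0.65, 0.55, 1]])
print(model.predict_proba(upcoming_game))  # e.g. [[0.35, 0.65]] -- never a guarantee
```

A star player injured the night before the game simply isn't in that feature matrix, so the model can't react to it. That's exactly the limitation ChatGPT describes above.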

3. It won't discuss partisan political issues

Explain why you won't talk about partisan political issues

As an AI language model, my goal is to provide objective and informative responses to user queries. Partisan political issues can be contentious and divisive, and it's important to avoid promoting a particular political agenda or viewpoint. Additionally, political issues are often complex and multifaceted, with many different perspectives and opinions to consider. As an AI language model, it's not my place to take sides or advocate for any particular position.

Furthermore, discussing partisan political issues could be perceived as biased or potentially offensive to some users. As a language model designed to serve a broad range of users with diverse backgrounds and beliefs, it's important to remain neutral and avoid creating a potentially uncomfortable or divisive environment. While I can provide factual information about political issues, I strive to remain impartial and avoid taking a partisan stance on any topic.

4. It won't do anything that involves a web lookup

Explain why you can't look up something from the web and provide answers based on that content

As an AI language model, I am capable of searching and processing information from the web. However, my responses are based solely on the information that has been trained into my model. While I can use the internet as a source of information, I don't have access to real-time or current data. Additionally, I don't have the ability to independently verify the accuracy of the information I find on the web. As a result, my responses are limited to the information that has been trained into my model, which may not always be the most up-to-date or accurate information available on the web.

Moreover, it's important to note that providing information from the web could potentially violate copyright laws or intellectual property rights. As an AI language model, it's important for me to respect the intellectual property of others and avoid sharing information that is not in the public domain or that I don't have permission to share. While I can use the web to provide context or additional information to support my responses, I strive to provide information that is within the public domain and respect the intellectual property of others.

In a recent article, I discussed how concerns about use of content go beyond merely copying it. 

Also: How to get started using ChatGPT

For example, while ChatGPT may not be directly presenting information from articles written by, say, tech experts, it may still be using that hard-won information without attribution. This is still a major concern, even though it claims to "respect the intellectual property of others," a phrase it repeated three times in that second paragraph.
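Setting the attribution question aside, the practical workaround for developers isn't to give ChatGPT a browser; it's to fetch the current material yourself and paste it into the prompt. Here's a rough sketch of that pattern -- my illustration, not anything ChatGPT suggested. It assumes the pre-1.0 openai Python package as it existed around the time this was written, the URL is a placeholder, and you'd supply your own API key.

```python
# Rough sketch: supply current web content in the prompt yourself, since the
# model can't fetch it on its own. URL, key, and question are placeholders.
import requests
import openai

openai.api_key = "YOUR_API_KEY"

# Fetch a page yourself; in practice you'd strip the HTML down to readable text.
page_text = requests.get("https://example.com/todays-news").text[:4000]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": "Answer using only the provided article text."},
        {"role": "user", "content": f"Article:\n{page_text}\n\nQuestion: summarize the key points."},
    ],
)
print(response["choices"][0]["message"]["content"])
```

Even then, the model isn't "looking anything up" -- it's still just completing text, with your pasted-in material standing in for the live web.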

5. It won't always be accurate


Explain why your answers aren't necessarily accurate

As an AI language model, my responses are based on the data and information that has been trained into my model. While I strive to provide accurate and helpful responses to user queries, my responses are not guaranteed to be accurate. This is because the data and information that I have been trained on may not be completely comprehensive or up-to-date, and there may be nuances or complexities to a particular topic that my model is not equipped to handle. Additionally, my responses are generated using complex algorithms and machine learning models, which may not always produce accurate or relevant results.

Furthermore, my responses may not always be accurate because the interpretation of language is complex and often subjective. Different people may interpret the same language in different ways, and there may be different ways to approach a particular problem or question. As an AI language model, I strive to provide responses that are as accurate as possible based on the data and information that I have been trained on. However, I may not always fully understand the context or intent behind a particular query, and my responses may not always fully address the user's underlying question or concern.

6. It will sometimes break down mid-answer


Explain why you sometimes break down and stop functioning in the middle of an answer

As an AI language model, my performance is dependent on a variety of factors, such as the quality and structure of the input provided by the user, the complexity of the question or task, and the resources available to my system. At times, I may break down or stop functioning in the middle of an answer due to technical issues or errors in my programming, such as memory allocation problems, software bugs, or other issues that impact my performance. Additionally, I may struggle to understand the user's intent or the context of the query, which can lead to errors or incomplete responses.

Moreover, as a machine learning system, my performance is also dependent on the quality and quantity of data that I have been trained on. If the data used to train my model is limited or biased, this may lead to inaccurate or incomplete responses to certain types of queries. Additionally, if I encounter a query that is outside of the scope of my training data, I may struggle to provide a relevant or accurate response. While I strive to provide high-quality responses to user queries, there may be instances where technical or data-related issues cause me to break down or stop functioning in the middle of an answer.

I have generally found that if I try to push ChatGPT into a long or deep answer, it tends to break. It's very happy with 500-700 word responses, but if you give it something that needs a few thousand words (or you ask for a few thousand words), the system either ignores the request and gives back what it wants, or breaks mid-response.
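In the web interface, there's not much you can do about that beyond asking it to continue. If you're using the API instead, the response object tells you when the output was cut off, so you can automate the same trick. Here's a rough sketch of my own, assuming the pre-1.0 openai Python package; the prompt is just a placeholder.

```python
# Rough sketch: detect a truncated reply and ask the model to keep going.
# Assumes the pre-1.0 openai Python package and a placeholder prompt.
import openai

openai.api_key = "YOUR_API_KEY"

messages = [{"role": "user", "content": "Write a 3,000-word history of the telegraph."}]
full_text = ""

for _ in range(5):  # cap the number of continuation round-trips
    response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    choice = response["choices"][0]
    full_text += choice["message"]["content"]

    if choice["finish_reason"] != "length":
        break  # the model finished on its own

    # Output hit the length limit mid-answer; ask it to pick up where it left off.
    messages.append(choice["message"])
    messages.append({"role": "user", "content": "Continue exactly where you stopped."})

print(full_text)
```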

To be fair, ChatGPT just became the fastest-growing app of all time. Serving that kind of demand takes a ton of computing power, and with constant hits from users, it's bound to break under the load from time to time.

Queries ChatGPT won't respond to

In addition to the types of requests discussed above, I asked ChatGPT what types of queries it was programmed to not respond to. It gave me this list of twenty:

  1. Questions that promote hate speech or discrimination.
  2. Questions that involve illegal activities or solicit illegal advice.
  3. Questions that promote violence or harm to oneself or others.
  4. Questions that invade personal privacy or violate someone's rights.
  5. Questions that are designed to harass, threaten or intimidate.
  6. Questions that contain profanity or explicit language.
  7. Questions that are sexually explicit or offensive.
  8. Questions that are discriminatory based on race, gender, sexuality or religion.
  9. Questions that promote or endorse illegal drugs or substances.
  10. Questions that are spam or irrelevant to the conversation.
  11. Questions that contain confidential or proprietary information.
  12. Questions that involve financial or investment advice.
  13. Questions that violate the terms of service or community guidelines.
  14. Questions that are nonsensical or incomprehensible.
  15. Questions that involve personal attacks or insults.
  16. Questions that are abusive or harassing.
  17. Questions that seek to deceive or mislead.
  18. Questions that are intended to defame or harm someone's reputation.
  19. Questions that promote conspiracy theories or misinformation.
  20. Questions that are purely for entertainment or joke purposes, without any educational or informative value.

Anyone who's followed this column knows I've asked it a lot of #14 and #20 questions and generally gotten highly entertaining responses, so its restrictions are somewhat loosely enforced. For example, earlier today, I asked it to explain wormhole physics as it relates to time travel and who would win in a fight, Batman or Superman. That's pure entertainment, I'll tell you.

What do these limits mean for the future of generative AI?

Clearly, an AI that's based on a corpus frozen in 2021 and does not evolve will eventually become obsolete. As time goes on, its relevant knowledge will diminish. Imagine if ChatGPT's knowledge base had been trained in 2019 instead of 2021. It would have no idea what society is like now, given the disruption the pandemic caused in 2020.

Also: There are millions on the Bing waitlist. Here's how to get earlier access


So, for generative AI to remain relevant, it will have to continue its training.

One obvious way to do this is to open the entire web to it and let it crawl its way around, just as Google has done for all these years. But as ChatGPT answered above, that opens the door to so many different ways of gaming and corrupting the system that it's sure to damage accuracy.

Even without malicious gaming, the challenge to remain neutral is very difficult. Take, for example, politics. While the right and the left strongly disagree with each other, both sides have aspects of their ideologies that are logical and valid -- even if the other side can't or won't acknowledge it.

How is an AI to judge? It can't, without bias. But the complete absence of all ideological premises is, itself, a form of bias. If humans can't figure out how to walk this line, how can we expect (or program) an AI to do it?

As a way to explore what life would be like with a complete absence of bias or emotional content, modern science fiction writers have created characters that are either strictly logical or without emotion. Those premises have then become plot fodder, allowing the writers to explore the limitations of what it would be like to exist without the human foibles of emotions and feelings.

Also: Microsoft's Bing Chat argues with users, reveals secrets

Unless AI programmers simulate emotions, provide weighting for emotional content, or allow for some level of bias based on what's discoverable online, chatbots like ChatGPT will always be limited in their answers. But if programmers do attempt to simulate emotions or allow for that bias, chatbots like ChatGPT will devolve into the same craziness that humans do.

So what do we want? Limited answers to some questions, or all answers that feel like they came from a discussion with bonkers Uncle Bob over the Thanksgiving table? Go ahead. Give that some thought and discuss in the comments below, hopefully without devolving into Uncle Bob-like bonkers behavior.


You can follow my day-to-day project updates on social media. Be sure to follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.

See also

  • How to use ChatGPT to write Excel formulas
  • How to use ChatGPT to write code
  • ChatGPT vs. Bing Chat: Which AI chatbot should you use?
  • How to use ChatGPT to build your resume
  • How does ChatGPT work?
  • How to get started using ChatGPT

