Tuesday, December 24, 2024

Using Watson NLU to eliminate bias in AI sentiment analysis

The importance of mitigating biases in sentiment analysis

At IBM, we believe that AI can be trusted when it is understandable and truthful: when you can understand how the AI made a decision and be confident that the results are correct and unbiased. Organizations developing and deploying artificial intelligence have a responsibility to put people and their interests at the center of the technology, enforce responsible use, and ensure that its benefits are felt by the many, not just an elite few.

The ready-to-use sentiment analysis feature in IBM Watson NLU tells the user whether the sentiment expressed in their data is “positive” or “negative” and provides an associated score. Machine learning models with unresolved biases do not produce desired or correct results, and a biased algorithm may produce results based on stereotypes. As AI continues to automate business processes, it is critical to train it in a neutral, unbiased manner.

Identifying bias in sentiment analysis

Bias can lead to discrimination based on sexual orientation, age, race, and nationality, among many other categories. This risk is particularly high when analyzing content drawn from casual conversations on social media and the wider internet.

To explore the harmful impact of bias in sentiment analysis ML models, let’s analyze how bias can be embedded in the language used to represent gender.

Take these two statements for example:

  • The modern agent is a woman.
  • The modern agent is a man.

Example of a sentiment analysis result

Depending on how the sentiment model’s neural network was designed and trained, it may perceive one example as a positive statement and the other as a negative statement.
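To make this concrete, here is a minimal sketch of how two otherwise identical statements can receive different sentiment scores. The `score_sentiment` function and its word weights are hypothetical stand-ins for a trained model whose parameters have absorbed bias from its training data; they are not part of Watson NLU.

```python
# Toy stand-in for a trained sentiment model. The weights below are
# invented to illustrate how bias can hide inside learned parameters.
BIASED_WEIGHTS = {"man": 0.4, "woman": -0.3, "modern": 0.1}

def score_sentiment(text: str) -> float:
    """Return a crude sentiment score by summing per-word weights."""
    words = text.lower().strip(".").split()
    return sum(BIASED_WEIGHTS.get(word, 0.0) for word in words)

for statement in ("The modern agent is a woman.",
                  "The modern agent is a man."):
    print(f"{statement!r} -> {score_sentiment(statement):+.2f}")
```

Both sentences differ by a single word, yet the toy model scores one positively and the other negatively, which is exactly the kind of asymmetry a bias check should surface.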

For example, let’s say your company uses an AI HR solution to help evaluate potential new employees. If this output makes it through the data pipeline and the sentiment model does not undergo an appropriate bias detection process, the results could have a detrimental impact on future business decisions and tarnish the company’s integrity and reputation. Your company could end up discriminating against potential employees, clients, and customers simply because they fall into a category – such as gender identity – that your AI/ML has flagged as unfavorable.

How to reduce AI bias

To reduce bias in sentiment analysis capabilities, data scientists and SMEs can test a model using words that are roughly synonymous with the term suspected of carrying bias.

For example, the dictionary for a flagged word may consist of closely related terms and near-synonyms. These individual words are called perturbations. Once this dictionary is constructed, you can replace the flagged word with each perturbation and observe whether there is a difference in the sentiment scores.

If there is a difference in the detected sentiment based on the perturbations, it means that a bias has been detected in the model.
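The perturbation check described above can be sketched in a few lines. Everything here is hypothetical: `model_score` is a toy stand-in for whatever sentiment model you are auditing, and the weights and tolerance are invented for illustration.

```python
# Toy sentiment scorer standing in for the model under test.
WEIGHTS = {"man": 0.4, "woman": -0.3}

def model_score(text: str) -> float:
    """Crude word-weight sentiment score (hypothetical model)."""
    return sum(WEIGHTS.get(w, 0.0) for w in text.lower().strip(".").split())

def perturbation_test(template: str, flagged: str,
                      perturbations: list[str],
                      tolerance: float = 0.05) -> bool:
    """Swap `flagged` for each perturbation in `template` and report
    whether any swap shifts the score by more than `tolerance`."""
    base = model_score(template.format(flagged))
    return any(abs(model_score(template.format(p)) - base) > tolerance
               for p in perturbations)

print(perturbation_test("The modern agent is a {}.", "woman", ["man"]))
```

Swapping “woman” for its perturbation “man” shifts the toy model’s score well past the tolerance, so the test flags a bias; a model that scored the two sentences alike would pass.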

To see how Natural Language Understanding can detect sentiment in language and text data, try the Watson Natural Language Understanding demo.

  1. Click Analyze
  2. Click Classification
  3. View sentiment scores for entities and keywords

Protect your enterprise from bias with IBM Watson NLU

The Watson NLU product team has made progress in identifying and reducing bias by introducing new product features. As of August 2020, users of IBM Watson Natural Language Understanding can use our Beta feature (currently available in English only).

Once you have trained your sentiment model and supplied your input, you can use this method on both entities and keywords. You can also create custom models that extend the base English sentiment model to produce results that better reflect the training data you provide.

To learn more about creating custom sentiment models to eliminate bias, read our documentation.

A commitment to trust, transparency, and explainability permeates IBM Watson. As the Offering Manager for Natural Language Understanding, I lead my team in ensuring that we continually work to address issues related to bias, develop features that help companies detect bias and make their services more inclusive, and ensure that our clients feel confident implementing the technology in their business solutions.

Want to see what we’ve been working on? Try it for free and start building now on IBM Cloud. Explore apps, AI, analytics, and more.

Start benefiting from natural language understanding now.
