South Korean tech company Samsung has asked all of its staff to temporarily stop using generative AI tools such as ChatGPT for company matters.
The move likely follows incidents in April this year in which sensitive Samsung information ended up in OpenAI's hands.
In one reported case, an employee used ChatGPT to convert internal meeting notes into a presentation that was not meant for external parties.
This wasn’t the only instance.
In a bid to maximize efficiency, another employee prompted ChatGPT to optimize test sequences for identifying faults in chips. That information, too, was confidential.
The employees may have had the best intentions of making their work more efficient, but they shared what Samsung deems confidential information, which apparently prompted the company's U-turn on letting staff feed ChatGPT sensitive material.
That confidential information is now stored on OpenAI's servers.
The tech giant has now warned its employees about the dangers of leaking confidential information.
With ChatGPT retaining confidential information, the episode highlights the need for a clear legislative precedent governing AI providers, companies that may routinely end up holding sensitive data.
Samsung is now developing its own AI tool for internal use.
The ban on AI tools on its internal networks stems from fears that uploading sensitive information to these platforms poses a significant security risk.