Microsoft has launched Prompt Shields, a new security feature now generally available, aimed at safeguarding applications powered by Foundation Models (large language models) in its Azure OpenAI Service.
In the rapidly evolving landscape of generative AI, business leaders are trying to strike the right balance between innovation and risk management. Prompt injection attacks have emerged as a ...
Generative AI is both extremely promising and extremely risky; new tools for filtering malicious prompts, detecting ungrounded outputs, and evaluating the safety of models will make it safer to use.
Mindgard announced that it had detected two security vulnerabilities in Microsoft's Azure AI Content Safety Service. The vulnerabilities enabled an attacker to bypass existing content safety measures ...
Microsoft's Azure teams for OpenAI Service and AI Content Safety launched what they call a Responsible AI capability that protects applications powered by artificial intelligence Foundation Models, ...
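Prompt Shields is exposed through the Azure AI Content Safety REST API as the `text:shieldPrompt` operation, which analyzes a user prompt (and optionally attached documents) and reports whether a prompt injection attack was detected. The sketch below shows the general request and response shape only; the endpoint, key, and `api-version` values are placeholders, and the exact field names should be checked against the current Azure AI Content Safety API reference before use.

```python
import json
import urllib.request

# Placeholder values; real ones come from an Azure AI Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"
API_VERSION = "2024-09-01"  # assumed API version; verify against current docs


def build_shield_prompt_request(user_prompt, documents=()):
    """Build the JSON body for the text:shieldPrompt operation:
    the user's prompt plus any documents (e.g. retrieved context) to scan."""
    return {"userPrompt": user_prompt, "documents": list(documents)}


def attack_detected(response_body):
    """Inspect the analysis flags in a shieldPrompt response body:
    True if either the prompt or any document was flagged as an attack."""
    if response_body.get("userPromptAnalysis", {}).get("attackDetected"):
        return True
    return any(
        d.get("attackDetected")
        for d in response_body.get("documentsAnalysis", [])
    )


def shield_prompt(user_prompt, documents=()):
    """POST the payload to the Prompt Shields endpoint (network call)."""
    url = f"{ENDPOINT}/contentsafety/text:shieldPrompt?api-version={API_VERSION}"
    req = urllib.request.Request(
        url,
        data=json.dumps(build_shield_prompt_request(user_prompt, documents)).encode(),
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return attack_detected(json.load(resp))
```

In a typical deployment this check runs before the prompt (and any retrieved documents) is forwarded to the model, so a flagged request can be rejected or logged instead of reaching the LLM.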