Protected: Data Integrity, Data Security, and the new NIST Cybersecurity Framework
There is no excerpt because this is a protected post.
In the era of digital transformation, Large Language Models (LLMs) have emerged as powerful tools for businesses, enabling them to automate tasks, generate insights, and improve decision-making. However, the use of these models also raises significant privacy challenges. Despite the inherent privacy protections in enterprise solutions like Microsoft’s Azure OpenAI Service, residual privacy issues … Read more
67% of government agencies have increased their financial commitment to digital transformation. Long lines and endless paper documents no longer suffice – citizens now expect public services with less hassle and with technology seamlessly embedded. However, this increased reliance on technology makes government agencies and institutions prime targets for cyberattacks – 30% of public sector agencies struggle … Read more
Quebec’s commitment to modernizing its data protection measures is evident in Law 25, whose most important provisions came into effect on September 22, 2023. A significant component of this new legislation is the requirement for private companies to conduct Privacy Impact Assessments (PIAs). While already mandatory in certain circumstances for public … Read more
Hello, dear community! We are thrilled to announce the release of Version 3.5, packed with new features, improvements, and fixes crafted from your feedback and our commitment to enhancing your experience and productivity. Let’s dive in and explore what’s new and enhanced in this version! Now Available on Azure Marketplace Great news for … Read more
Large Language Models (LLMs), such as those available through Azure’s OpenAI Service, have become pivotal technology, enabling machines to understand and generate human-like replies to questions posed in a chat format. For organizations looking to augment those models with domain-specific knowledge, or for traditional ML applications such as sentiment analysis, fine-tuning has emerged as an approach to … Read more
In the first part of this blog series, we discussed data privacy in Germany and the various obstacles associated with redacting Personally Identifiable Information (PII) in the German language. Now, in the second installment, we further explore the multifaceted landscape of German data privacy, shedding light on challenges that emerge not just from linguistic intricacies, … Read more
The Biden-Harris Administration recently issued a sweeping Executive Order to forge America’s path in responsible AI development, encouraging both innovation and risk mitigation. The Executive Order spells out a multi-faceted plan touching upon AI safety and security, privacy, equity, civil rights, and more, with profound implications for organizations that are already embedded in the AI … Read more
The recently amended EU AI Act proposal, which we introduced in this blog post, would regulate “foundation models,” defined in Art. 3(1c) as “an AI model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks.” This blog post sets out … Read more
In the field of artificial intelligence, Large Language Models (LLMs) such as GPT-4 stand out as a major innovation, proving useful in a range of areas including automated customer support and creative content generation. Nonetheless, leveraging the capabilities of these models while maintaining data privacy remains a notable challenge. This blog aims … Read more