COMMENTARY: The recent launch of DeepSeek, and the ensuing halting of registrations due to an alleged cyberattack, have turned heads across the tech industry.
An upstart Chinese AI company, DeepSeek developed a low-cost AI model using less technologically advanced chips that performs on par with models like OpenAI’s ChatGPT, and it now has everyone from Wall Street to Silicon Valley questioning the nation’s AI market prowess.
[SC Media Perspectives columns are written by a trusted community of SC Media cybersecurity subject matter experts. Read more Perspectives here.]
No one yet knows for sure how DeepSeek achieved such a feat, and speculation abounds, making it difficult to know what’s real. What we know for sure: DeepSeek has developed a highly effective AI model, and people are flocking to try it out. Further details will surely emerge in time. For now, here are three important lessons gleaned from recent developments, and what IT and security professionals need to know moving forward:
The AI market has been so hot, there was bound to be a correction
Some reports state that, compared to alternatives like ChatGPT, DeepSeek can be up to 20-30 times more cost-effective to run, primarily because of its efficient training methods and use of less powerful hardware. Although such a feat would be a boon to the industry, many remain skeptical, questioning whether it’s all just too good to be true.
Today, we still don’t know how DeepSeek achieved this feat. Did it use older, cheaper NVIDIA H800 chips? Leverage unsanctioned chips? Benefit from lenient labor laws? Or did it truly drive a breakthrough in AI model training innovation?
The takeaway for right now: the rate of investment in AI infrastructure was unsustainable, and an AI market correction was bound to happen eventually. The recent developments did just that. Experts are now forced to ask whether we need to spend an estimated $100 million to $1 billion to train models, or whether we can take the much cheaper $5 million to $6 million training approach that DeepSeek took. With the answer to that question still unknown, we must closely examine the security and ethics of this model before determining the path forward.
The industry must closely examine security, privacy, and ethics
Employees want to use AI because it makes them more productive, but the recent DeepSeek cyberattacks show us that these AI tools are more vulnerable than we think. Without the proper governance policies, training programs, and security controls, unsanctioned AI tools could become a huge risk for companies worldwide. We saw this just recently, with the news that DeepSeek reportedly left a critical database publicly accessible,
exposing over 1 million records including user prompts, system logs, and API authentication tokens. There are still so many unanswered ethical, privacy, and security questions. Why won’t DeepSeek answer any questions about human rights? Are the terms of use any different from those we saw with TikTok that led to significant privacy risk and scrutiny? How much time and money did DeepSeek actually put into its security controls, or registration infrastructure, if it was hacked within a week of its launch? We must examine these questions closely before we make assumptions about the path forward.
Meanwhile, a resilient approach to security, one that offers visibility into unapproved AI usage by detecting LLMs and related scripts on employee devices, is critical for mitigating any detrimental impact that may arise from DeepSeek’s use. Real-time data visibility tools can help IT departments track unauthorized downloads, flag suspicious activity, and ensure compliance with company policies. Companies that strike the right balance between innovation and security will thrive in the AI-powered future. Those that can’t will continue to fly blind and fail fast.
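To make the idea concrete, here is a minimal sketch of the kind of endpoint check such visibility tooling performs: walking a filesystem and flagging files whose names match patterns associated with local LLM tooling. The pattern list below is purely illustrative (hypothetical examples, not a vetted blocklist), and real products like the enterprise tools described above do far more than filename matching.

```python
# Minimal sketch: flag files whose names match patterns associated
# with unsanctioned local AI tooling. Patterns are illustrative
# assumptions only, not a vetted blocklist.
import fnmatch
import os

UNSANCTIONED_AI_PATTERNS = [
    "*deepseek*",   # hypothetical client binaries or installers
    "*ollama*",     # local LLM runtimes often used to run models offline
    "*.gguf",       # quantized model weight files
]

def find_unsanctioned_artifacts(root):
    """Walk `root` and return paths matching any blocklist pattern."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            lowered = name.lower()
            if any(fnmatch.fnmatch(lowered, pat) for pat in UNSANCTIONED_AI_PATTERNS):
                hits.append(os.path.join(dirpath, name))
    return hits
```

In practice an IT team would feed such hits into an alerting or compliance workflow rather than act on filenames alone, since matching by name produces both false positives and easy evasions.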
Businesspeople love competition, but let's remain cautiously optimistic
We’ve gone through many “operating system” battles in technology over the years, with wars over everything from PC operating systems and office applications to the cloud. Now we’re seeing the beginnings of the AI model or AI operating system war.
Economists and technologists alike must examine this situation with a cautiously optimistic eye. On the optimistic end, competition fuels innovation and better products. On a cautionary note, it’s still unclear if DeepSeek has the infrastructure in place to ensure organizations can run AI securely.
From a security perspective, the interest in DeepSeek has spiked significantly. While some are simply eager to use it and try it out, others are very concerned about the potential risks associated with it. Moving forward, we must all proceed with caution.
Tim Morris, chief security advisor, Tanium