Is Cybersecurity Encryption Ready to Break?

Cyberattacks are already bad today. But what if encryption stopped working altogether? We are reaching a point where global adversaries can crack encryption, and in the near future they will be able to crack all of it.

By Larry Karisny October 6, 2017


From mismanaged encryption keys and system errors to eventual crypto cracking, Public Key Infrastructure (PKI) encryption has become increasingly difficult to maintain as demand for these encryption services grows exponentially.
Security adviser Roger A. Grimes has been installing PKIs for private and public companies for more than two decades. In a 2015 CSO article, 4 Fatal Problems with PKI, he explained why PKI has too many moving parts: even when it works perfectly, it doesn't solve the biggest security problems, and eventually it will stop working altogether.
These systems are complex, requiring the deployment and management of certificates, registration authorities, directories, digital signatures, key protocols and key validation. They are so complex, in fact, that they are seldom installed properly and generate so many errors that system operators often ignore them.

In addition, Internet of Things (IoT) security providers are finding that while PKI may work in Web applications, it clearly was not designed for IoT devices. IoT processors are often so small that they cannot update key certificates or embed any type of encryption at all. With key and certificate sizes constantly increasing and the number of IoT connections reaching the billions, PKI encryption is effectively dead for IoT.


With recent advances in quantum computing, the focus must shift to developing encryption whose algorithms cannot be cracked; otherwise we open a Pandora's box of hacking.
The National Institute of Standards and Technology (NIST) has been studying this problem and is focusing on the post-quantum encryption proposals still open in its Post-Quantum Cryptography project. Although it is great to see NIST recognizing the urgency of this potential crypto-cracking dilemma, some industry experts disagree with its approach.
Recently there was an interesting debate among security industry professionals on the respected blog Schneier on Security. It was in response to a post about a research paper on RSA cryptography after quantum computing.
The researchers' answer: just make the encryption key algorithms bigger, more complex and more costly. How big? By the readers' calculations, a one-terabyte public key. Since IoT devices hardly have space for kilobytes, this is simply not the direction to go. Not only would these resource-hogging crypto-algorithms consume valuable processing space, they would also burn network resources and add latency.
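A quick back-of-envelope calculation makes the scale problem concrete. The numbers below (256 KB of flash, a 250 kbit/s radio) are my own illustrative assumptions for a constrained IoT device, not figures from the blog debate:

```python
# Back-of-envelope arithmetic (illustrative assumptions, not from the
# blog thread): why a 1 TB public key is a non-starter for IoT hardware.

TB = 10**12                       # bytes in one terabyte (decimal)

key_size = 1 * TB                 # hypothetical 1 TB post-quantum public key
iot_flash = 256 * 1024            # a generous 256 KB of flash on a small MCU
link_rate = 250_000 / 8           # 250 kbit/s low-power radio, in bytes/sec

fraction_storable = iot_flash / key_size
transfer_hours = key_size / link_rate / 3600

print(f"Fraction of the key a 256 KB device could hold: {fraction_storable:.1e}")
print(f"Hours to ship the key over a 250 kbit/s link: {transfer_hours:,.0f}")
```

Even setting storage aside, delivering one such key over a typical low-power radio would take on the order of a year of continuous transmission.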


Over the years I have reviewed hundreds of cybersecurity companies. The people that normally have the best solutions are the ones that already know the problems coming from current technologies. Sadly, they often need to wait until the problems come before they can get people’s attention and offer different solutions. 
The real problem in current cryptography is the very thing that makes the technology work. Today's encryption algorithms are static in nature, repeating the same processes over and over. Their behaviors are expected and their patterns can be anticipated, so a hacker who identifies those repeating processes can exploit them to crack the system and take control. In fact, hackers today are using artificial intelligence to find these patterns quickly. This is why quantum computing and supercomputing can break current cryptography.
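A toy example shows what "static behavior" can leak. The cipher below is a deliberately weak repeating-key XOR, not any real algorithm: because the same key is applied the same way every time, identical plaintext blocks produce identical ciphertext blocks, exactly the kind of pattern an observer can latch onto. Real ciphers run in ECB mode show the same flaw; randomized modes with a fresh IV or nonce per message are the standard countermeasure.

```python
# Toy repeating-key XOR "cipher": deliberately weak, for illustration only.
# A static key applied identically every time means repetition in the
# plaintext survives into the ciphertext.

def xor_encrypt(plaintext: bytes, key: bytes) -> bytes:
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

key = b"KEY!"                     # 4-byte static key
msg = b"SAME BLOCK DATA!" * 3    # the same 16-byte block, three times

ct = xor_encrypt(msg, key)

# Split the ciphertext into 16-byte blocks: all three are identical,
# so an observer learns the plaintext repeats without knowing the key.
blocks = [ct[i:i + 16] for i in range(0, len(ct), 16)]
print(len(set(blocks)))  # -> 1
```

XOR is its own inverse here, so the same function decrypts; the point is only that a fully deterministic process leaves a visible fingerprint.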


There is a solution to this problem. Patented and deployed by a company called MerlinCryption, Anti-Statistical Block Encryption (ASBE) leverages dynamic algorithmic complexity and employs stochastic randomization throughout its encryption process. Because all output is variable, there is no static behavior to monitor.
The key word is variable. Even a quantum computer cannot crack this encryption that protects data as it is created, viewed, edited, shared, stored and moved across any communications channel or in the cloud. The key then vanishes after use, leaving no trace of the encryption process.
Authentication is also an important part of security. Most authentication factors are based on something you know, something you have or something you are. Attackers can imitate the authentication rights of employees or systems to gain access and control. MerlinCryption has introduced a fourth category of authentication factor that uses information that is temporary and always unique. These factors are not deterministic but stochastic in nature.
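For comparison, the best-known existing construction built on temporary, always-unique information is the time-based one-time password of RFC 6238 (TOTP). The sketch below is that standard scheme, not MerlinCryption's method: an HMAC over the current 30-second time window yields a short code that expires almost immediately, so a captured value is useless moments later.

```python
import hashlib
import hmac
import struct
import time

def time_code(secret, t=None, step=30, digits=6):
    """RFC 6238-style time-based one-time code: valid only within one
    30-second window, so each value is temporary and non-reusable."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

secret = b"shared-secret"  # hypothetical shared secret for illustration
# Two timestamps in the same 30-second window yield the same code;
# the next window yields a fresh one.
print(time_code(secret, t=31), time_code(secret, t=59))
```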
Finally, MerlinCryption offers true end-to-end, person-to-processor and processor-to-processor encryption and authentication. Its smallest key is more than 10^522 times stronger than AES's 256-bit key. There's good news for IoT providers too: it offers a 58 KB Low Overhead Platform and a 284 KB Embedded Encryption Platform that can fit in the smallest microprocessors. Oh, and it's cheaper too. Not bad.


I seldom focus on encryption solutions because, as we in the cybersecurity business are aware, they address only part of the problem. The potential breaking of all authentication and encryption is serious, though. Leaving our cyberdefenses wide open without minimally hardening our systems would be catastrophic; it would allow cyberattackers to strike at will.
It's nice to end an article that discusses all the problems in a specific area of cybersecurity by detailing solutions that are available immediately. The warnings we are getting from both the private and public sectors about IoT security issues are chilling. I will be speaking at a major IoT convention about this very issue. The question is: Are we going to talk about it, or do it?
Larry Karisny is the director of ProjectSafety.org, an adviser, consultant, speaker and writer supporting advanced cybersecurity technologies in both the public and private sectors. He will be speaking at the IoT Evolution Expo in Orlando, Fla. on Thursday, Jan. 25, 2018 from 10-10:55 a.m. discussing IoT security strategies.


The Race to Cyberdefense, Artificial Intelligence and the Quantum Computer

The power grid, oil and gas, and even existing telecoms are perfect targets for funding and development of these technologies.

By Larry Karisny August 8, 2017

I've been following cybersecurity startups and hackers for years, and I suddenly discovered how hackers are always ahead of the rest of us — they have a better business model funding them in their proof of concept (POC) stage of development.
To even begin protecting ourselves from their well-funded advances and attacks, cyberdefense and artificial intelligence (AI) technologies must be funded at the same level in the POC stage.
Today, however, traditional investors not only want your technology running, they also need assurances that you already have a revenue stream — which stifles potential new technology discovery at the POC level. And in some industries, this is dangerous.
Consider the fast-paced world of cybersecurity, in which companies pursue traditional funding avenues by promoting their product's technical capabilities to attract investors. This promotion and disclosure of their technology, however, gives hackers a road map to the new cyberdefense technologies and a window of time in which to learn how to exploit them.
This same road map exists for technologies covered in detail when standard groups, universities, governments and private labs publish white papers — documents that essentially assist hackers by giving them advanced notice of cyberdefense techniques.
In addition, some hackers receive immediate funding from nation-states that coordinate cyberwarfare like a traditional military, while others belong to organized secret groups that fund ransomware and DDoS attacks. These hackers get immediate funding and then throw their technology onto the Internet for POC discovery.
One project that strongly makes the case for rapidly funding cyberdefense technologies to keep up with hackers is the $5.7 billion U.S. Department of Homeland Security (DHS) EINSTEIN cyberdefense system, which was deemed obsolete upon deployment for failing to detect 94 percent of security vulnerabilities. As this situation illustrates, the traditional methods of funding cyberdefense, with their years of bureaucratic analysis and vendor contracts, do not work in the fast-moving world of cyberdefense technology discovery. After the EINSTEIN failure, DHS decided to conduct an assessment; it is currently working to understand whether it is making the right investments for the ever-changing cyberenvironment.
But it also has other roadblocks, as even large technology companies and contractors with which DHS does business have their own bureaucracies and investments that ultimately deter the department from getting the best in cyberdefense technologies. And once universities, standards groups, regulation and funding approvals are added to these processes, you're pretty much assured to be headed for another disaster.
But DHS doesn’t need to develop these technologies itself. The department needs to support public- and private-sector POCs to rapidly mature and deploy new cyberdefense technologies. This suggestion is supported by what other countries are successfully doing — including our adversaries.
The same two things that have motivated mankind all through history — immediate power and money — are now motivating hackers, and cyberdefense technologies are taking years to be deployed. So I'll say it again: The motivational and funding model of cyberdefense technologies must change. The key to successful cyberdefense technology development is making it as aggressive as the hackers that attack it. And this needs to be done at the conceptual POC level.
The concern in cyberdefense (and really all AI) is the race to the quantum computer.
Quantum computer technologies can't be hacked, and in theory their processing power can break all encryption. The computational physics behind quantum computing also offers remarkable capabilities that will drastically change all current AI and cyberdefense technologies. This is a winner-takes-all technology, offering computational power with absolute security: capabilities we can now only imagine.
The most recent funding source for hackers is Bitcoin, which uses the decentralized and secure blockchain technology. It has even been used to support POC funding in what is called an Initial Coin Offering (ICO), the intent of which is to crowdfund early startup companies at the development or POC level by bypassing traditional and lengthy funding avenues. Because this type of startup seed offering has been clouded with scams, it is now in regulatory limbo.
Some states have passed laws that make it difficult to legally present and offer an ICO. While the U.S. seems to be pushing ICO regulation, other countries are still deciding what to do. But like ICOs or not, they offer first-time startups an avenue of fast-track funding at the concept level — where engineers and scientists can jump on newer technologies by focusing seed money on testing their concepts. Bogging ICOs down with regulatory laws will both slow down legitimate POC innovation in the U.S. and give other countries a competitive edge.
Another barrier to cyberdefense POC funding is the size and technological control of a handful of tech companies. Google, Facebook, Amazon, Microsoft and Apple have become enormous concentrations of wealth and data, drawing the attention of economists and academics who warn they're growing too powerful. Now as big as major American cities, these companies are mega centers of both money and technology. They are so large and control so much of the market that many are beginning to view them as in violation of the Sherman Antitrust Act. So how can small startups compete with these tech giants and potentially fund POCs in areas such as cyberdefense and AI? By aligning with giant companies in industries that have the most need for cyberdefense and AI technologies: critical infrastructure.
The industries that are most vulnerable and could cause the most devastation if hacked are those involved in critical infrastructure. These large industries have the resources to fund cyberdefense technologies at the concept level — and they would obtain superior cyberdefense technologies in doing so.
Cyberattacks on critical infrastructure could devastate entire national economies, so that infrastructure must be protected by the most advanced cyberdefense. Quantum computing and artificial intelligence will bring game-changing technology in both cyberdefense and the new intellectual property deriving from the quantum sciences. Entering these technologies at the POC level is like being Microsoft or Google years ago. Funding the development of these new cyberdefense and AI technologies is needed soon. But what about today?
Future quantum computer capabilities will also demand immediate short-term fixes in current cyberdefense and AI. New quantum-ready compressed encryption and cyberdefense deep learning AI must be funded and tested now at the concept level. The power grid, oil and gas, and even existing telecoms are perfect targets for this funding and development. Investing today would offer current cyberdefense and business intelligence protection while creating new profit centers in the licensing and sale of these leading-edge technologies. This is true for many other industries, all differing in their approach and requiring specialized cyberdefense capabilities and new intelligence gathering that will shape their future.

So we must find creative ways of rapidly funding cyberdefense technologies at the conceptual level. If this is what hackers do and it's why they're always one step ahead, shouldn't we work to surpass them?


Cybersecurity Industry Must Adopt Cyberdefense Tech that Utilizes Analytics, Artificial Intelligence

We must recognize that our cyberdefense technologies are not working and will not work. Cases in point: Our most sensitive cyberoffense technologies have been hacked; power companies admit they would have great difficulty stopping a cyberattack and are being asked to prepare to operate at far less than full capacity during one; and 70 percent of oil and gas companies have been attacked, with the threat still growing.
The cybersecurity industry is in chaos and needs to move toward new technologies — cyberdefense technologies that are beginning to leverage analytics, machine learning and artificial intelligence (AI). Hackers are taking advantage of the same technologies, so the cyberdefense industry needs to jump on board. Let's quit playing catch-up and instead take a proactive approach to cybersecurity.
So what is this industry doing wrong, and how can we change it?


One of the core principles in cybersecurity is to establish a baseline of what the operational and industrial system is doing. Once this is done, you can:
  • define your security policies;
  • evaluate the risk;
  • look at security technologies that could reduce the risk;
  • evaluate the potential threat impact cost versus the cost of the security technology;
  • get management approval; and then
  • deploy the security technology. 
Sounds simple, right? Not so. 
We have layered so much hardware, network and software on top of each other that we truly can't see what our systems are doing. And if we can't see what our systems are doing, how can we establish a system baseline of what is normal in daily system operations? The fact is that we can't see it, which is not a good start to one of the most basic principles of security. This must change.
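To make the baseline idea concrete, here is a minimal sketch of frequency-based baselining: learn which events occur during known-normal operation, then flag anything outside that profile. The event names are hypothetical placeholders, and real systems would baseline far richer signals (timing, volumes, sequences):

```python
from collections import Counter

# Minimal baselining sketch: count event types seen during known-normal
# operation, then flag live events that fall outside that profile.
# Event names are hypothetical placeholders.

baseline_log = ["login", "read_sensor", "write_db", "login", "read_sensor"] * 20
live_log = ["login", "read_sensor", "firmware_flash", "write_db"]

baseline = Counter(baseline_log)

def anomalies(events, profile, min_seen=1):
    """Return events observed fewer than min_seen times in the baseline."""
    return [e for e in events if profile[e] < min_seen]

print(anomalies(live_log, baseline))  # -> ['firmware_flash']
```

The catch, as the paragraph above notes, is that this only works if the baseline itself is trustworthy and visible, which layered systems make hard to guarantee.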


Conventional cybersecurity generally routes everything to the human first, even though the system's machine actions carry out most of the operational and industrial processes. As metadata grows, it becomes increasingly difficult to manage and understand. Even the best analytic algorithms can't keep up and are themselves subject to error.
Human error is the major reason for cyberbreaches, yet we are pointing increasingly complex systems toward people who can neither see nor understand what those systems are doing. It is a dangerous scenario to keep disconnecting humans from massively automated systems that run without audit. Hackers know this, and they will continue to exploit these systems until new technologies can deeply and consistently view and audit our operational baseline.
People need to be able to see, with deep inspection, the structured and unstructured data that run their systems. Without this, a true operations and security baseline cannot be established, leaving the system exposed to cyberattacks. AI, machine learning and analytics can assist in viewing this data, but they exponentially increase the amount of structured and unstructured data that must be secured. These approaches also create vulnerabilities because they layer additional algorithms and software over critical data and system actuators, giving hackers a targeted exploit that could allow a complete hijacking of system processes. And all of this is happening while humans are continually being removed from our system processes.


Industry experts are warning of the use and abuse of AI and its use in both cyberdefense and hacking. 
As Sean Carroll, a cosmology and physics professor at the California Institute of Technology told Vox.com, "It is absolutely right to think very carefully and thoroughly about what those consequences might be, and how we might guard against them, without preventing real progress on improved artificial intelligence."
And Nick Bostrom, director of the Future of Humanity Institute at Oxford University, also told Vox.com that “the transition to machine superintelligence is a very grave matter, and we should take seriously the possibility that things could go radically wrong. This should motivate having some top talent in mathematics and computer science research the problems of AI safety and AI control.”
Even the newest neural network technologies that Google is using — the basis of its DeepMind Artificial Intelligence technologies — can be hacked. The reason is that we're using existing technologies to learn what our systems are doing, so we are essentially adding points of offensive exploit to cyberdefense technologies that are supposed to reduce the attack vector. The cybersecurity industry is, in essence, going in the wrong direction. 
A good example of this is tech giants buying up AI cybersecurity startups even as the DARPA Cyber Grand Challenge demonstrated how AI can hack into AI. Machine learning and AI connect to a very sensitive part of operational and industrial control systems; that's how they learn. Hackers can use AI to watch what AI is doing, which in turn can offer total control of the machine systems. All third- and fourth-generation programming language code can be hacked, period. We must find a migration path to codeless fifth-generation programming language (5GL) technology that uses codeless signature patterns.


I have discussed the use of 5GL in previous articles and spoke about the technology at Oak Ridge National Laboratory, where I explained why we need to use 5GL codeless patterns in parallel with existing operational and industrial system technologies. Using 5GL in cybersecurity as a system auditing tool could be the much-needed answer for new cyberdefense technologies.
A company called On Point Cyber has been watching the development of these 5GL technologies for years, and CEO Tom Boyle said he thinks the timing is right for 5GL.
"Disruptive technologies must have a migration path back to existing technologies and forward to newer technologies. To achieve this, we first index all the current structured and unstructured data, then run them in parallel to the new 5GL codeless signature pattern technologies," he said. "This offers a real-time deep inspection of the operational system security baseline and the immediate detection of anything not part of that baseline."
Boyle also noted that what's great about 5GL technology is that it can be used without changing any of the current operational and industrial system technologies.
"These newer technologies can then offer older technologies a migration path from code to codeless signature pattern technologies that could even be used on the quantum computer," he added. "The use of 5GL in cyberdefense could prove the most important use of this technology today. Clearly, we need to do something different."


We are entering dangerous times in cybersecurity, and both the public and private sectors must recognize the urgency of finding an industry correction: immediately invest in cybersecurity technologies that offer more than calculated risk remediation. Right now we are throwing things at the wall that could put our cyberdefense technologies in greater danger. We need to find solutions that stop cyberattacks.
In the confusion of pretty words and explanations of cyberdefense technologies, government officials and CEOs are asking the simple question, "Can I invest in cyberdefense technologies that work?" It is time to answer that question with the recognition that we need to move on to entirely new technologies that can secure us today and prepare us for the future.