When you want it done right, the first time.
Technology Decisions, LLC specializes in helping senior executives with the strategic planning, understanding, and tactical, bottom-line benefits of technology to the enterprise.
We have practiced network design, security, management and performance optimization for over twenty-five years.
Our principal, Matthew McCormick, has served as an expert witness on domestic and international patent infringement cases. His opinions have been sought out by major investment houses and consulting firms.
We bring common-sense analysis and solutions to business problems. We never endorse technology for technology's sake; instead, we listen carefully to your business challenges and recommend the right technology, applied in a best-practice way, only if it is the right tool for you at this time and will bring long-term benefits.
Too many networks are poorly designed, poorly understood, poorly managed, underperforming, and too costly, and ultimately they fail to deliver service. That outcome is inevitable when the work is not done correctly.
If you want it done right the first time, saving you time, pain, and money, hire us.
We LIKE them TOO much!!!
Covid-19 is “causing hiccups for the algorithms that run behind the scenes in inventory management, fraud detection, marketing, and more,” says Heaven. “Machine-learning models trained on normal human behavior are now finding that normal has changed, and some are no longer working as they should.” Streaming services that saw a sudden surge in viewers found the accuracy of their recommendations affected. The severity of the impact varies, depending largely on the purpose of the AI/ML-based program. Automated inventory systems may assume there is an error when order volumes are suddenly much higher than usual. While machine learning is designed to be responsive, issues arise when a new data set is too different from the one used for training.
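Detecting that "normal has changed" is, at bottom, a distribution-shift check. As a minimal, illustrative sketch (assuming nothing about any real inventory system; thresholds and numbers are invented for the example), a team might compare a recent window of daily order counts against the training-era baseline using a simple z-score:

```python
import statistics

def drift_alert(baseline, recent, z_threshold=3.0):
    """Flag when recent observations drift far from the training baseline.

    Compares the mean of a recent window against the baseline mean,
    in units of the baseline's standard deviation (a simple z-score).
    """
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = abs(statistics.mean(recent) - mu) / sigma
    return z > z_threshold

# Orders per day during "normal" times vs. a sudden pandemic-era spike.
baseline = [100, 95, 105, 98, 102, 97, 103]
spike = [240, 260, 255]
print(drift_alert(baseline, spike))  # True: the spike is far outside baseline variation
```

A check like this does not fix the model; it simply tells the humans Heaven describes that the model's assumptions no longer hold and retraining or intervention is needed.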
“Many of these problems with models arise because more businesses are buying machine-learning systems but lack the in-house know-how needed to maintain them.”
Experts in machine learning, artificial intelligence, and data science are needed to monitor for potential breaks in the system.
The constantly changing situation makes optimization without human intervention even more difficult, and may prove to those who thought ML and AI could function wholly independently that this will not always be the case.
“You need a data science team who can connect what’s going on in the world to what’s going on in the algorithms. An algorithm would never pick some of this stuff up.”
August 26, 2021, by Jennifer Gregory, SecurityIntelligence.com
Data poisoning against security software that uses artificial intelligence (AI) and machine learning (ML) is likely the next big cybersecurity risk. According to the RSA 2021 keynote presentation by Johannes Ullrich, dean of research of SANS Technology Institute, it’s a threat we should all keep an eye on.
“One of the most basic threats when it comes to machine learning is one of the attackers actually being able to influence the samples that we are using to train our models,” Ullrich said at RSA.
With this new threat quickly emerging, defenders must learn how to spot data poisoning attacks and how to prevent them. Otherwise, you will make business and cybersecurity decisions based on faulty data.
What Is Data Poisoning?
When attackers tamper with data used to train AI models, it effectively becomes ‘poisoned.’ Because AI relies on that data to learn how to make accurate predictions, the predictions generated by the algorithm will be incorrect.
Threat actors are now messing with data in ways that can be used for cyberattacks. For example, they can do a lot just by changing data for a recommendation engine. From there, they can get someone to download a malware app or click on an infected link.
Data poisoning is so dangerous because it uses AI against us. We are increasingly putting our trust in AI predictions for so many aspects of our personal lives and our work. It does everything from helping us choose a movie to watch to telling us which customers might cancel their service.
As digital transformation sped up due to COVID-19, AI became even more common. Digital transactions and connections are the norm rather than the exception.
Data Poisoning and Cybersecurity Tools
Threat actors are using data poisoning to infiltrate the very tools defenders are using to spot threats, too. First, they can change the data or add data to generate incorrect classifications. In addition, attackers also use data poisoning to generate back doors.
This increase in data poisoning attacks on AI tools means businesses and agencies may hesitate to turn to those tools. It also makes it more challenging for defenders to know which data to trust.
During the keynote, Ullrich said the solution starts with having thorough knowledge of the models used by AI cybersecurity tools. If you don’t understand what protects your data, it becomes challenging to tell whether those techniques and tools are accurate.
Identifying Data Poisoning Attacks
Data poisoning attacks are challenging and time consuming to spot. So, victims often find that when they discover the issue, the damage is already extensive.
In addition, they don’t know what data is real and what data has been manipulated. Often data poisoning attacks are an inside job and committed at a very slow pace. Both make the changes in the data easy to miss.
During the RSA session ‘Evasion, Poisoning, Extraction and Inference: The Tools to Defend and Evaluate’, Abigail Goldsteen of IBM Research recommended cybersecurity professionals turn to the Adversarial Robustness 360 Toolbox (ART) to identify, stop and prevent data poisoning attacks. This open-source toolkit allows developers to quickly craft and analyze attacks, and then rapidly select the right defense methods for their machine learning models.
Using the Tools We Have
So, should you not use AI? At this point, it would not be practical to abandon it completely. Doing so will result in threat actors simply using AI and ML to create attacks that we cannot defend against.
Instead, as defenders, we must not blindly trust the tools and the data we have. Becoming more knowledgeable in how the algorithms work and routinely checking the data for anomalies will help us keep ahead of attacks.
CSO Senior Writer | April 12, 2021, csoonline.com
How data poisoning attacks corrupt machine learning models. Data poisoning can render machine learning models inaccurate, possibly resulting in poor decisions based on faulty outputs. With no easy fixes available, security pros must focus on prevention and detection.
Machine learning adoption exploded over the past decade, driven in part by the rise of cloud computing, which has made high performance computing and storage more accessible to all businesses. As vendors integrate machine learning into products across industries, and users rely on the output of its algorithms in their decision making, security experts warn of adversarial attacks designed to abuse the technology.
Most social networking platforms, online video platforms, large shopping sites, search engines and other services have some sort of recommendation system based on machine learning. The movies and shows that people like on Netflix, the content that people like or share on Facebook, the hashtags and likes on Twitter, the products consumers buy or view on Amazon, the queries users type in Google Search are all fed back into these sites' machine learning models to make better and more accurate recommendations.
It's not news that attackers try to influence and skew these recommendation systems by using fake accounts to upvote, downvote, share or promote certain products or content. Users can buy services on the underground market to perform such manipulation, as can the "troll farms" used in disinformation campaigns to spread fake news.
"In theory, if an adversary has knowledge about how a specific user has interacted with a system, an attack can be crafted to target that user with a recommendation such as a YouTube video, malicious app, or imposter account to follow," Andrew Patel, a researcher with the Artificial Intelligence Center of Excellence at security vendor F-Secure explained in a blog post. "As such, algorithmic manipulation can be used for a variety of purposes including disinformation, phishing scams, altering of public opinion, promotion of unwanted content, and discrediting individuals or brands. You can even pay someone to manipulate Google’s search autocomplete functionality."
What is data poisoning?
Data poisoning or model poisoning attacks involve polluting a machine learning model's training data. Data poisoning is considered an integrity attack because tampering with the training data impacts the model's ability to output correct predictions.
The difference between an attack that is meant to evade a model's prediction or classification and a poisoning attack is persistence: with poisoning, the attacker's goal is to get their inputs to be accepted as training data. The length of the attack also differs because it depends on the model's training cycle; it might take weeks for the attacker to achieve their poisoning goal.
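To make that persistence point concrete, here is a deliberately toy illustration (pure Python, not any real security product; all data and names are invented): a nearest-centroid classifier whose "benign" training set gradually absorbs attacker-submitted points across training cycles, until traffic it once flagged is accepted.

```python
def centroid(points):
    """Mean of a list of 1-D feature values."""
    return sum(points) / len(points)

def classify(x, benign, malicious):
    """Nearest-centroid rule: label x by whichever class mean is closer."""
    if abs(x - centroid(malicious)) < abs(x - centroid(benign)):
        return "malicious"
    return "benign"

# Clean training data: benign samples cluster near 1.0, malicious near 9.0.
benign = [0.8, 1.0, 1.2]
malicious = [8.8, 9.0, 9.2]
print(classify(5.5, benign, malicious))   # "malicious": closer to the malicious centroid

# Poisoning: the attacker gets malicious-looking samples accepted as "benign"
# training data over several cycles, dragging the benign centroid toward them.
poisoned_benign = benign + [8.0, 8.5, 9.0]
print(classify(5.5, poisoned_benign, malicious))  # "benign": the boundary has shifted
```

The attack works precisely because the poisoned points were accepted as training data; each retraining cycle bakes the shift in a little further, which is why the article stresses that poisoning plays out over weeks rather than in a single request.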
Data poisoning can be achieved either in a black-box scenario against classifiers that rely on user feedback to update their learning, or in a white-box scenario where the attacker gains access to the model and its private training data, possibly somewhere in the supply chain if the training data is collected from multiple sources.
No easy fix
The main problem with data poisoning is that it's not easy to fix. Models are retrained with newly collected data at certain intervals, depending on their intended use and their owner's preference. Since poisoning usually happens over time, and over some number of training cycles, it can be hard to tell when prediction accuracy starts to shift.
Reverting the poisoning effects would require a time-consuming historical analysis of inputs for the affected class to identify all the bad data samples and remove them. Then a version of the model from before the attack started would need to be retrained. When dealing with large quantities of data and a large number of attacks, however, retraining in this way is simply not feasible, and the models never get fixed; training a model a single time can cost on the order of $16 million.
Prevent and detect
Given the difficulties in fixing poisoned models, model developers need to focus on measures that could either block attack attempts or detect malicious inputs before the next training cycle happens—things like input validity checking, rate limiting, regression testing, manual moderation and using various statistical techniques to detect anomalies.
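Two of the measures listed, input validity checking and statistical anomaly detection, can be sketched in a few lines. This is a minimal, illustrative gate (no particular product or pipeline assumed; the range and z-score cutoffs are arbitrary examples) applied to a batch of numeric samples before the next training cycle:

```python
import statistics

def filter_training_inputs(samples, lo=0.0, hi=10.0, z_max=2.5):
    """Gate incoming samples before the next training cycle.

    First applies an input validity range check, then drops statistical
    outliers relative to the surviving batch (thresholds are illustrative).
    """
    valid = [s for s in samples if lo <= s <= hi]   # input validity check
    mu = statistics.mean(valid)
    sigma = statistics.stdev(valid)
    # statistical anomaly check: discard points far from the batch mean
    return [s for s in valid
            if sigma == 0 or abs(s - mu) / sigma <= z_max]

batch = [1.0, 1.1, 0.9, 1.0, 1.2, 0.8, 1.0, 1.1, 0.9, 1.0, 9.5, -5.0, 25.0]
print(filter_training_inputs(batch))  # keeps only the ten clustered values
```

The out-of-range points (-5.0 and 25.0) fail the validity check outright, while 9.5 passes the range check but is rejected as a statistical outlier; slow-drip poisoning is harder to catch, which is why the article pairs these checks with rate limiting, regression testing, and manual moderation.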
Some applications don't work as well in the cloud as they did on premises, which forces a reverse migration. A recent study from security provider Fortinet, conducted by IHS Markit, found that most companies have moved a cloud-based app back on premises after failing to see anticipated returns. In the survey of 350 global IT decision makers, 74% reported they had moved an application back to their own infrastructure. Moving workloads is costly and disruptive: changing the location of a workload isn't easy, and there is a lot of risk involved.
Often the motivation is cost savings. One company moved a data analytics application from its own data center to Microsoft Azure so it could more easily scale up or down as needed, at a lower cost.
"We thought it was Capex versus Opex. We thought we could save a lot of money and get rid of managing infrastructure," an IT manager explained. "But we were wrong."
There were problems from the start. His IT workers noticed latency issues right away, and they identified limitations within their networking equipment that further hindered the app's performance. Cloud performance testing is necessary for application migration. Budget time and money for it.
"We kept throwing compute resources and storage resources at it," the manager explained, and that drove up costs. The app was moved back to on premises equipment. This process to repatriate the application took eight months.
Companies moving apps out of the cloud typically do so after finding that they're experiencing latency issues or increased security and compliance challenges.
Those observations track with the results of the Fortinet survey. According to the report, 52% of those who moved workloads from the cloud back on premises said either performance or security issues were the primary reasons for their decision. An additional 21% cited regulatory issues as the driving factor.
Some companies see higher costs than they expected. Some find they're not getting the uptime they expected from the cloud vendor. Still others hit complexities that slow down their systems.
Misunderstood applications and operations. Some very high-volume systems with particular technical requirements, such as high-volume transactional databases, don't work well in the cloud. Some apps are not understood well enough in terms of how they really connect to other systems; they end up requiring more connectivity, and talking to more things, than anyone realized. By the time traffic traverses all the hops, links, and security layers, the app runs much slower in the cloud than expected.
Know what should go, and what should stay. Not every application belongs in the cloud. Many IT managers think cloud means lift and shift. It does not. The data analytics application previously mentioned was a multi-tenant application; it wasn't elastic, and it did not use a virtualized environment very well. The app also relied on data that resided within the data center, a factor that contributed to its poor performance in the cloud.
Many cloud-inexperienced IT departments treat the cloud like a virtual data center and don't change their operations or procedures when they move. Application evaluation is crucial: what should stay, and what can go to the cloud? Do your cloud-bound applications need to be optimized for the cloud?
Big Data is dead. Collecting a lot of data is useless if the data is not properly utilized. The key is systematic exploration of the data with the right set of questions. For instance: Is the data uniform or irregular? Is there significant variation in the data set? Is it buried in a mass of irrelevant information? Can it be easily extracted and transformed? Can it be loaded at a reasonable speed? Can it be thoroughly analyzed? Can powerful insights be garnered? Otherwise, Big Data in the old style is simply obsolete.
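Some of those screening questions, such as how much variation the data shows and how irregular it is, can be asked programmatically before committing to a full pipeline. A minimal sketch (the coefficient-of-variation measure and the 0.5 cutoff are arbitrary, illustrative choices, not a standard):

```python
import statistics

def profile(values):
    """Answer two screening questions about a numeric column:
    how much variation is there, and is it irregular?

    Uses the coefficient of variation (stdev / mean) as a rough measure.
    """
    mu = statistics.mean(values)
    cv = statistics.stdev(values) / mu if mu else float("inf")
    return {"mean": mu, "coeff_of_variation": cv, "high_variation": cv > 0.5}

print(profile([10, 11, 9, 10, 12]))   # tight, uniform data: low variation
print(profile([1, 50, 3, 99, 2]))     # irregular data: high variation
```

A quick profile like this is the kind of systematic first question the paragraph calls for; it costs minutes and tells you whether the data set is worth the heavier extract-transform-load investment.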
Computers can just make dumb happen faster.
Wrong is not a service we offer.
Cheap is not really cheap. It means doing it and paying for it over and over again.
Good. Fast. Cheap. Pick 2.
Being REALLY BAD at what you do has only rarely been an impediment to being in business.
Monopoly in any language is profane.
Be clear & upfront regarding your brand. Use as a first filter for bad clients/customers.
Ignorance, arrogance, denial, luck, & hope are NOT strategies!!!
Thinking is the rarest commodity on Earth!!!
Too much automation, not enough service.
KISSS = Keep It Super Simple Silly Sweetheart!!!!
I want to say "thank you" for all the times I have been told no. They made me look further, work harder, work longer, search farther, and, ultimately, find better. Thank you. Thank you. Thank you.
A reporter once asked Pablo Casals, one of the world's most famous cellists, why he still practiced six hours a day at the age of 85. His response was, "I think I'm getting better".
"No" is definitely part of the key to happiness.
The world will end in a "fat finger", i.e. Hawaii Missile, East Coast Tsunami, etc.
While God has a plan, no one else does.
Everyone wants to be rich. No one ever asks at what cost?
When did the obvious ever dawn on anyone?
American culture could benefit from a greater commitment to service, like South Korea or the UK: British Airways, the Asian airlines.
Courtesy and service cannot be automated nor ever should be.
The good news is there are lots of choices. The bad news is there are lots of choices.
I fly Business Class or better. I insist.
Air transportation and I are NOT on speaking terms!
Technology Decisions, LLC abides by the following Code of Ethics, which we take most seriously:
Professional responsibility of fair dealing toward our clients, past and present, and the general public.
Professional responsibility of adhering to generally accepted standards of accuracy, truth, and good taste at all times.
Never represent conflicting or competing interests, nor allow ourselves to be placed in a position where our interest is, or may be, in conflict with our duty to the client.
Safeguard the confidences of both present and former clients, and never accept retainers that may involve the disclosure or use of those confidences to the disadvantage or prejudice of such clients.
Never intentionally disseminate false or misleading information, and we obligate ourselves to use as much care as is humanly possible to avoid dissemination of false or misleading information.
Never intentionally or recklessly injure the professional reputation of others.
In performing services for a client, never accept fees, commissions, or any other valuable consideration in connection with those services from anyone other than the client.
Prior to the commencement of the services to be performed, make the client fully aware of the fee structure, and all associated costs.
Never retain ownership in any company selling or leasing products where such an interest would constitute a conflict of interest.
As soon as possible, sever the relationship with any organization when we know or should know that continued employment would require us to conduct ourselves contrary to the good conduct principles of this code.