By David Linthicum

In a recent speech, Google and Alphabet CEO Sundar Pichai called for new regulations in the world of AI, with a clear focus on how AI has been commoditized by cloud computing. This is no surprise now that we’re debating the ethical questions surrounding the use of AI technology: most especially, how easily AI can weaponize computing—for businesses as well as bad actors.

Pichai highlighted the dangers posed by technologies such as facial recognition and “deepfakes,” in which an existing image or video of a person is replaced with someone else’s likeness using artificial neural networks. He also stressed that any legislation must balance “potential harms ... with social opportunities.”


By communityteam@solarwinds.com

Submitted for your approval: a story of cloud horrors.

One of performance issues impacting production.

Where monthly cloud billing began spiraling out of control.

 

The following story is true. The names have been changed to protect the innocent.

 

During my consulting career, I’ve encountered companies at many different stages of their cloud journey. What was particularly fun about walking into this shop was that they were already about 75% into the public cloud, with the remaining 25% in the process of being migrated off their aging hardware. They seemed to be ahead of the game, so why were my services needed?

 

Let’s set up some info about the company, which I’ll call “ABC Co.” ABC Co. provides medical staff and medical management to many hospitals and clinics, with approximately 1,000 employees and contractors spread across many states. Being in both medical staffing and recordkeeping, ABC Co. was subject to numerous compliance regulations, such as HIPAA and PCI DSS. Their on-premises data center ran on older hardware nearing end of life, and given the size of their IT staff, they decided to move out of the data center business.

 

The data center architect at ABC Co. did his homework. He spent many hours learning about public cloud, crunching numbers, and comparing virtual machine configurations to cloud-based compute sizing. Additionally, due to compliance requirements, ABC Co. needed to use dedicated hosts in the public cloud. After factoring in all the sizing, storage capacity, and necessary networking, the architect arrived at an expected monthly spend number: $50,000. He took this number to the board of directors with a migration plan and outlined the benefits of going to the cloud versus refreshing their current physical infrastructure. The board was convinced and gave the green light to move into the public cloud.
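For the curious, the arithmetic behind an estimate like this is straightforward. The sketch below is purely illustrative: the instance counts, hourly rates, storage figures, and dedicated-host premium are made-up assumptions for the example, not ABC Co.’s actual numbers or any provider’s real pricing.

```python
# Illustrative monthly cloud spend estimate.
# Every count and rate below is an assumption for the example, not real pricing.
HOURS_PER_MONTH = 730  # average hours in a month

# Workload class: (instance count, assumed hourly rate in USD)
instances = {
    "remote_desktop_hosts": (12, 1.50),
    "application_servers":  (20, 1.00),
    "database_servers":     (4,  3.00),
    "email_servers":        (2,  0.60),
}

storage_gb = 20_000            # assumed total storage footprint
storage_rate_per_gb = 0.10     # assumed $/GB-month
dedicated_host_premium = 1.3   # assumed multiplier for compliance-driven dedicated hosts

compute = sum(count * rate * HOURS_PER_MONTH for count, rate in instances.values())
storage = storage_gb * storage_rate_per_gb
estimate = compute * dedicated_host_premium + storage

print(f"Estimated monthly spend: ${estimate:,.0f}")  # roughly $50,000 with these inputs
```

The catch, as the rest of this story shows, is that a model like this is only as good as its core assumption: that workloads in the cloud will consume resources the same way they did on-premises.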

 

Everything was moving along perfectly early in the project. The underlying cloud architecture of networking, identity and access management, and security was deployed. A few workloads were moved up into the cloud to great success. ABC Co. continued their migration, putting applications and remote desktop servers in the cloud, along with basic workloads such as email servers and databases. But something wasn’t right.

 

End users started to complain of performance issues on the RDP servers. Application processing had slowed to a crawl, and employees’ ability to perform their tasks was being impeded. The architect and cloud administrators added more remote desktop servers into the environment and increased their size. Sizing on the application servers, which were just Microsoft Windows Servers in the public cloud, was also increased. This alleviated the problems, albeit temporarily. As more and more users logged in to the public cloud-based services, performance and availability took a hit.

 

And then the bill showed up.

 

At first, it crept up slowly toward the anticipated $50,000 per month. Unfortunately, as a side effect of the ever-increasing resources, the bill rose to more than triple the original estimate presented to the board of directors. At the peak of the “crisis,” the bill surpassed $150,000 per month. This put the C-suite on edge. What was going on with the cloud migration project? How was the bill so high when they had been promised a third of what was being spent? It was time for the ABC Co. team to call for an assist.

 

This is where I entered the scene. I’ll start this next section of the story by stating this outright: I didn’t solve all their problems. I wasn’t a savior on a white horse galloping in to save the day. I did, however, help ABC Co. start to reduce their bill and get cloud spend under control.

 

One of the steps they implemented before I arrived was a scripted shutdown of servers during non-work hours, which cut off some of the wasteful spend on machines not being used. We also looked at the actual usage on all servers in the cloud. After running some scans, we found many servers that hadn’t been used in 30 days or more but had been left on, piling onto the bill. These servers were promptly shut down, archived, then deleted after a set time. Applications experiencing performance issues were analyzed, and it was determined they could be converted to a cloud-native architecture. And those pesky, ever-growing remote desktop boxes? Smaller, more cost-effective servers were placed behind a load balancer to automatically boot additional servers should the user count demand it. These are just a few of the steps taken to reduce the cloud bill. Much more happened after I left, but it was a start that sent them down the right path.
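To make those first two steps a bit more concrete, here’s the flavor of script involved. This is a minimal sketch that assumes AWS and boto3, and it treats low average CPU over the last 30 days as a stand-in for “not in use”; ABC Co.’s actual provider, tooling, and idle criteria may well have been different.

```python
"""Find instances that look idle (low average CPU for 30 days) and flag them.

Sketch only: assumes AWS + boto3, and that low CPU is a good-enough proxy
for "not in use." Adjust the threshold, lookback window, and filters to taste.
"""
from datetime import datetime, timedelta, timezone

import boto3

CPU_THRESHOLD = 2.0   # percent; below this we treat the instance as idle
LOOKBACK_DAYS = 30

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

now = datetime.now(timezone.utc)
start = now - timedelta(days=LOOKBACK_DAYS)

idle_instances = []
reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        instance_id = instance["InstanceId"]
        datapoints = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start,
            EndTime=now,
            Period=86400,          # one datapoint per day
            Statistics=["Average"],
        )["Datapoints"]
        if not datapoints:
            continue  # no metrics at all -- worth a manual look
        avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
        if avg_cpu < CPU_THRESHOLD:
            idle_instances.append(instance_id)

print(f"Idle candidates: {idle_instances}")
# After review (and an archive/snapshot step), stop them:
# ec2.stop_instances(InstanceIds=idle_instances)
```

In practice, you’d run something like this on a schedule, exclude tagged exceptions, and archive before stopping or deleting anything.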

 

So, what can be learned from this story? While credit should be given for the legwork done to develop the strategy, on-premises virtual machines and public cloud-based instances aren’t apples to apples. Workloads behave differently in the cloud, and every resource you consume carries a cost; you can’t just throw RAM and CPU at a problem server like you can in your own data center (nor is that often the correct solution). Many variables go into a cloud migration. If your company is looking at moving to the cloud, be sure to ask the deep questions during the initial planning phase—it may just save hundreds of thousands of dollars.

By communityteam@solarwinds.com

Back from Austin and home for a few weeks before I head...back to Austin for a live episode of SolarWinds Lab. Last week was the annual Head Geeks Summit, and it was good to be sequestered for a few days with just our team as we map out our plans for world domination in 2020 (or 2021, whatever it takes).

 

As always, here's a bunch of stuff I found on the internetz this week that I think you might enjoy. Cheers!

 

Critical Windows 10 vulnerability used to Rickroll the NSA and Github

Patch your stuff, folks. Don't wait, get it done.

 

WeLeakInfo, the site which sold access to passwords stolen in data breaches, is brought down by the FBI

In case you were wondering, the website was allowed to exist for three years before it was finally shut down. No idea what took so long, but I tip my hat to the owners. They didn’t steal anything; they just took available data and made it easy to consume. Still, they must have known they were in murky legal waters.

 

Facial recognition: EU considers ban of up to five years

I can't say if that's the right amount of time; I'd prefer they ban it outright for now. This isn't just a matter of the tech being reliable, it brings about questions regarding basic privacy versus a surveillance state.

 

Biden wants Sec. 230 gone, calls tech “totally irresponsible,” “little creeps”

Politics aside, I agree with the idea that a website publisher should bear some burden regarding the content they allow. It’s similar to how I feel developers should be held accountable for deploying software that isn’t secure, or for leaving S3 buckets wide open. Until individuals understand the risks, we will continue to have a mess of things on our hands.

 

Microsoft pledges to be 'carbon negative' by 2030

This is a lofty goal, and I applaud the effort here by Microsoft to erase their entire carbon footprint since they were founded in 1975. It will be interesting to see if any other companies try to follow, but I suspect some (*cough* Apple) won't even bother.

 

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

In today's edition of "do as I say, not as I do", Google reminds us that their new motto is "Only slightly evil."

 

Technical Debt Is like a Tetris Game

I like this analogy, and thought you might like it as well. Let me know if it helps you.

 

If you are ever in Kansas City, run, don't walk, to Jack Stack and order the beef rib appetizer. You're welcome.

By Matt Asay

In tech circles, we have two primary faults: We’re overly eager to usher in the future and ironically too quick to disregard it when it doesn’t come as fast as we projected. Take, for example, two persistent myths making the rounds today: First, that cloud spend is sending data center spending off a cliff and, second, that AI is overhyped snake oil that mostly fails enterprise buyers.

Let’s take those in order.

Myth #1: Your data center is doomed

Gartner kicked off the first myth, with analyst Dave Cappuccio positing that 80 percent of enterprises will shutter their data centers by 2025 (compared to 10 percent that did so in 2018). Aggressive? Sure, but Cappuccio gives some solid reasons for his thinking: “As interconnect services, cloud providers, the Internet of Things (IoT), edge services, and SaaS offerings continue to proliferate, the rationale to stay in a traditional data center topology will have limited advantages.”


By PCMag.com Latest Articles
The SmartDry is a multi-sensor device that can save you money by eliminating unnecessary running time from your traditional clothes dryer.
By communityteam@solarwinds.com

Omar Rafik, SolarWinds Senior Manager, Federal Sales Engineering

 

Here’s an interesting article by my colleague Brandon Shopp with ideas for improving security at the DoD by finding vulnerabilities and continuously monitoring agency infrastructure.

 

An early 2019 report from the Defense Department Office of Inspector General revealed how difficult it’s been for federal agencies to stem the tide of cybersecurity threats. Although the DoD has made significant progress toward bolstering its security posture, 266 cybersecurity vulnerabilities still existed. Most of these vulnerabilities were discovered only within the past year—a sure sign of rising risk levels.

 

The report cited several areas for improvement, including continuous monitoring and detection processes, security training, and more. Here are three strategies the DoD can use to tackle those remaining 200-plus vulnerabilities.

 

1. Identify Existing Threats and Vulnerabilities

 

Identifying and addressing vulnerabilities will become more difficult as the number of devices and cloud-based applications on defense networks proliferates. Although government IT managers have gotten a handle on bring-your-own-device issues, undetected devices are still used on DoD networks.

 

Scanning for applications and devices outside the control of IT is the first step toward plugging potential security holes. Apps like Dropbox and Google Drive may be great for productivity, but they could also expose the agency to risk if they’re not security hardened.
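As a toy illustration of that first step, here’s a minimal sweep using only the Python standard library: it checks which addresses on a subnet answer on a few common ports, so the results can be compared against the known asset inventory. The subnet and port list are assumptions for the example; real asset-discovery tooling (and anything run on a production or DoD network) would be far more capable and far more tightly controlled.

```python
"""Minimal host-discovery sweep: which addresses answer on common ports?

Sketch only -- a stand-in for proper asset-discovery tooling. The subnet and
port list below are assumptions for illustration, and the scan is sequential,
so it is slow on large ranges.
"""
import ipaddress
import socket

SUBNET = "192.168.1.0/24"          # assumed subnet to sweep
PORTS = [22, 80, 443, 445, 3389]   # SSH, HTTP, HTTPS, SMB, RDP
TIMEOUT = 0.5                      # seconds per connection attempt

def host_is_up(address: str) -> bool:
    """Return True if any probed port accepts a TCP connection."""
    for port in PORTS:
        try:
            with socket.create_connection((address, port), timeout=TIMEOUT):
                return True
        except OSError:
            continue
    return False

discovered = [
    str(host) for host in ipaddress.ip_network(SUBNET).hosts() if host_is_up(str(host))
]
print(f"Responding hosts on {SUBNET}: {discovered}")
# Next step: diff this list against the asset inventory to flag unknown devices.
```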

 

The next step is to scan for hard-to-find vulnerabilities. The OIG report called out the need to improve “information protection processes and procedures.” Most vulnerabilities occur when configuration changes aren’t properly managed. Automatically scanning for configuration changes and regularly testing for vulnerabilities can help ensure employees follow the proper protocols and increase the department’s security posture.
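A lightweight way to picture configuration-change scanning is to baseline configuration files and periodically compare hashes against that baseline. The sketch below uses only the Python standard library; the watched file paths are examples, and a real environment would rely on purpose-built configuration-management and monitoring tools rather than a standalone script.

```python
"""Detect configuration drift by comparing file hashes against a stored baseline.

Sketch only: paths are examples, and real environments would lean on proper
configuration-management tooling rather than a standalone script.
"""
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("config_baseline.json")
WATCHED_FILES = [Path("/etc/ssh/sshd_config"), Path("/etc/nginx/nginx.conf")]  # examples

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hash of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def snapshot() -> dict:
    """Hash every watched file that currently exists."""
    return {str(p): fingerprint(p) for p in WATCHED_FILES if p.exists()}

if not BASELINE_FILE.exists():
    BASELINE_FILE.write_text(json.dumps(snapshot(), indent=2))
    print("Baseline recorded.")
else:
    baseline = json.loads(BASELINE_FILE.read_text())
    current = snapshot()
    drift = {p: h for p, h in current.items() if baseline.get(p) != h}
    missing = [p for p in baseline if p not in current]
    if drift or missing:
        print(f"Changed: {sorted(drift)}  Missing: {missing}")  # alert or open a ticket here
    else:
        print("No drift detected.")
```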

 

2. Implement Continuous Monitoring, Both On-Premises and in the Cloud

 

While the OIG report specifically stated the DoD must continue to proactively monitor its networks, those networks are becoming increasingly dispersed. It’s no longer only about keeping an eye on in-house applications; it’s equally important to be able to spot potential vulnerabilities in the cloud.

 

DoD IT managers should go beyond traditional network monitoring and look more deeply into the cloud services they use. The ability to see the entire network, including destinations in the cloud, is critically important, especially as the DoD becomes more reliant on hosted service providers.
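As a simple illustration of watching in-house and cloud-hosted services side by side, the sketch below polls a mixed list of HTTP health endpoints and flags anything that doesn’t respond cleanly. The URLs are placeholders; production monitoring belongs in a dedicated platform with alerting, dashboards, and deeper telemetry.

```python
"""Poll a mix of on-premises and cloud-hosted endpoints and report unhealthy ones.

Toy sketch: the URLs are placeholders, and real monitoring belongs in a
dedicated platform with alerting and history, not a one-off script.
"""
import urllib.error
import urllib.request

ENDPOINTS = [
    "http://intranet-app.example.local/health",   # on-premises (placeholder)
    "https://app.example.com/health",             # cloud-hosted (placeholder)
    "https://api.example.com/status",             # hosted service (placeholder)
]
TIMEOUT = 5  # seconds

def check(url: str) -> tuple[bool, str]:
    """Return (healthy, detail) for a single endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=TIMEOUT) as response:
            return response.status == 200, f"HTTP {response.status}"
    except urllib.error.URLError as exc:
        return False, str(exc.reason)

for url in ENDPOINTS:
    healthy, detail = check(url)
    status = "OK  " if healthy else "FAIL"
    print(f"[{status}] {url} ({detail})")
    # On FAIL, a real setup would raise an alert rather than just print.
```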

 

3. Establish Ongoing User Training and Education Programs

 

A well-trained user can be the best protection against vulnerabilities, making it important for the DoD to implement a regular training cadence for its employees.

 

Training shouldn’t be relegated to the IT team alone. A recent study indicates insider threats pose some of the greatest risks to government networks. As such, all employees should be trained on the agency’s policies and procedures and encouraged to follow best practices to mitigate potential threats. The National Institute of Standards and Technology provides an excellent guide on how to implement an effective security training program.

 

When it comes to cybersecurity, the DoD has made a great deal of progress, but there’s still room for improvement. By implementing these three best practices, the DoD can build off what it’s already accomplished and focus on improvements.

 

Find the full article on Government Computer News.

 

The SolarWinds trademarks, service marks, and logos are the exclusive property of SolarWinds Worldwide, LLC or its affiliates. All other trademarks are the property of their respective owners.
