/ By communityteam@solarwinds.com / 0 Comments

The SolarWinds crew, including sqlrockstar, chrispaap, and me, just returned stateside after a successful jaunt across the Atlantic to VMworld Europe in Barcelona, Spain. Thank you to all of the attendees who joined us at Tom’s speaking sessions and at our booth. Thank you to Barcelona for your hospitality!

 

Below are a few pictures of the SolarWinds team as we walked the walk and talked the talk of monitoring with discipline.

 

The SolarWinds Family at VMworld Europe 2017 in Barcelona

The SolarWinds Family Team Dinner

 

 

Our journey doesn’t stop with the end of the VMworld two-continent tour. We are about to ignite a full course of monitoring with discipline in Orlando. At Microsoft Ignite, visit us in Booth #1913 for the most 1337 swag as well as fantastic demos on monitoring hybrid IT with discipline.

Let us know in the comments if you'll be joining us in Orlando for Microsoft Ignite.


/ By Steve Cochran / 0 Comments

One of the greatest reservations about the cloud is whether it’s wise, or even beneficial, to relinquish control of the infrastructure to a third party. This is a question you’ve likely asked yourself, and have heard repeatedly from your team members. In some cases, the move to the cloud may be viewed as a threat to job security.

IT teams are used to owning their own stuff. With the cloud, IT teams no longer need to purchase hardware and software, and hook it all up. Gone are the racks and the blinking lights. Moving to the cloud requires a change in ownership and mindshare. For many of us, it’s not a comfortable proposition.

But how much control are we really giving up? I contend that it’s not as much as we may believe.

To read this article in full or to leave a comment, please click here

/ By Bob Violino / 0 Comments

One of the most popular forms of cloud computing is software-as-a-service (SaaS), defined as a software distribution model in which a service provider hosts applications for customers and makes them available to these customers via the internet.

SaaS is one of the three major categories of cloud services, along with infrastructure-as-a-service (IaaS) and platform-as-a-service (PaaS).

Given its ease of access, SaaS has become a common delivery model for many types of business applications, and it has been incorporated into the delivery strategies of enterprise software vendors.

To read this article in full or to leave a comment, please click here

/ By David Linthicum / 0 Comments

There’s yet another cloud service from AWS: Amazon Lex, which lets developers build conversational interfaces into applications for voice and text. It uses the same deep learning technologies that power Amazon's Alexa voice assistant.

Lex lets you quickly build natural language conversational bots, aka chatbots. Microsoft has a similar technology, called the Microsoft Bot Framework. This seems to be a common service that most public cloud providers are looking to provide, not to mention many third parties that offer chatbot technology as well.

To read this article in full or to leave a comment, please click here

/ By communityteam@solarwinds.com / 0 Comments

     I know, I'm a day late and quite possibly 37 cents short for my coffee this morning, lol. Let's jump in, shall we?

 

          Equifax breach!  Ok, you've all been having conversations about this already since Shields Down Conversation Number Two.  So, I figured I would invite some of my friends from our security products to join me in discussing this from a few different angles.

 

          My take will be from a business strategy, or lack thereof, standpoint.  Roughly 143 million people had their personal data exposed due to the failure to properly execute a simple patching plan.  Seriously?  Is this blog series live and viewable?  I am not the only person who implements patching, monitoring, and log and event management in my environments, right?  This IS known.  What I do not get is the "why."  Why, for the love of everything holy, do businesses not follow these basic practices?

 

          CIxOs and CXOs are not the individuals who implement these practices personally.  However, it is their duty to the company and its core values to prioritize these efforts, place the right people into action, and validate that these security plans are being carried out.

Think about that for a moment, and then realize that a patch for the vulnerability Equifax failed to remediate was produced in March.  This breach happened, as we all know, in mid-May.  Where was the validation?  Where was the plan?  Where is the ticketing system tracking the maintenance that should have been completed on their systems?  I have so many questions, because this is not a small shop; it is an ENTERPRISE organization.

 

          Now let me take it another step further.  Equifax dropped another juicy nugget of information: news of a separate breach back in March.  Don't worry, though, it was an entirely different attack…  (smh)  However, the incredible part is that some of the upper-level folks were able to sell their stock.  That makes my heart happy, you know, knowing they had time to sell their stock before they released information about being breached.  Hats off to them for that, am I right?

 

          Then, another company decided they needed to market and sell to the same people (individuals newly informed of their high(er) risk of identity theft, credit fraud, and pretty much never trusting any company again): credit monitoring (for a reduced fee) that just so happens to use, none other than, Equifax services!  I'm still blown away by this…

 

          Ok, composure…  I have been told in my inbox recently that when you run third-party software, patching is limited, and that organizations' service-level agreements for application uptime do not allow patching on some of their servers.  I hear you!  I am also a big believer that patching servers can sometimes cause software to stop working or result in downtime.  However, this is where you have to implement a lab and test your patches.  You should test your patching regardless, to make sure you are not causing issues in your environment in the first place.  IMHO.

 

          I will implement patching on test servers, usually on a Friday, and then verify the status of my applications on those servers.

          I will also go through my security checks to validate that no new holes or reverts have appeared before I implement in production within two weeks.
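That Friday routine can be sketched as a small post-patch health check. This is a hypothetical illustration, not any particular product: the endpoint URL is invented, and a real environment would also check services, event logs, and security baselines before promoting patches.

```python
# Hypothetical post-patch health check: after patching a test server,
# confirm each critical application still answers before the change is
# promoted to production. The endpoint URLs below are invented examples.
from urllib.request import urlopen

def check_endpoints(endpoints, timeout=5):
    """Map each URL to True if it returned HTTP 200, False otherwise."""
    results = {}
    for url in endpoints:
        try:
            with urlopen(url, timeout=timeout) as resp:
                results[url] = resp.status == 200
        except OSError:  # DNS failure, refused connection, timeout, HTTP error
            results[url] = False
    return results

if __name__ == "__main__":
    status = check_endpoints(["http://test-server.invalid/health"])
    for url, ok in status.items():
        print(f"{url}: {'OK' if ok else 'FAILED - hold the patch rollout'}")
```

A failed check here is the cue to hold the rollout and investigate before the two-week production window.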

 

 

          Bringing this all home to the strategy at hand: when you are an enterprise corporation holding large amounts of personal data from the trusting customers who are the very reason you are as large as you are, you had better DARN WELL have a security plan that is overseen by more than one individual!  Come on; this is not a small shop, or even a business that could take the stance of "who would want our customer data?"  This is a company that holds data about plenty of consumers with great credit.  It is, figuratively, a buffet for hackers.

 

          The C-level of this company should have known, every month, what had been done for patching, SQL monitoring, and log, event, and traffic monitoring, and so should have known there were unpatched servers.  The biggest issue I see is the "we cannot have downtime for patching" scenario.

 

          Your CxO or CIxO has to be your IT champion!  They have to go nose to nose with their peers to make sure their security plans, with proper actions, get implemented 100%.  They put the people in place to execute the plan, and it is their responsibility to ensure it gets done and is not blocked at any level.

 

          Enough venting, for the moment; let me bring in some of my friends for their take on this Equifax nightmare that is STILL unfolding! Welcome joshberman, just one of my awesome friends here at SolarWinds, who always offers up great security ideas and thoughts.

 

          Dez summed things up nicely in her comments above, but let's go back to the origins of this breach and explore the timeline of events to illustrate a few points.

 

  • March 6th: the exploited vulnerability, CVE-2017-5638, became public.
  • March 7th: Security analysts began seeing attacks designed to exploit this flaw propagate.
  • Mid-May: Equifax tracked the date of compromise back to this window of time.
  • July 29th: the date Equifax discovered a breach had occurred.

 

          Had a proper patch management strategy been in place, backed by the right patch management software to enable the patching of third-party applications, it is likely that Equifax would not have succumbed to such a devastating attack. This applies even if testing had been factored into the timeline, just as Dez recommends. "Patch early, patch often" certainly applies in this scenario, given the voracious speed with which hackers leverage newly discovered vulnerabilities as a means to their end. All said and done, if there is one takeaway here, it is that patching, as a baseline IT security practice, is and will forever be a must.

 

          Beyond the obvious chink in Equifax's armor, there is a multitude of other means by which they could have thwarted this attack, or at least minimized its impact.

 

          That's fantastic information, Josh, and I appreciate your thoughts.  I also asked mandevil (Robert) for his take on this topic. Can we all give just a little bit of props for his THWACK handle, lol. I'm grateful, as he was on vacation and literally came back and knocked out some thoughts for me.  Much appreciated!

 

          Thanks, Dez.  "We've had a breach and data has been obtained by entities outside of this company."

What a sinking feeling if you are the one responsible for maintaining a good security posture. If this is you or even if you are tangentially involved in security, I hope this portion of this post helps you understand the importance of securing data at rest as it pertains to databases.

 

Securing data in your database.

 

          The only place data can't be encrypted is when it is in cache (memory). While data is at rest (on disk) or in flight (on the wire), it can and should be encrypted if it is deemed sensitive. This section focuses on encrypting data at rest. There are a couple of different ways to encrypt data at rest when it is contained within a database. Many major database vendors, like Microsoft (SQL Server) and Oracle, provide a method of encryption called Transparent Data Encryption (TDE). This allows you to encrypt the data in the files at the database, tablespace, or column level, depending on the vendor. Encryption is implemented using certificates, keys, and strong algorithms and ciphers.
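As a concrete sketch, the SQL Server flavor of TDE comes down to four T-SQL statements: a master key, a server certificate, a database encryption key, and the encryption switch itself. The helper below merely assembles those statements for review; the database, certificate, and password values are placeholders, and in practice you would run the output through your usual SQL client and back up the certificate immediately.

```python
# Assemble the T-SQL statements that enable SQL Server TDE on a database.
# Names and the password are placeholders. The first two statements run in
# the master database; the last two run against the target database.
def tde_enable_script(db, cert="MyServerCert", password="Pl@ceholder1"):
    return "\n".join([
        # 1. A database master key protects the certificate.
        f"CREATE MASTER KEY ENCRYPTION BY PASSWORD = '{password}';",
        # 2. The server certificate protects the database encryption key.
        f"CREATE CERTIFICATE {cert} WITH SUBJECT = 'TDE certificate';",
        # 3. The database encryption key encrypts the data files.
        f"USE {db};",
        f"CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 "
        f"ENCRYPTION BY SERVER CERTIFICATE {cert};",
        # 4. Turning encryption on starts a background scan of the files.
        f"ALTER DATABASE {db} SET ENCRYPTION ON;",
    ])

print(tde_enable_script("SalesDb"))
```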

 

Links for more detail on vendor TDE description and implementation:

 

SQL Server TDE

Oracle TDE

 

          Data encryption can also be implemented using an appliance. This would be a solution if you want to encrypt data but your database vendor doesn't offer one, or if licensing structures change with the use of their encryption. You may also have data outside of a database that you'd want to encrypt, which would make this option more attractive (think of log files that may contain sensitive data). I won't go into details about the different offerings out there, but I have researched several of these appliances, and many appear to be well secured (strong algorithms and ciphers). Your storage array vendor(s) may also have solutions available.

 

What does this mean and how does it help?

 

          Specifically, in the case of Equifax, storage-level hacks do not appear to have been employed, but there have been many incidents where storage was the target. By securing your data at rest on the storage tier, you can prevent storage-level hacks from obtaining any useful data. Keep in mind that even large database vendors have vulnerabilities that can be exploited by capturing data in cache; encrypting data at the storage level will not help mitigate this.

 

What you should know.

 

          Does implementing TDE impact performance? There is overhead associated with encrypting data at rest, as the data needs to be decrypted when read from disk into cache. That takes additional CPU cycles and a bit more time. However, unless you are CPU constrained, the impact should not be noticeable to end users. It should be noted that index usage is not affected by TDE. The bottom line is this: if the data is sensitive enough that the statement at the top of this section gets you thinking along the lines of a resume-generating event, the negligible overhead of implementing encryption should not be a deterrent to its use. However, don't encrypt more than is needed. Understand any compliance policies that govern your business (PCI, HIPAA, SOX, etc.).

 

Now to wrap this all up

 

 

          When we think of breaches, especially those involving highly sensitive data or data that falls under the scope of regulatory compliance, SIEM solutions certainly come to mind. This software performs a series of critical functions to support defense-in-depth strategies. In a case like Equifax's, their most notable influence comes in lowering the time to detection of either the compromise or the breach itself. On one hand, they support the monitoring and alerting of anomalies on the network that could indicate compromise. On the other, they can signal the exfiltration of data – the actual event of the breach – by monitoring traffic on endpoints and bringing to the foreground spikes in outbound traffic that, depending on the details, might otherwise go unnoticed. I'm not prepared to assume that Equifax was lacking such a solution, but given this timeline of events and their lag in response, it begs the question.
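To make the outbound-traffic point concrete, here is a toy version of the kind of egress baseline a SIEM maintains at scale. The traffic numbers and the two-standard-deviation threshold are invented for illustration; real products use far richer baselining, correlation, and per-endpoint history.

```python
# Toy egress-anomaly check: flag intervals whose outbound volume sits well
# above the baseline for an endpoint. Data and threshold are illustrative.
from statistics import mean, stdev

def flag_spikes(samples, sigmas=2.0):
    """Return indices of samples more than `sigmas` std devs above the mean."""
    if len(samples) < 2:
        return []
    mu, sd = mean(samples), stdev(samples)
    if sd == 0:
        return []
    return [i for i, v in enumerate(samples) if v > mu + sigmas * sd]

# Hourly outbound MB from one endpoint; hour 5 looks like exfiltration.
egress = [120, 135, 118, 122, 130, 2000, 125, 119]
print(flag_spikes(egress))  # hour 5 is flagged
```

A flagged interval is exactly the kind of event a SIEM would surface for an analyst to correlate with patch status, logins, and other signals.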

 

 

As always, thank you all for reading and keep up these excellent conversations.

 

 

~Dez~

/ By Ahsan Awan / 0 Comments

Units of space are everywhere. We can see them all around us. When we package a unit of space distinctly, we call it a container. Whether on ships in the sea, or in a storage closet, we use containers to organize things into isolated and distinct units of space. In the computing world, containers have become the distinguishing structure used to isolate and organize elements of software code. Just like containers on ships or in the closet, they can be visualized.

In today’s web-centric IT world, application programming interfaces (APIs), command-line interfaces (CLIs), microservices, and other deployable software code releases all tend to reside in containers. In point of fact, container utilization has become a powerful way in which to build and maintain consistently efficient quality-controlled systems, and it has become a cornerstone of modern devops.

To read this article in full or to leave a comment, please click here