By Isaac Sacolick

I’ve been using low-code and no-code platforms for almost two decades to build internal workflow applications and rapidly develop customer-facing experiences. I always had development teams working on Java, .NET, or PHP applications built on top of SQL and NoSQL datastores, but the business demand for applications far exceeded what we could develop. Low-code and no-code platforms provided an alternative when the business requirements matched a platform’s capabilities.

I recently shared seven low-code platforms developers should know and what IT leaders can learn from low-code platform CTOs. Many of these platforms have been around for more than a decade, and some support tens of thousands of business applications. Over time, these platforms have improved their capabilities, developer experiences, hosting options, enterprise security, devops tools, application integrations, and other competencies that enable rapid development and easy maintenance of functionally rich applications.

By David Linthicum

According to the CDC, opioids were involved in 46,802 overdose deaths in 2018 (69.5 percent of all drug overdose deaths). For those of you living in the United States, this is old news.

As the pandemic stretches on, deaths in the U.S. from opioids and other habit-forming drugs, such as alcohol, are likely to rise in 2020. We’re at a point where most health organizations are deeply concerned.

We could reduce the number of deaths by helping addicts in better ways. The combination of cloud, artificial intelligence, and IoT (Internet of Things) could replace rehab clinics as the preferred way to overcome dangerous or unhealthy addictions.

By James Kobielus

Much of the anti-adversarial research has focused on the potential for minute, largely undetectable alterations to images (researchers generally refer to these as “noise perturbations”) that cause AI’s machine learning (ML) algorithms to misidentify or misclassify the images. Adversarial tampering can be extremely subtle and hard to detect, down to changes at the level of individual pixels. If an attacker can introduce nearly invisible alterations to image, video, speech, or other data for the purpose of fooling AI-powered classification tools, it will be difficult to trust this otherwise sophisticated technology to do its job effectively.
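
To make the mechanism concrete, here is a minimal, self-contained sketch of a fast-gradient-sign-style perturbation against a toy linear classifier. The weights, inputs, and epsilon below are invented for illustration, and real attacks target trained deep networks, but the core move is the same: nudge every input feature a small, bounded step in the direction that flips the model’s decision.

```python
import numpy as np

# Toy linear classifier: predict class 1 if w . x + b > 0.
# The weights, bias, and input are invented purely for illustration.
w = np.array([0.8, -0.5, 0.3, 0.9])
b = -0.6

def predict(x):
    return int(w @ x + b > 0)

# A "clean" input the model classifies as class 1.
x_clean = np.array([0.6, 0.2, 0.4, 0.5])

# Fast-gradient-sign-style perturbation: move each feature a small step (epsilon)
# in the direction that pushes the score toward the wrong class. For this linear
# model that direction is simply -sign(w); for a deep network it would come from
# the gradient of the loss with respect to the input.
epsilon = 0.2
x_adversarial = x_clean - epsilon * np.sign(w)

print(predict(x_clean))                        # 1
print(predict(x_adversarial))                  # 0: the label flips
print(np.abs(x_adversarial - x_clean).max())   # 0.2: every change stays tiny
```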

Growing threat to deployed AI apps

This is no idle threat. Eliciting false algorithmic inferences can cause an AI-based app to make incorrect decisions, such as when a self-driving vehicle misreads a traffic sign and then turns the wrong way or, in a worst-case scenario, crashes into a building, vehicle, or pedestrian. Though the research literature focuses on simulated adversarial ML attacks conducted in controlled laboratory environments, general knowledge that these attack vectors are available will almost certainly lead terrorists, criminals, or mischievous parties to exploit them.

By David Linthicum

The notion of the intelligent edge has been around for a few years. It refers to placing processing out on edge devices to avoid sending data all the way back to centralized servers, which typically run on public clouds.

While not always needed, the intelligent edge can leverage machine learning directly on edge devices, moving knowledge building away from centralized processing and storage. Applications vary, from factory robotics to automobiles to on-premises edge systems residing in traditional data centers. It’s useful in any situation where it makes sense to do the processing as close to the data source as possible.

We’ve wrestled with this type of architectural problem for many years. With any distributed system, including cloud computing, you have to consider the trade-offs of placing processing and storage on different physical or virtual devices. The intelligent edge is no different.
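
As a rough illustration of processing close to the data source, here is a small sketch, with invented function names and readings, of an edge gateway that summarizes and scores sensor data locally and sends only a compact payload upstream rather than shipping the raw stream back to a centralized cloud service.

```python
import statistics

def send_to_cloud(payload: dict) -> None:
    # Stand-in for whatever uplink a real deployment would use (HTTPS POST, MQTT, etc.).
    print("uplink:", payload)

def process_at_edge(readings: list, threshold: float = 2.0) -> None:
    mean = statistics.mean(readings)
    stdev = statistics.pstdev(readings) or 1.0
    # Local "knowledge building": flag outliers on the device itself instead of
    # streaming every raw reading back to a central server for analysis.
    anomalies = [r for r in readings if abs(r - mean) / stdev > threshold]
    send_to_cloud({
        "count": len(readings),
        "mean": round(mean, 2),
        "anomalies": anomalies,  # only the interesting points leave the edge
    })

# Invented temperature readings from a hypothetical sensor.
process_at_edge([21.0, 21.2, 20.9, 21.1, 35.7, 21.0])
```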

By Scott Carey

The rumors of Amazon Web Services’ fall from the pinnacle were premature. In the push to democratize cloud computing services, AWS had the jump on everyone from the beginning, ever since it was launched by the mega retailer Amazon in 2002 and released the flagship S3 storage and EC2 compute products in 2006. It still does.

AWS quickly grew into a company that fundamentally transformed the IT industry and carved out a market-leading position, and it has maintained that lead, most recently pegged by Synergy Research at almost double the market share of its nearest rival, Microsoft Azure: 33 percent of the market to Microsoft’s 18 percent.

By Matt Asay

In the last dozen years or so, we’ve witnessed a dramatic shift from general-purpose databases (Oracle, SQL Server, etc.) to purpose-built databases (360 of them and counting). Now programming languages seem to be heading in the same direction.

As developers move to API-driven, highly elastic infrastructure (where resources may live for days instead of years), they’re building infrastructure as code (IaC). But how to build IaC remains an open question. For a variety of reasons, the obvious place to start was with imperative languages like C or JavaScript, which tell the machine how to achieve a desired end state.
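
A minimal sketch of the difference, assuming a purely hypothetical CloudClient rather than any real provider SDK: the imperative version spells out each step and its ordering, while the declarative version states only the desired end state and lets a (trivial) reconciler work out the steps.

```python
class CloudClient:
    """Toy in-memory 'cloud' so the example runs; not a real provider API."""
    def __init__(self):
        self.buckets = {}

    def bucket_exists(self, name):
        return name in self.buckets

    def create_bucket(self, name, versioning=False):
        self.buckets[name] = {"versioning": versioning}

    def enable_versioning(self, name):
        self.buckets[name]["versioning"] = True


def provision_imperatively(client, name):
    # Imperative style: the code dictates how to get there, step by step.
    if not client.bucket_exists(name):
        client.create_bucket(name)
    client.enable_versioning(name)


# Declarative style: describe only the desired end state...
desired_state = {"buckets": {"app-logs": {"versioning": True}}}

def reconcile(client, state):
    # ...and let an engine compare it against reality and close the gap.
    # (A real engine would also detect and correct drift in the other direction.)
    for name, spec in state["buckets"].items():
        if not client.bucket_exists(name):
            client.create_bucket(name, versioning=spec["versioning"])
        elif spec["versioning"]:
            client.enable_versioning(name)


client = CloudClient()
provision_imperatively(client, "raw-uploads")
reconcile(client, desired_state)
print(client.buckets)
```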

By David Linthicum

According to Gartner, “Distributed cloud is the distribution of public cloud services to different physical locations, while the operation, governance and evolution of the services remain the responsibility of the public cloud provider.” Analysts go on to explain that the distributed cloud provides a flexible, agile environment for applications and data that require low latency, reduced data costs, and data residency.

This idea is not new; I’ve used it from time to time to reduce latency and/or comply with data sovereignty laws. At its essence, the advantage is that end users get cloud computing resources closer to the physical location where the business activity happens, thus reducing latency.
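
A toy sketch of that routing decision, with invented region names and residency rules: each request goes to the provider-managed location closest to the user unless a residency rule forces a specific location.

```python
# Invented residency rules and region names, purely for illustration.
RESIDENCY_RULES = {"DE": {"eu-frankfurt"}}  # e.g., German data must stay in Germany

def pick_location(user_country: str, nearest: str) -> str:
    # Favor the provider-managed location closest to the business activity,
    # unless a residency rule pins the data to a specific location.
    allowed = RESIDENCY_RULES.get(user_country)
    if allowed and nearest not in allowed:
        return sorted(allowed)[0]  # residency requirements trump latency
    return nearest

print(pick_location("US", nearest="us-east"))  # us-east: closest wins
print(pick_location("DE", nearest="us-east"))  # eu-frankfurt: residency wins
```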
