Wibu-Systems Blog

Crossing the Licensing Migration Chasm
by Terry Gaul, 4 December 2017
https://www.wibu.com/uk/blog/article/crossing-the-licensing-migration-chasm.html

Established due diligence best practices provide a roadmap to ensure a successful migration to a modern, flexible, and robust licensing system.

Cloud initiatives, SaaS, subscriptions, pay-per-use, and a bevy of new, customer-centric licensing models are wreaking havoc with some ISVs, who are struggling to keep up with their own antiquated licensing engines or are unsure how to adopt one of these new models and best satisfy their customers. One thing is for sure: when the dust settles, the most competitive ISVs will be those who have employed a flexible license management system that lets them easily evaluate, implement, and tweak their licensing models to keep pace with ever-changing consumer preferences, while at the same time profiting from creative software monetization strategies that are optimal for their business.

What’s holding back some ISVs is the perception that migrating from their existing “build your own” licensing system or legacy 3rd party system entails a prolonged, resource-intensive, and costly effort. And the most efficient migration path is not always crystal clear. Hence, the chasm. Among the many challenges ISVs face is the migration of existing data, especially if they still have to support an existing customer base while undertaking the migration. In most cases, there will be two licensing systems running in parallel for a defined period during the transition.

It has been our experience that the most important factor in a successful migration is upfront diligence: the ISV must gain a thorough understanding of the short-term migration issues as well as the market dynamics and associated licensing requirements that will support long-term business objectives.

There are many questions that need to be considered during the due diligence phase of the migration effort: 

  • Make vs. buy: What are the pros and cons of implementing a home-grown solution vs. buying an off-the-shelf licensing system? Are there enough resources and internal expertise to perform the transition most efficiently?
  • Migration scenarios: Patch an existing system or convert to an entirely new system? Run the old and new systems in parallel for a transitory period? For how long?
  • Protection: How should license protection be built into the process to protect against IP theft, reverse engineering, and software piracy?
  • Licensing: Are different licensing models required? Are the licensing process and activations the same for all products? Will there be hardware or software activations, or both? Is there a need to create new licenses for older versions of your products? Is there a long-term strategic product development plan that includes a roadmap for entering new markets?

As confusing and daunting as the migration process may seem, it should be comforting to know that there are established best practices available that provide a roadmap to efficiently cross the chasm and ensure a successful migration to a modern, flexible and robust licensing system.

For starters, you can read an article in our KEYnote magazine that describes in detail several different paths that have proven successful in real-world migrations to our CodeMeter protection and licensing platform. Or spend an hour in our upcoming webinar, Streamlining Licensing Migration from 3rd Party Systems, to be held on December 13, 2017 at 6:00 pm CET / 9:00 am PST, and see a live demonstration.

Pay-Per-Use licensing: its time has come
by Terry Gaul, 7 November 2017
https://www.wibu.com/uk/blog/article/pay-per-use-licensing-its-time-has-come.html

The pay-per-use model is widely embraced by consumers, has tangible benefits for ISVs and embedded system developers, and is industry-agnostic.

Pay-per-use software licensing is not a new concept. In fact, as a recent Google search revealed, the business model was under consideration as far back as 1993 (Host Users Seek License Details, Computerworld, May 24, 1993), when visionaries at companies like IBM perceived potential value in the novel concept. The idea was perhaps ahead of its time: the commercialization of the Internet and the realization of its powerful impact was just underway, and the build-out of enterprise IP networks was still in its infancy.

Today, however, the rise in cloud-based computing is driving market demand away from conventional perpetual licensing and toward next generation consumption based services in the form of software-as-a-service, infrastructure-as-a-service, and other subscription models that base pricing on actual service usage. The pay-per-use model has come of age and is being widely embraced by consumers, particularly those with low volume needs or those whose usage fluctuates in and out of peak periods.

The pay-per-use model is relatively straightforward: use of the product is metered and customers pay only for the service they use, much like pay-per-view TV, or like publishers and research firms that sell access to high-value content on a per-use or per-download basis.
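
To make the metering idea concrete, here is a minimal sketch in Python; all names are invented for illustration and are not part of any real licensing API:

```python
from dataclasses import dataclass, field

@dataclass
class UsageMeter:
    """Toy pay-per-use meter: record usage events, bill only for actual use."""
    price_per_use: float
    events: list = field(default_factory=list)

    def record(self, feature: str) -> None:
        # Each metered action (a print, a render, a download) is one event.
        self.events.append(feature)

    def invoice(self) -> float:
        # The customer pays only for what was actually consumed.
        return len(self.events) * self.price_per_use

meter = UsageMeter(price_per_use=0.50)
meter.record("render")
meter.record("render")
meter.record("export")
print(meter.invoice())  # 1.5
```

In a real deployment the event log would also feed back into analytics and billing systems; here it simply shows that an idle customer is invoiced nothing.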

The pay-per-use model has tangible benefits for ISVs and embedded system developers as well as end users.

Benefits to customers include low start-up costs, month-to-month affordability, and convenience. In low usage scenarios, the model makes expensive, specialized software more affordable and accessible to smaller businesses. It is also beneficial to customers in environments where usage fluctuates over time, so when the software is not being used, the customer is not paying for it.

Software vendors, on the other hand, benefit from enhanced customer relationships. The pay-per-use model also provides valuable market information: vendors gain greater insight into product usage and can retool and refine their pricing models and packaging to better serve customer demands and improve revenues.

As consumers become more sophisticated and selective in their licensing preferences, it is incumbent upon the ISV to be capable of deploying new business models that satisfy their customers, particularly in a highly competitive market. Software licensing now is a mechanism by which vendors can differentiate themselves in the marketplace while enriching their customer relationships and building trust and loyalty for the future.

In the industrial realm, pay-per-use licensing has become more relevant as well, driven by recent developments in machine connectivity, the globalization of manufacturing processes, and the interest in customized manufacturing, with production runs as small as a single piece. Pay-per-use allows manufacturers to pay as they go for the machine lease, the consumables, the raw material, or the software package they specifically requested, at the time they really need it.

The most successful ISVs will be those who have the tools to roll out a pay-per-use licensing model as easily as they would for conventional permanent or subscription licenses, with automated billing and integration of the process into ERP, CRM, e-commerce and other back office business platforms.

If you are considering adopting pay-per-use licensing, you will be interested in attending our upcoming webinar, Monetizing Software, Machines, and Materials with New Business Models, on Thursday, November 16, 2017 at 9:00 am PST / 6:00 pm CET. The webinar will review different application scenarios for pay-per-use licensing and demonstrate the technical implementation using our CodeMeter License Central platform. You can view the agenda and register here.

Cybersecurity for Government and Industry
by Terry Gaul, 19 October 2017
https://www.wibu.com/uk/blog/article/cybersecurity-for-government-and-industry.html

Cybercrime will cost up to $6 trillion by 2021, nearly half of today’s US GDP and more profitable than the global trade of all major illegal drugs combined.

Cybercrime will cost up to $6 trillion by 2021, according to a report recently released by Cybersecurity Ventures. This colossal number is equivalent to nearly half of today’s US Gross Domestic Product (GDP) and more profitable than the global trade of all major illegal drugs combined.

The report links cybercrime costs to damage and destruction of data, stolen money, lost productivity, theft of intellectual property, theft of personal and financial data, embezzlement, fraud, post-attack disruption to the normal course of business, forensic investigation, restoration and deletion of hacked data and systems, and reputational harm.

Beyond the financial consequences, cybercrimes jeopardize the trustworthiness of the connected economy, disrupt global commerce, and threaten critical infrastructure, ultimately putting lives at risk.

BSA | The Software Alliance, a leading advocate for the global software industry, has been an ongoing industry champion of software innovation, anti-piracy, and security, and recently released its cybersecurity agenda, Security in the Connected Age. The agenda defines elements of cybersecurity that government policymakers can evaluate to help them prioritize legislation that will most effectively strengthen policies to protect citizens from cyber threats. The agenda urges the US government to expand its role in improving cybersecurity, both domestically and abroad, and to work closely with industry to:

  • Promote a secure software ecosystem by creating industry benchmarks, developing tools to understand critical information, and strengthening security research and vulnerability disclosure;
  • Strengthen government’s approach to cybersecurity by modernizing government IT, harmonizing federal cybersecurity regulations, and incentivizing adoption of the National Institute of Standards and Technology’s framework;
  • Pursue international consensus for cybersecurity action by supporting international standards development, as well as adopting and streamlining international security laws;
  • Develop a 21st century cybersecurity workforce by increasing access to computer science education and opening new paths to cybersecurity careers; and
  • Advance cybersecurity by embracing digital transformation, leveraging the potential of emerging technologies and forging innovative partnerships to combat emerging risks.

One key area of emphasis in the agenda is the need to drive IoT cybersecurity through adoption of proven software security best practices. Organizations are encouraged to integrate security-by-design principles into IoT standards and guidance, and to develop frameworks for assessing risk and identifying security measures. This is where industry can play a major role by participating in global organizations like the Industrial Internet Consortium, the Trusted Computing Group, and the Silicon Trust, whose members are working diligently towards developing standards and best practices that address cybersecurity among other important industrial initiatives.

A good example of such an initiative is the IIC Industrial Internet Security Framework (IISF), a technical report developed by members from 25 different organizations. The IISF is the most in-depth cross-industry-focused security framework comprising expert vision, experience and security best practices. It reflects thousands of hours of knowledge and experiences from security experts, collected, researched and evaluated for the benefit of all IIoT system deployments.

Creative Software Monetization Strategies
by Terry Gaul, 13 September 2017
https://www.wibu.com/uk/blog/article/creative-software-monetization-strategies.html

The next generation of software monetization is about enabling business models that provide additional opportunity for monetization to drive growth.

“The next generation of software monetization is not just about IP protection nor limited to licensing alternatives (perpetual versus term), but rather about enabling business models that provide additional opportunity for monetization to drive growth.”

I found this statement to be a key takeaway from a recent Gartner report, Disruption in Software Business Models Creates New Opportunities for Monetization. This notion is based on several recent trends in the industry:

  • The transformation of software licensing models from upfront cost with an add-on maintenance contract to more recurring revenue models, like time-based or feature-based subscriptions.
  • The enablement of new pricing scenarios that are more end-user friendly and easier for the publisher to manage entitlements.
  • The granular ability to track application usage, which paves the way for attractive consumption-based pricing models and provides developers with valuable analytics and insights for next generation products.

Gartner highlighted several assumptions that will drive these future transformations:

  • By 2018, 50% of independent software vendors (ISVs) will use concurrent licensing (based on users) as the primary licensing strategy compared with the majority using node-lock models today.
  • By 2019, 80% of ISVs will use multiple licensing models (such as consumption/metered services, capacity, node lock and concurrent) for software monetization.

It’s interesting to note that similar dynamics are driving transformations in the embedded system market segment as well. According to Gartner, embedded developers should consider that: 

  • By 2019, 20% of intelligent device manufacturers (IDMs) will move from no protection for embedded software to a node-lock model as the primary software licensing strategy for monetization beyond the hardware.
  • By 2020, 15% of Intelligent Device Manufacturers will be exploring/piloting concurrent (based on users) and consumption (metered services) software licensing strategies in order to further monetize on embedded software.

With these industry shifts occurring, embedded device developers are realizing the potential benefits of recurring revenue models for themselves as well. Gartner points out that a medical device manufacturer, for example, can offer hospitals and medical centers flexible pricing options that replace the high upfront capital equipment cost with a more manageable subscription-based model. As a result, more customers can access medical equipment they otherwise could not afford.

Agfa HealthCare, a leading provider of diagnostic imaging and healthcare IT solutions, is a good case in point. The company’s digital computed radiography system encompasses the most cutting-edge technology in clinical research, but many small laboratories, orthopedic doctors, and other facilities were hard pressed to afford the upfront investment in hardware and software. To accommodate the needs of the vast low-end market, the company rolled out a time-based licensing model that lets users pay only for the imaging volume they need. The solution became more affordable for the providers and patients who benefit from the state-of-the-art technology, while opening up new markets for the company.

As these transformations continue to alter the software licensing and monetization landscape, the next question is what tools are needed and how best to implement these new business models. Should software publishers and embedded device manufacturers develop and rely upon their own expertise to manage the process, or partner with an expert in the field to help them commercialize these models? In the case of Agfa HealthCare, the company chose CodeMeter, Wibu-Systems’ proven software security and licensing solution, to help fulfill its business vision. You can read the full story here.

Time to Speak a Common Language in the IIoT
by Marcellus Buchheit, 31 August 2017
https://www.wibu.com/uk/blog/article/time-to-speak-a-common-language-in-the-iiot.html

Do we all share a common understanding of IIoT terms? Most likely not, and that’s why the IIC continues to update its IIoT Vocabulary Report.

In our daily lives, how frequently have we heard someone say “let’s make sure we are on the same page,” whether during a personal interaction or a business communication? Pretty often, I would say, because it is very easy to get caught up in the comfortable jargon and buzzwords prevalent in our particular environments that are not so readily understandable to people outside our close circles.

With the rapid growth of the Industrial Internet of Things (IIoT) and the wide diversity of stakeholders and industries involved, “getting on the same page” has become more difficult, yet more important than ever. For example, do we all share a common understanding of terms and concepts like authentication, operational technology, root of trust, and vulnerability that are frequently mentioned in articles, technical documents, and other presentations and publications? Most likely not, and that’s why the Industrial Internet Consortium (IIC) continues to update its IIoT Vocabulary Report.

The second version of the report (v2.0), released on July 24, was developed by members of the IIC Vocabulary Task Group, which comprises software architects, business experts, and security experts. The report contains vocabulary terms and definitions considered relevant to the IIoT. The goal of the document is to enable all stakeholders in the IIoT ecosystem – system architects, IT managers, plant managers, and business decision makers – to communicate with each other effectively. Many terms were updated from the first report, originally released in 2016, and new terms were introduced to keep pace with the rapidly evolving IIoT nomenclature.

Anish Karmarkar, IIC Vocabulary Task Group Chair, and Director, Standards Strategy & Architecture at Oracle, said in an IIC news release: “The Industrial Internet comprises a diverse set of industries and people with various skill sets and expertise. Often, concepts and terminology in one field will have different meanings in another, leading to confusion. Industrial Internet projects succeed when participants can communicate using common vocabulary terms and definitions. The IIC Industrial Internet Vocabulary Technical Report v2.0 ensures all IIoT stakeholders are speaking the same language, avoiding what would otherwise be an IIoT ‘Tower of Babel.’”

Many people think that working on a vocabulary document would be quite boring. In actuality, the opposite is true. The weekly meetings are more emotionally driven than any other industrial internet meetings I have attended. Other meetings may have 20 attendees, yet the moderator is content to draw just a few responses from them. At a vocabulary meeting, by contrast, we may have just five attendees, but the moderator needs to queue the speakers because people get excited and respond to a comment at the same time! As a result, the meeting requires one’s full attention (it is unwise to attempt to read unrelated emails during the discussion, for example). And the content is intellectually challenging: sometimes people will spend a long time discussing a simple phrase or even a single word, but in the end most decisions are agreed upon unanimously.

Working on the industrial internet vocabulary report is also quite stimulating. IoT continues to be overhyped in the information and industrial worlds, and many words and phrases are “misused”. Presenting a modern vocabulary with a strong logical model behind different words and combinations of words gives the Industrial Internet Consortium a more structured approach to leading the IoT world down the proper path, at least in communication about IoT.

In all, the report provides a standard definition for more than 140 terms commonly used in IIC reference and architectural documents. The full report, including terms, definitions and sources, can be downloaded here on the IIC website.

U.S. Introduces New Cybersecurity Legislation
by Terry Gaul, 15 August 2017
https://www.wibu.com/uk/blog/article/us-introduces-new-cybersecurity-legislation.html

Will this legislation remedy the market failure that has occurred and encourage device manufacturers to compete on the security of their IoT products?

U.S. Senators recently introduced legislation intended to improve the cybersecurity of Internet-connected devices. The Internet of Things (IoT) Cybersecurity Improvement Act of 2017 would require that devices purchased by the U.S. government meet certain minimum security requirements. The main provisions of the bill are aimed at vendors who supply the U.S. government with IoT devices: they would have to ensure that their devices are patchable, do not include hard-coded passwords that cannot be changed, and are free of known security vulnerabilities.

Senator Mark Warner, a co-author of the bill, stated: “My hope is that this legislation will remedy the obvious market failure that has occurred and encourage device manufacturers to compete on the security of their products.”

The recent spate of malware attacks and the public exposure of IoT device vulnerabilities in so many sectors have elevated the visibility of cybersecurity, and it is encouraging to see these issues being addressed at the highest levels. And while this legislation is a positive step forward, it raises the question: is it enough? If the answer is no, then the responsibility falls on the device developers (where it should be) to step up their efforts and use the technologies available today to ensure that the devices proliferating in the commercial markets are safe, ensure privacy, and maintain data security.

The many facets of security that need to be addressed with Internet-connected devices go well beyond the security requirements put forth in the IoT Cybersecurity bill. For example, developers need to consider authentication and licensing of components based on their unique identities, monitoring and securing system integrity, protection of data and communication, and secure updates and upgrades, to name just a few.

Oliver Winzenried, CEO and Founder of Wibu-Systems AG, outlined key areas that should be addressed in developing a security framework to protect IoT vulnerabilities. In each of these areas, mechanisms exist that can be implemented today:

  • IP Protection: the actual assets – the IP in the code – can be encrypted with lightweight symmetric encryption and only decrypted on the fly.
  • Product Protection: protect against counterfeiting products by encrypting data and decrypting only on licensed machines.
  • Flexible Licensing: provide variable licensing options like pay-per-use, renting, subscription, etc. for software features. Vendors decide how licenses are deployed, either in app stores or user license portals.
  • Tamper Protection: application code is digitally signed using asymmetric cryptography, with root public keys as securely stored anchors of trust. The devices validate authenticity and integrity themselves.
  • Device Identity: connected devices authenticate themselves with tamper-proof private keys, for example. Open standards like OPC UA are excellent solutions for letting trusted devices from different manufacturers operate together.
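
As a toy illustration of the first bullet, the sketch below "encrypts" code bytes with a keystream derived from a key and decrypts them on the fly at run time. The XOR construction here is only a stand-in for a real lightweight symmetric cipher such as AES, and every name is invented for this sketch; it is not a secure implementation:

```python
import hashlib
from itertools import count

def _keystream(key: bytes):
    # Derive an endless keystream from the key (toy construction,
    # standing in for a real symmetric cipher such as AES).
    for i in count():
        yield from hashlib.sha256(key + i.to_bytes(8, "big")).digest()

def xor_crypt(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse: the same call encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, _keystream(key)))

code = b"secret_function_body"
key = b"license-bound-key"              # in practice held in a secure element
encrypted = xor_crypt(code, key)        # what ships on disk
decrypted = xor_crypt(encrypted, key)   # decrypted on the fly when licensed
assert decrypted == code
```

The point of the sketch is the lifecycle, not the cipher: the code at rest is never in the clear, and decryption happens only at execution time under a valid key.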

You can read Oliver’s full comments in his article, Security Frameworks to Set the IoT and IIoT in Motion.

Strengthening Encryption Protections
by Terry Gaul, 19 July 2017
https://www.wibu.com/uk/blog/article/strengthening-encryption-protections.html

Unlike the often-used obfuscation approach, Blurry Box cryptography offers software protection that is completely based on publicly available methods.

It seems like every day we hear about damaging and costly cyberattacks resulting from pirated software, theft of digital Intellectual Property, stolen personal, financial and medical data, or malicious tampering of consumer IoT devices and connected industrial machine systems in the IIoT. What’s most alarming about these attacks is that many times hackers were able to exploit a vulnerability in the very protection mechanisms designed to secure them.

For centuries now, encryption schemes, from simple ciphers to complex symmetric and asymmetric cryptography, have been used as a formidable defense against hackers to protect data, communications, devices and systems. But just as encryption techniques have evolved and become more sophisticated, so have the abilities of cyber criminals to identify and attack vulnerabilities in code, cryptographic protocols or key management in even the most clever protection schemes. Encryption alone is not the end-all solution. For example, use of a weak algorithm for encryption and decryption may be insufficient to prevent a brute force attack. On the other hand, use of a strong encryption algorithm, but with an insecure implementation that may expose the decryption key, can render the application vulnerable to attack.

The fact is that there is no 100% secure solution in software protection. That’s why companies like Wibu-Systems are dedicated to the continuing development of novel, technology-driven security solutions, staying steps ahead of would-be hackers. Oftentimes it is a collaboration that results in a breakthrough technology, as in the case of Wibu-Systems’ Blurry Box encryption, developed in conjunction with the Karlsruhe Institute of Technology and the research center FZI. Blurry Box encryption technology was recently proven unbreakable in a global hacking contest.

Blurry Box is built upon the axiom known as Kerckhoffs’ Principle, which states that the strength of an encryption system should depend upon the key being used, not the secrecy of the system. This approach is contrary to the often-used obfuscation approach, otherwise known as “security by obscurity”. Blurry Box cryptography offers software protection that is based entirely on publicly available methods. The basic principle of Blurry Box cryptography is the use of one or more secure keys in a dongle, combined with the fact that software is typically complex. Blurry Box uses seven published methods that greatly increase the complexity and time required for an attack to succeed.

As described in a recent article by Silicon Trust, Blurry Box splits each function block into several variants, which return the correct output of the original unencrypted function only for a specific input set. A wrapper function maps these inputs to the variants, which are encrypted with separate keys stored on a dongle. When the software is executed, the system only decrypts those variants that match the given input. Hackers will only ever see that part of the code that matches the previous input.

In traditional encryption, hackers could work their way through the function blocks in what is called a “copy-and-paste” attack. With Blurry Box, even if a hacker captures individual variants, the protected program is so complex that no hacker can derive additional variants from the specific subset that may become known to him. In essence, Blurry Box does not depend on making copy-and-paste attacks on individual variants impossible, but on making the attack strategy as a whole unfeasible.
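
A toy model of the variant idea, with all names invented for this sketch: the original function is split into variants, each correct only for its own input set, and a wrapper maps each input to its variant, the only one that would be decrypted at run time:

```python
# Toy model of the Blurry Box variant scheme. The original function
# f(x) = x * x is split into variants, each valid only for its own
# input range. In the real scheme each variant is encrypted with a
# separate key stored on a dongle; here "decryption" is a lookup.

def variant_small(x):       # valid only for 0 <= x < 10
    return x * x

def variant_large(x):       # valid only for 10 <= x < 100
    return x * x

VARIANTS = [
    (range(0, 10), variant_small),
    (range(10, 100), variant_large),
]

def wrapper(x):
    # Maps the input to its matching variant. An observer only ever
    # sees the code paths that their own inputs exercise.
    for domain, fn in VARIANTS:
        if x in domain:
            return fn(x)
    raise ValueError("no variant for this input")

print(wrapper(3), wrapper(12))  # 9 144
```

A hacker feeding inputs 0 through 9 learns only `variant_small`; the other variants stay encrypted, so a captured subset does not reveal the whole program.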

The bottom line is that it would be easier and less expensive for a would-be attacker to develop similar software from scratch than to attempt to crack an application protected by Blurry Box encryption.

Blurry Box can be employed to protect any software however it is deployed. In today’s smart factories, for example, Blurry Box can provide dramatic benefits, particularly in protecting sensitive information such as the technology or configuration data used in manufacturing processes. This invaluable data needs to be safeguarded against know-how theft, counterfeiting, and tampering. Applying Kerckhoffs’ Principle provides encryption methods associated with hardware anchors of trust to ensure IP confidentiality and the integrity and authenticity of digital signatures. You can read more technical details about Blurry Box, including use cases, in an article, Blurry Box Encryption Scheme and Why It Matters to Industrial IoT, published in the Industrial Internet Consortium’s Journal of Innovation.

You can also watch a brief animated description of Blurry Box and how it is integrated into Wibu-Systems’ CodeMeter Protection Suite.

IoT and Blockchain. A match made in heaven?
by Andreas Schaad, 27 June 2017
https://www.wibu.com/uk/blog/article/iot-and-blockchain-a-match-made-in-heaven.html

A cloud-based licensing and software protection service with a private Blockchain to address non-repudiable logging is under investigation.

When I last googled Blockchain, 21 million results were returned, and I am sure this number is on the rise as government agendas, panels at international trade shows, and every vendor dealing in IT security are touching upon this technology. Blockchain has grown in popularity as cryptocurrencies have shaken the markets and opened new opportunities to make or lose huge sums of money.

As part of the constant process of managing our technical portfolio, we monitor emerging technologies such as Blockchain and verify whether they can be applied to our ongoing architectural transition toward complete and integrated cloud-based licensing and software protection, in IoT and enterprise application management scenarios alike.

Let’s take a closer look at the technology that is behind the success of Blockchain. The core idea of a Blockchain is to provide a distributed database (often referred to as a ledger). Each record is represented by a block that contains a timestamp and reference to the previous block. A block may contain data such as financial transactions or records of generic events. In the context of Wibu-Systems applications, such a record may include software licensing information.

In a distributed system where peers do not necessarily trust each other, a decentralized and distributed database would ideally implement a set of desirable properties:

  • there is no need for a central broker or trusted third party
  • the blocks are public (within the peer group) and can be verified by any participant
  • without peer consensus the blocks are resistant to unwanted modification
  • a ledger can contain executable code based on defined conditions (smart contracts)

In a nutshell, a Blockchain tries to approximate a decentralized and distributed digital ledger that is used to record transactions across many computers so that the record cannot be altered retroactively without the alteration of all subsequent blocks and the collusion of the network.
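
The hash-chained ledger described above can be sketched in a few lines of Python; this is a simplified model without consensus, mining, or signatures:

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # Hash covers the payload, timestamp, and the link to the previous block.
    payload = {k: block[k] for k in ("data", "timestamp", "prev")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data: dict, prev_hash: str) -> dict:
    block = {"data": data, "timestamp": time.time(), "prev": prev_hash}
    block["hash"] = block_hash(block)
    return block

def valid_chain(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):                # block untampered?
            return False
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:   # link intact?
            return False
    return True

genesis = make_block({"event": "license issued"}, prev_hash="0" * 64)
chain = [genesis, make_block({"event": "license activated"}, genesis["hash"])]
assert valid_chain(chain)

chain[0]["data"]["event"] = "license revoked"   # retroactive alteration...
assert not valid_chain(chain)                   # ...is detected immediately
```

Because each block's hash is embedded in its successor, altering any record invalidates every later block, which is exactly the property that makes the ledger attractive for non-repudiable logging.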

Another important aspect is that participants in a Blockchain are represented by their public / private key pairs. So, unlike the identity of an Internet service, a Blockchain identity cannot be “confiscated”. Looking at identity management from a different perspective, a Blockchain could be an ideal public database for retrieving certificates or other types of digital identities without the need for trusted third parties. A Blockchain could be a fundamental element in establishing a root of trust regarding device identities in the IoT as well as recording device transactions.

However, the reality differs substantially from this ideal model, as visible in current Blockchain implementations such as Bitcoin. Before a transaction can enter the ledger, some heavy processing is required (solving a hashing problem). Not only is this processing energy-intensive (impacting our environment on a scale comparable to the CO2 emissions of aircraft); most processing power is also concentrated in a few geographical locations across the world. Lastly, computing a valid entry to the ledger happens in anything but real time. It must be added, however, that Bitcoin is not the only Blockchain implementation, and others such as Ethereum come closer to the ideal model in some respects.
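The "heavy processing" mentioned above is the proof-of-work puzzle: miners search for a nonce that makes the block hash meet a difficulty target. A toy Python sketch (with a deliberately tiny difficulty) shows why producing an entry is expensive while verifying it is cheap:

```python
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    """Search for a nonce whose hash starts with `difficulty` zero hex digits.
    Real networks like Bitcoin use a vastly higher difficulty, which is
    what makes mining so energy-intensive."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes tens of thousands of attempts even at this toy difficulty
nonce = proof_of_work("license-transaction-42", difficulty=4)

# Verifying it takes a single hash computation
assert hashlib.sha256(
    f"license-transaction-42{nonce}".encode()
).hexdigest().startswith("0000")
```

This asymmetry (hard to produce, trivial to check) is what secures the ledger against retroactive alteration, and also what makes the process anything but real time.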

Besides such technical limitations, our goal is to consider how Blockchain can fit in with our technology portfolio and our customers’ scenarios:

  1. Recording B2C transactions: Let’s assume a scenario where an ISV has licensed an end user with 100 printing tokens for a 3D printer. Each time a unit is printed, one token is subtracted from the current balance, and the printer refuses to print any more units once the balance reaches zero.

    Questioning the use of Blockchain: Such a typical scenario already raises questions about the usefulness of Blockchain technology. Since this case illustrates a direct interaction between the ISV and the end user, there is no need to publicly record or validate transactions. In fact, current Wibu-Systems’ technology is sufficient to securely manage such “unit counters”.

    This situation may change once Wibu-Systems starts offering a fully cloud-based licensing service and consumers begin to require a non-repudiable transaction log.

  2. Validating B2B transactions: In some cases, a more flexible approach is to allow ISVs to generate licenses on premise and only periodically report back to Wibu-Systems how many licenses were generated, based on values read from the hardware module (the FSB) used to generate them.

    Questioning the use of Blockchain: While a centralized ledger could be one possible way to record the generation of licenses and later validate the total number, there is little reason why this could not be done with a standard database. Whether or not the ISV reports the correct figures is a matter of (socio-)technical trust, separate from the actual storage and validation process. In other words, even if we did implement a private Blockchain, we would still need to ensure that the transactions are generated reliably.

  3. Evaluating B2C transactions: When software is used based on the conditions stipulated by the ISV, a Blockchain entry could serve as an unforgeable data source to allow the calculation of remaining usage time or allowed feature invocations. However, the evaluation is done by program logic, separate from the actual ledger, unless Smart Contracts are used.
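The unit counter from scenario 1 can be sketched in plain Python. This is only an illustration of the counting logic, not Wibu-Systems’ actual CodeMeter API:

```python
class UnitCounter:
    """Illustrative unit counter, as in the 3D printer scenario:
    the ISV grants 100 printing tokens and each print consumes one."""

    def __init__(self, units: int):
        self.units = units

    def consume(self) -> bool:
        """Return True if a print job may run, False once the balance is zero."""
        if self.units <= 0:
            return False
        self.units -= 1
        return True

printer_license = UnitCounter(100)

# Request 105 print jobs; only 100 are allowed before the counter runs dry
jobs_printed = sum(printer_license.consume() for _ in range(105))
```

As the text notes, the hard part is not this logic but storing the counter tamper-proof on the end user’s side, which is what the existing secure hardware already provides without a public ledger.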

Considering the concept of Smart Contracts, Blockchains such as Ethereum already provide (almost) Turing-complete scripting languages. These could be used to define the execution of code based on transactions. For example, in the context of software licensing, code could be executed once a certain unit counter reaches a defined threshold. Yet again, Blockchains and Smart Contracts are designed for interactions in a network of (untrusted) peers and are thus not the primary choice for the software licensing supply chain.
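As a rough analogue of such a Smart Contract (written in Python for illustration, rather than a real contract language like Ethereum’s Solidity), code can be bound to a counter and fire automatically once a defined threshold is reached:

```python
class LicenseContract:
    """Toy analogue of a smart contract: a callback bound to a usage
    counter, executed automatically at a defined threshold."""

    def __init__(self, threshold: int, on_threshold):
        self.count = 0
        self.threshold = threshold
        self.on_threshold = on_threshold
        self.fired = False

    def record_use(self):
        self.count += 1
        # Fire the bound code exactly once when the threshold is crossed
        if not self.fired and self.count >= self.threshold:
            self.fired = True
            self.on_threshold(self.count)

events = []
contract = LicenseContract(
    threshold=3,
    on_threshold=lambda n: events.append(f"renewal offer after {n} uses"),
)
for _ in range(5):
    contract.record_use()
```

In a real Blockchain the contract code and counter would live on the ledger and be executed by the peer network; here both sit in one trusted process, which is precisely why the supply-chain scenario above does not need the untrusted-peer machinery.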

Overall, at Wibu-Systems, we believe that Blockchain technologies will play an important role in future distributed systems (and are currently peaking on the hype curve), but they do not currently have a natural place in the existing application scenarios of our customers. However, we are actively developing a cloud-based licensing and software protection service (CmCloud) where a private Blockchain could be one possible way to address the non-repudiable logging of license transactions if an acceptable economic cost/benefit ratio for all participants can be met.

]]>
New Licensing Models Expand Market Potential Tue, 13 Jun 2017 11:20:00 +0200 https://www.wibu.com/uk/blog/article/new-licensing-models-expand-market-potential.html post-50 https://www.wibu.com/uk/blog/article/new-licensing-models-expand-market-potential.html Terry Gaul Modern usage-based licensing is trending in the software monetization market as customers gain say in how they want to consume their software. New Licensing Models Expand Market Potential by Terry Gaul 13-06-17

“Old School” perpetual software licensing agreements are rapidly falling out of favor, as they often place restrictions on product use that do not fit the dynamic business needs of the end user. Many smaller companies, for instance, benefit from the ability to tailor license usage and the associated costs, reducing their upfront expenditures and more closely matching their business cycles.

This is one reason why modern, usage-based licensing is trending in the software monetization market as customers gain increasing say in how they want to consume and pay for their software, according to Frost & Sullivan’s Global Software Licensing and Monetization Market report.

For ISVs, the flexibility to offer licensing models tailored more closely to their customers’ business needs can help them reach new markets that a conventional perpetual licensing strategy might never have opened.

Take for example, the case of Agfa HealthCare, a leading provider of diagnostic imaging and healthcare IT solutions for hospitals and care centers around the world. In the digital healthcare market, computed radiography is an important driver in making medical imaging more accessible, especially for smaller healthcare facilities in emerging countries. However, the upfront capital investment in equipment and software remains an important hurdle for healthcare providers with a relatively modest need for medical imaging.

According to Louis Kuitenbrouwer, Agfa HealthCare’s Vice President Imaging: "The low-end market for computed radiography is growing quickly. Small laboratories, orthopedic doctors and other healthcare facilities want to provide medical imaging, but often cannot afford the traditional upfront investment in hardware and software. Agfa HealthCare's Easy Payment Scheme, powered by Wibu-Systems' versatile license lifecycle management, provides digital imaging at an affordable and predictable price."

With this in mind, Agfa HealthCare developed a computed radiography solution that offered a complete digital imaging package, including equipment and software, without upfront investment. With the help of Wibu-Systems, they were able to implement a solution for time-based licensing that allows the healthcare providers to use the computed radiography package in a pay-per-use scenario. Their customers pay as they go, with a fixed down-payment followed by equal and regular installments, thus keeping upfront capital investment low and cost management easy.

Samith Kakkadan, Agfa HealthCare’s Imaging Product Manager, added: "The time-based licensing model allows the user to pay according to the imaging volume he needs - and also guarantees our return on the investment in the solution."

With Agfa HealthCare’s Easy Payment Scheme, all a healthcare provider needs is an Internet connection and a debit or credit card. An online interactive portal reminds the healthcare provider to make each payment, which then allows the system to be used until the next installment’s due date.
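The mechanics of that scheme can be sketched as follows. This is a simplified illustration of the pay-as-you-go idea only, with invented names and a 30-day billing period; it is not Agfa’s or Wibu-Systems’ actual implementation:

```python
from datetime import date, timedelta

class InstallmentLicense:
    """Sketch of time-based pay-per-use licensing: each recorded payment
    keeps the system usable until the next installment's due date."""

    def __init__(self, start: date, period_days: int = 30):
        self.period = timedelta(days=period_days)
        self.valid_until = start + self.period  # down-payment covers one period

    def record_payment(self):
        # Each installment extends usage by one billing period
        self.valid_until += self.period

    def is_usable(self, today: date) -> bool:
        return today <= self.valid_until

lic = InstallmentLicense(start=date(2017, 6, 1))
assert lic.is_usable(date(2017, 6, 15))       # within the paid period
assert not lic.is_usable(date(2017, 8, 1))    # installment overdue
lic.record_payment()                          # next installment arrives
assert lic.is_usable(date(2017, 7, 15))       # usable again
```

The key design point is that enforcement lives in the license itself, so a missed payment simply stops the system until the next installment, with no need to repossess equipment.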

The ability to offer flexible licensing models is an important component in every ISV’s toolbox. You can read the complete story here and see how to create and implement new licensing models.

]]>
Keeping offline is not really safe Tue, 16 May 2017 08:44:00 +0200 https://www.wibu.com/uk/blog/article/keeping-offline-is-not-really-safe.html post-69 https://www.wibu.com/uk/blog/article/keeping-offline-is-not-really-safe.html Rüdiger Kügler In today’s age of the IoT, true offline computers are a relic of a bygone age. Go for proxy strategies or securely controlled outbound connections. Keeping offline is not really safe by Rüdiger Kügler 16-05-17

“We are keeping our computers offline. That way, they are safe.” I don’t know whether the IT professionals at French carmakers, British NHS hospitals, or German train operators believed this. What I do know is: “WannaCry” has probably infected many tens of thousands of computers in more than 100 countries.

One common tactic of cybercriminals is to send emails with viruses hidden in a deceptively trustworthy attachment. Careless users might download and open the attached file. If their virus scanners are not kept up to date or if they have the bad luck to be one of the first victims of an attack (before the virus scanners have learned about the attack), the virus will infect their systems.

But “WannaCry” did not just affect personal computers. It hit ticket machines and digital timetables. It is quite reasonable to assume that nobody used a ticket machine to read their emails while waiting for their train to arrive. Let us consider other attack scenarios. Viruses on websites work not unlike their email counterparts. Again, the entire disaster begins with a single action on the part of the user: They visit an affected website. An error in the browser, often a flaw in a browser plugin, is exploited; the malware is executed and the computer infected. One high-profile example is the infamous “BKA-Trojaner” in Germany. While the email attack would seem to be entirely the fault of the unwitting user, this scenario exploits a weakness in the software itself.

Another, even more insidious scenario needs no actions on the part of the user at all. Almost every modern computer has certain services and processes that accept and process data. These processes and services sit in the background and wait to be called. If an attacker finds a way to call such a process and get it to accept manipulated data that gives him access to other parts of the system, he can take over the entire machine. This attack can happen at any time when the computer is connected to a public network or when the attacker is already in the internal network.

Complex software like operating systems means that such weak spots will always be around. There are experts tasked with finding them. Some of them might be motivated by the wish to improve the systems they are working with: They tell the software developers about the possible exploits and give them the time they need to patch the problem with an update. After that time, they publish their findings and warn the public about the problem. And then there are their criminal brethren: They might sell their findings to the highest bidder. An exploit known only to an exclusive group of people can be used and abused by them without fear of repercussions until it becomes known to the actual software developer. This is big business. Still, it is easy for users to protect themselves against known exploits, simply by installing all security updates as soon as they are published. This goes for every operating system and is definitely not limited to Windows.

“We are keeping our computers offline. That way, they are safe.” This might be true if, that is, the computer is indeed completely and absolutely offline, never goes online, and never comes into contact with other devices that are or have ever been online. The second condition already shows us that this is virtually impossible in current practice. Even the most highly classified environment, like a nuclear reprocessing plant, will have visits from service technicians who install updates to the systems. And where do the updates come from? They might come directly from the technicians’ laptops or from a CD that was produced on a different computer. Even if you took the time to print out the update code and type it manually into the target machine, a virus might well be hidden in that code itself.

The greatest and most fatal drawback of the “keeping offline” strategy is that it bars the gates to immediate security updates. In essence, once the attacker has a foot in the door, everything is there for the taking, and there is nobody to stop him.

“Going online” in a closely controlled fashion might be an appealing alternative. The computer that needs to be protected is connected to a closed network, which is fenced off with a firewall. The firewall only allows outgoing connections that the computers on the inside need to establish themselves. These connections can then be used to download and install updates.

But can you trust the official update servers? What if they are infected as well? In these cases, a “proxy strategy” might help, with the updates only released to the network after an internal inspection. The update is installed and tested on a fenced-off trial computer. After clearance and an appropriate wait, the update is then put on the internal “proxy server”, from where the computers on the network can get it. The greater the testing resources you are prepared to invest, the less time you need to wait before releasing the update. After two to five days, other users in the wild will have encountered problems with the update if there are any. Waiting for longer than five days seems like gross negligence.
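The release gate of such a proxy strategy boils down to a simple time check. A minimal sketch, with an illustrative function name and the two-to-five-day soak window from the text (more testing resources would justify a shorter wait):

```python
from datetime import date, timedelta

def ready_for_release(received: date, today: date, soak_days: int = 5) -> bool:
    """Proxy-strategy gate: an update is only published to the internal
    proxy server after it has soaked on a fenced-off trial machine for
    `soak_days` days, giving users in the wild time to surface problems."""
    return today >= received + timedelta(days=soak_days)

update_received = date(2017, 5, 10)
assert not ready_for_release(update_received, date(2017, 5, 12))  # still soaking
assert ready_for_release(update_received, date(2017, 5, 15))      # clear to publish
```

In practice the gate would also require a passed test result from the trial machine, not just elapsed time; the calendar check alone is the "appropriate wait" half of the strategy.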

But even this strategy of controlled sourcing and releasing is no panacea. The necessary checks cost valuable resources and time – time during which the systems in the network are not protected by the updated software. And the outgoing connection might be enough for criminals to steal data if they managed to get even a single infected computer into the network. That is why even outgoing communication needs to be as limited and closely controlled as possible. It should only allow named computers and only the essentially required ports. The old rule holds true: As little as possible, as much as necessary.

In today’s age of Industrie 4.0 and the Internet of Things, true offline computers are a relic of a bygone age. Last week’s attack shows that the strategy indeed exacerbates the risks because of the many older un-patched and un-updated systems that are caught in the fray. A “proxy strategy” with the manual checking and releasing of updates is an effective, but costly solution. A securely controlled outbound connection for security updates is a cheaper alternative. It also requires a certain amount of effort, because its rules need constant supervision and some critical thinking. However, it could also be used for installing updates to the licenses on the connected computers, both device-bound CmActLicenses and licenses on secure CmDongles.

]]>