Tax and expense management made easy for freelancers and the self-employed

Coconut has today launched a new smart current account combining banking and accounting services, designed specifically for the UK’s ever-growing freelance and self-employed workforce.

The app-based business current account will prepare customers for HMRC’s ‘Making Tax Digital’ which starts rolling out from April 2019. Coconut features automated tax management and expense tools, giving customers visibility into how much tax they owe with a real-time estimate, while also categorising their expenses for tax and allowing them to stay on top of client payments with instant notifications.

The Coconut Start account, which ultimately aims to eliminate tax returns, is free and can be opened in minutes on a mobile – instead of the weeks-long wait that is the current norm. The app will offer optional extra services, such as VAT management.

Recent data from Coconut’s Self-Employment Survey highlights how money becomes complicated business when you work for yourself. Unexpected tax bills, unpaid invoices and financial admin are holding people back from finding financial security, with a quarter of respondents admitting that budgeting for taxes is one of the top five headaches of working for themselves. Keeping track of expenses (24%) and completing tax returns (22%) were also high on the list of challenges that self-employed people face, with almost a quarter storing their receipts in a box to be sorted at the end of the year.

Sam O’Connor, Co-Founder and CEO of Coconut said:
“The growth of self-employment in the UK is one of the biggest structural changes in our workforce of our time, but self-employed people are still one of the most underserved groups of businesses when it comes to banking products and services that meet their specific needs.

“Staying on top of tax and expenses, getting paid on-time or managing an unpredictable flow of income is a big worry and time-suck for customers. And this is only going to become a bigger burden with Making Tax Digital requiring digital tax submissions quarterly instead of annually. We created Coconut to sort out these challenges for freelancers and we ultimately aim to eliminate the need for tax returns, removing a huge amount of stress for business owners.”

The nature of the UK’s workforce is changing. Growth in the number of self-employed workers has massively outstripped growth in standard employment in recent years, with a record 4.6 million people now choosing to work for themselves.

Despite making up such a large proportion of the workforce, self-employed people lack digital solutions specifically tailored to their needs and are often time-poor, which makes completing tax returns a major source of frustration. This is supported by HM Revenue & Customs’ (HMRC) announcement last week that more than three million UK taxpayers had yet to file their online self-assessment tax returns for the 2016-17 financial year, putting millions of self-employed professionals at risk of an immediate £100 late-filing penalty – even if they don’t owe any tax. Last year, 840,000 people filed their tax return late, suggesting a windfall for HMRC of £84m in late fees – something Coconut aims to make a thing of the past.
HMRC’s new requirement for businesses and self-employed people to keep digital records and send quarterly tax submissions through its Making Tax Digital initiative will only compound this problem for unprepared sole traders. Coconut aims to solve it with an offering that will make the pain around self-assessment deadline day a thing of the past for the self-employed.

Singapore aims to become e-FX hub for Asia

Speaking at FX Week Asia, MAS’s Jacqueline Loh says Singapore wants more price discovery to take place

Singapore is bidding to attract more foreign exchange players in an effort to become the largest electronic price-discovery hub in Asia, Jacqueline Loh, deputy managing director of the Monetary Authority of Singapore (MAS), told delegates at the 14th annual FX Week Asia conference.

Singapore is currently the third-largest FX trading centre in the world, after London and New York, and its share of trading volumes is growing. But while Singapore saw its slice of global FX volumes inch higher to 7

Android SDK cozies up to Kotlin

With the August 6 production debut of the Android 9 Pie mobile OS, Google has released an Android SDK with special capabilities for development with the Kotlin language.

The SDK has nullability annotations for frequently used APIs, preserving null-safety guarantees when Kotlin code is calling into annotated APIs in the SDK. To ensure that newly annotated APIs are compatible with existing code, an internal mechanism provided by the Kotlin compiler team marks APIs as recently annotated. These APIs result in warnings instead of errors from the Kotlin compiler. Developers need to use Kotlin 1.2.60 or later.
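
To see what this looks like in practice, here is a minimal sketch in Kotlin; it assumes an Android module compiled against API 28 with Kotlin 1.2.60 or later, and that the API being called is among the newly annotated ones.

```kotlin
import android.content.Context

// Illustrative sketch: once an SDK method such as Context.getSystemService
// carries a @Nullable annotation, Kotlin sees a true nullable type instead
// of a "platform type", so unsafe dereferences get flagged.
fun describeActivityService(context: Context): String {
    val manager = context.getSystemService(Context.ACTIVITY_SERVICE)
    // manager.toString()  // the compiler would warn: receiver may be null
    return manager?.toString() ?: "service unavailable"  // null-safe access
}
```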

The intention is for newly added nullability annotations to produce only warnings, with the severity level increased to errors in a subsequent SDK. This stepped approach is designed to give developers time to update their code.

Google has endorsed Kotlin for use in building Android applications. But nullability annotations also can benefit developers using Java, the traditional language of Android development, if they use the Android Studio IDE to find nullability contract violations. Plans call for adding more nullability annotations to existing Android APIs in future versions of the SDK as well as ensuring new APIs are annotated.

Where to download the Kotlin-friendly Android SDK

You can download the Android SDK by choosing Tools > SDK Manager in Android Studio and selecting Android SDK in the left-hand menu. On the SDK Platforms tab, check Android 8.+ (the Android 9 Pie entry) and click OK to install Android SDK Platform 28 revision 6. Then set your project’s compile SDK version to API 28. Android Studio itself can be downloaded from the project website.
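
For reference, the compile SDK setting lives in the module-level Gradle script; a minimal sketch using the Gradle Kotlin DSL follows (plugin setup omitted, and the minSdkVersion value is an arbitrary assumption for illustration).

```kotlin
// build.gradle.kts (module level), sketch only
android {
    compileSdkVersion(28)      // compile against Android 9 Pie (API 28)
    defaultConfig {
        minSdkVersion(21)      // illustrative assumption
        targetSdkVersion(28)
    }
}
```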

Cisco, Arista settle lawsuit, refocus battle on network, data center, switching arenas

After nearly four years of slashing at each other in court with legal swords, Cisco and Arista have agreed to disagree, mostly.

To settle the litigation mêlée, Arista has agreed to pay Cisco $400 million, which will result in the dismissal of all pending district court and International Trade Commission litigation between the two companies.

For Arista, the agreement should finally end any customer fear, uncertainty and doubt caused by the lawsuit. In fact, Zacks Equity Research wrote that the settlement is likely to immensely benefit Arista.

“The company is profiting from the expanding cloud networking market primarily driven by strong demand for scalable infrastructure, which has become a necessity to support new applications and services. Apart from delivering high capacity and availability, cloud networking promises predictable performance along with programmability that enable integration with third-party applications for network, management, automation, orchestration and network services,” Zacks wrote.

For its part, Cisco walks away with a certain amount of vindication, having protected its technology and, to a lesser degree, its market share: it forced Arista to develop workarounds and at one point briefly had the company’s infringing products banned from import into the US.

In a joint statement the companies outlined the broad agreement:

“Cisco and Arista have come to an agreement which resolves existing litigation and demonstrates their commitment to the principles of IP protection. They have agreed that, with limited exceptions, no new litigation will be brought over patents or copyrights related to existing products, for five years. In addition, for three years, they will use an arbitration process to address any patent issues regarding new products. As part of this agreement, Arista will be making a $400 million payment to Cisco, is committed to maintaining the product modifications it made as a result of previous rulings and will be making limited changes to further differentiate its user interfaces from Cisco’s.”

In an SEC 8-K filing, Cisco further defined the agreement, writing that for three years any claim regarding patent infringement of new products, or of new features of existing products, will be resolved by an arbitration process. The process will not apply to claims of copyright infringement and trade-secret misappropriation, among others.

In addition, for five years neither party will bring an action against the other for patent or copyright (except for any claims of source code misappropriation) infringement regarding their respective products currently on the market, Cisco wrote.

In its 8-K filing, Arista wrote that it will grant Cisco a release from all past antitrust claims. These mutual releases will extend to Arista’s and Cisco’s customers, contract manufacturers and partners. Arista also said it agreed to make certain modifications to its command-line interface (CLI).

It is the CLI issue that remains unresolved, however. Cisco still has an appeal pending over a copyright verdict that went Arista’s way in 2016. Cisco wrote that the two companies “will continue to seek appellate court review of that verdict regarding legal protection for user interfaces.” A decision on this appeal is expected later this year.

The agreement likely will cool tensions between the companies but also leave some embers burning, because the two are so competitive in many markets. It also might be a little personal for Cisco, as Arista employs a number of ex-Cisco engineers, including CEO Jayshree Ullal and co-founder Andy Bechtolsheim.

On the competitive side Gartner wrote in its recent data center report that Arista has 5,000 data-center-networking customers, and in 2017, Arista grew port shipments at market rates. It also introduced highly scalable routing as a feature on its high-end spine switches.

“All enterprises should shortlist Arista, particularly large organizations that need flexible and programmable solutions, provided there is appropriate local sales and channel coverage,” Gartner said. “Arista is experienced and well-regarded by customers in large-scale environments that require programmable infrastructure that is integrated with a wide range of third-party software orchestration, including VMware, Puppet and Ansible.”

On the cautionary side, Gartner said Arista’s geographic coverage is still limited, when compared to its larger rivals. “Outside of North America and Europe, we observe Arista focusing on specific verticals and very large accounts, so customers should verify appropriate local resources.”

Cisco on the other hand has over 100,000 data-center-networking customers and “offers a broad array of infrastructure hardware and software, and its flagship data center networking offering is Cisco ACI [Application Centric Infrastructure], which includes Nexus 9000 hardware switches and the APIC [Application Policy Infrastructure Controller] controller. The vendor provides switches, NOS, and the requisite control, management and automation capability. Cisco is relevant in nearly all verticals and geographies,” Gartner wrote.

A knock against Cisco is that based on Gartner’s client feedback and analysis, “migrating from legacy infrastructure to Cisco ACI infrastructure is complex for a combination of financial, technical and cultural reasons. This has led to limited ACI adoption in the market, and also limited usage of the full ACI feature for customers that have adopted it,” Gartner said.

Does Facebook even need a CSO?

On August 1, Facebook’s chief security officer (CSO), Alex Stamos, posted that he’s leaving on August 17. “We are not naming a new CSO,” says an unnamed Facebook spokesperson. Instead, the spokesperson continues, “We embedded our security engineers, analysts, investigators and other specialists in our product and engineering teams.” In other words, in less than two weeks, no central point person will own security. “The senior leaders of those teams will be responsible for keeping Facebook and people’s information secure,” the spokesperson explains.

Unlike other industries, where companies with similar products face the same security issues, social media doesn’t really have any best-practice guidelines for data protection. For starters, the industry is too small. According to Pew Research Center, only eight platforms are used by at least 20 percent of U.S. adults. Even those don’t work with the same types of data: YouTube and Facebook top the list, and while Facebook streams videos, the two collect and store radically different files and information.

“The spread of risk and concern and extremes inside of social media varies significantly,” according to Michael Coates, a former Twitter chief information security officer (CISO) who left in April. “The requirements and expectations that could be on a Twitter or a Facebook would differ greatly from a Pinterest or a Snapchat,” he says.

That’s why when you ask Coates’ opinion on Facebook’s recent decision to get rid of its chief security officer role, he’s hesitant to judge: “We can’t conjecture on what specifically is happening at Facebook,” he says, but adds he’s always concerned to see companies “move from a structure that has a centralized security leader to a distributive model.”

Facebook security: What is at risk

That’s exactly what Facebook has done. “When you move from a structure that has a centralized security leader to a distributive model,” Coates says, there’s a long list of risks. For starters, he explains, “If security is what you do when you have free time, nobody does it; nobody has time.” Then it’s tough to even identify security risks or to get leadership to agree on how to prioritize them against new product features. Finally there’s the security theater of it all.

“There’s definitely a PR perspective of it,” Coates says. Depending on the source, in 2014 and 2015, a Facebook developer, Aleksandr Kogan, either breached or sold user data to British firm Cambridge Analytica, where Russia accessed it, raising suspicions that it was used to rig the United States’ presidential election. Facebook CEO Mark Zuckerberg apologized to Congress on April 11 for its role in enabling Cambridge Analytica to acquire its data.

On June 5, the company faced data nightmares again, with the New York Times revealing Facebook gave a Chinese firm flagged by U.S. intelligence access to user data under a partner data sharing program. Then on August 6, the Wall Street Journal reported that Facebook asked banks for users’ credit card transactions and account balances — an allegation Facebook staunchly denies.

A seat at the table

As sister publication CIO has reported, not every company needs a CSO. “It’s not a person who needs a seat at the table,” Simple Tire CIO CJ Das said at a CSO event. “The topic needs a seat at the table.” Indeed, the Facebook spokesperson says, “We expect to be judged on what we do to protect people’s security, not whether we have someone with a certain title.”

Pinterest and Tumblr don’t have CSOs. Neither they nor Twitter responded to our requests for interview, but Coates says Twitter filled his opening with an interim security leader immediately after he left, announcing plans to hire permanently.

“The creation — and I guess I would say appointment of any leadership role,” Coates notes, “also tells a story to the public, intentional or otherwise. In some regards, a chief security officer is also a central point to inspire confidence that this is something where they’re putting a senior role at the table to tackle this issue.”

That’s why, he says, New York State mandates all financial institutions have a CSO. “Because there is a challenge in security when the work is distributed amongst teams without a central owner, you can lose some of that experience and central visibility that a security organization brings to the table,” Coates explains, “A commitment to a high placed individual shows the company has kind of matured to that level and is thinking of it that way.”

“This is not to say by any means that Facebook is not mature,” he quickly adds. “But those are some of the things that people associate with the presence of a chief security officer.”

Is the traditional CISO role a good fit for social media?

In the end, the decision to do away with the role is Facebook’s. The company is an independent business, after all — not a recognized utility or a federally regulated bank. Some of the problems it’s having might not fit under a traditional security role anyway: Whether Facebook willingly sold data or not, there’s a difference between a sale and a breach. Is it security’s job to police business deals or to make sure users are real?

Normally, no — but by the nature of the beast, social media might be an exception. Coates says, “Given the sample size, it’s hard to say, because there’s only a handful of companies in this space and the security organizational structure at companies varies dramatically across even companies within the same industry.” Depending on each platform’s business structure, he adds, “it’s not uncommon for something like fake news or bots to be within an engineering team.”

“There are traditional elements that we associate with the role, such as IT security and — more these days — application and product security,” but account takeover security, anti-bot considerations, and fake news are relatively new problems, he explains, so “there is no playbook on where they go and it very much depends on how the company wants to tackle it.” Some CSOs, he continues, even oversee “elements of physical security — executive protection, building security, physical perimeter control.” At Facebook, though, those priorities aren’t going anywhere, as chief global security officer Nick Lovrien says, “There are no changes to my role, responsibilities or organizational structure.”

“There are no hard and fast rules between a CISO and a CSO,” Coates says. “The CSO is really a function of what the business wants it to be.” At Facebook, that’s apparently non-existent.

Cisco software, subscription strategies pay off

Cisco’s strategy of diversifying into a more software-optimized business is paying off – literally.

The software differentiation was perhaps never more obvious than in its most recent set of year-end and fourth quarter results. (Cisco’s 2018 fiscal year ended July 28.) Cisco said deferred revenue for the fiscal year was $19.7 billion, up 6 percent overall, “with deferred product revenue up 15 percent, driven largely by subscription-based and software offers, and deferred service revenue was up 1 percent.”

The portion of deferred product revenue that is related to recurring software and subscription offers increased 23 percent over 2017, Cisco stated. In addition, Cisco reported deferred revenue from software and subscriptions increasing 23 percent to $6.1 billion in the fourth quarter alone.

“We’re seeing the returns on the investments we’re making in innovation and driving the shift to more software and subscription,” Kelly Kramer, Cisco’s CFO, said during the company’s financial results conference call.

Chuck Robbins, chairman and CEO of Cisco, said during the call that the launch of the Catalyst 9000 was the company’s first attempt to sell a subscription software offering on top of a core networking product.

“That has gone as we’ve said on prior calls reasonably well. I’m very pleased with how the adoption has been from our customers, they understand the value… we had roughly 9,650 plus customers on the 9K as of the end of the quarter,” he said.

Robbins said that in the coming quarters, as the company brings new products to market – particularly in the enterprise networking space, but across the portfolio – it will apply that same strategy of adding advanced features such as analytics, automation and security to ensure future renewals.

Indeed, the company has made network software and application development a key push in recent months. For example, in May Cisco made a move to broaden the use of its DNA Center by opening up the network controller, assurance, automation and analytics system to the community of developers looking to take the next step in network programming. Introduced last summer as the heart of its Intent Based Networking initiative, Cisco DNA Center features automation capabilities, assurance setting, fabric provisioning and policy-based segmentation for enterprise networks.

In addition, all four acquisitions Cisco has made this year have been software related.

The most recent is a $2.35 billion cash and stock play for network-identity, authentication and security company Duo. The deal, which is still pending, is Cisco’s biggest since its $3.7 billion buy of another software company, performance-monitoring firm AppDynamics in 2017, and its largest in the cyber security sector since its $2.7 billion Sourcefire purchase in 2013.

The company bought artificial intelligence firm Accompany in May, mobile application firm July Systems in June and cloud technology provider Skyport Systems in January.

In addition to those software moves, the joint Google and Cisco Kubernetes platform for enterprise customers should appear before the end of the year.

Mingis on Tech: 3 big takeaways from Android Pie

So now we know: P is for Pie – as in Android Pie, the latest iteration of Google’s mobile OS. It officially arrived Aug. 6, is already rolling out to Pixel devices and – depending on how quickly other Android device makers get moving – should show up for non-Pixel users over the next few months.

That makes this a good time to hear from Computerworld‘s JR Raphael about just what users can look forward to when they finally get their hands on their upcoming slice of Pie.

IBM, Maersk launch blockchain-based shipping platform with 94 early adopters

After launching a proof of concept earlier this year, IBM and Maersk have unveiled TradeLens, the production version of an electronic ledger for tracking global shipments; the companies say they have 94 participants piloting the system, including more than 20 port and terminal operators.

The jointly developed electronic shipping ledger records details of cargo shipments as they leave their origin, arrive in ports, are shipped overseas and eventually received.

During the transportation process, all of the involved parties in the supply chain can view tracking information such as shipment arrival times and documents such as customs releases, commercial invoices and bills of lading in near real time via the permissioned blockchain ledger.

More than 160 million such shipping events have been captured on the platform, according to IBM and Maersk. “This data is growing at a rate of close to one million events per day,” the companies said.
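
The article doesn’t publish TradeLens’ schema, but conceptually each tracked event ties a shipment to a typed, timestamped milestone recorded by a permissioned party; a hypothetical sketch, with all field names assumed for illustration:

```kotlin
import java.time.Instant

// Hypothetical shape of a tracked shipping event; field names are
// illustrative assumptions, not the actual TradeLens schema.
data class ShippingEvent(
    val shipmentId: String,   // container or bill-of-lading reference
    val eventType: String,    // e.g. "GATE_IN", "VESSEL_DEPART", "CUSTOMS_RELEASE"
    val location: String,     // port or terminal code
    val timestamp: Instant,   // captured in near real time
    val submittedBy: String   // the permissioned party that recorded the event
)
```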

Traditionally, the international shipping industry’s information systems have relied on paper legal documents, with electronic data transmitted via electronic data interchange (EDI) – a 60-year-old technology that doesn’t deliver real-time information.

Shipping participants have also shared documents via email, fax and courier.

When information is entered or scanned in manually, TradeLens can track critical data about every shipment in a supply chain, and it offers an immutable record among all parties involved, the companies said.

Some shipping manifests can also be moved via an API to the TradeLens platform, so that manufacturers and others in the supply chain have more timely information and improved visibility to the process.

Along with freight forwarders, transportation companies and logistics firms, more than 20 port and terminal operators are using or have agreed to pilot TradeLens, including PSA Singapore, International Container Terminal Services Inc., Patrick Terminals and Modern Terminals Ltd. in Hong Kong. Customs authorities in the Netherlands, Saudi Arabia, Singapore, Australia and Peru are also participating.

“This accounts for approximately 234 marine gateways worldwide that have or will be actively participating on TradeLens,” IBM said.

Hong Kong-based Modern Terminals became a beta partner of the TradeLens blockchain earlier this year.

“Digitized documentation that can at the same time be authenticated will drive down costs and increase supply chain security,” Modern Terminals CEO Peter Levesque said via email.

As a port operator, Modern Terminals doesn’t need to track shipments outside of its operating environment, but it does track the status of containers coming in and out of its terminals via a Terminal Operating System (TOS); many such systems use EDI, wireless LANs and radio-frequency identification (RFID) to monitor cargo movements. The company handles about 5.5 million shipping containers per year at its Hong Kong business unit.

The documentation that accompanies a container of cargo from the factory floor to store shelf is cumbersome and open, Levesque said. A blockchain-based electronic ledger will provide a platform where all the documentation along the way can be viewed and updated in near real time and in a secure environment by authorized supply chain participants.

It will also give customs, commerce, and border patrol agents around the world “a higher degree of certainty about what’s in the box, and who loaded it,” Levesque added.

“Modern Terminals plans to be a regular user of the solution once full development and testing are complete,” Levesque said. “We’ve only begun to scratch the surface on what we can use blockchain technology for in the transportation and logistics industry. Tackling the opportunity for improving the transmission of documents around the world is a great beginning. The next decade of development will be exciting to watch.”

What Vitalik Buterin’s tweetstorm means for the future Ethereum blockchain

It took 75 tweets, but Ethereum blockchain founder Vitalik Buterin has clarified the roadmap for implementing a new consensus mechanism that promises to greatly increase the speed with which new entries can be added to the distributed electronic ledger technology.

Buterin devoted most of the tweets to explaining the history of Ethereum developer efforts to create a Proof of Stake (PoS) consensus mechanism that would streamline the process while also combating nefarious attacks to control blockchain content.

He also clarified that a PoS system will be implemented independently from another effort to roll out sharding on Ethereum. (Sharding is a way of distributing across the network the computational work needed to validate new entries, known as blocks, on the distributed ledger.) The PoS and sharding development efforts had been part of one project, but they will now be rolled out separately.

Proof of Work and Proof of Stake

The two most popular mechanisms or algorithms for authenticating new entries on a blockchain and governing changes to the networks are Proof of Work (PoW) and Proof of Stake.

PoW algorithms force computers on the peer-to-peer (P2P) network to expend CPU power solving complex cryptographic puzzles before they’re authorized to add data to a blockchain ledger; the computer nodes that solve the puzzles fastest are rewarded with digital coins, such as Ether on Ethereum or bitcoin on the rival Bitcoin network. The process of earning cryptocurrency through PoW is known as “mining,” as in mining bitcoin.
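
A toy sketch of the idea, purely illustrative and nothing like production mining code: grind through nonces until a hash of the block data meets a difficulty target.

```kotlin
import java.security.MessageDigest

// Toy proof-of-work: find a nonce whose SHA-256 digest of (data + nonce)
// starts with `difficulty` zero hex digits. Real networks use far harder,
// adaptive targets.
fun mine(blockData: String, difficulty: Int): Long {
    val sha = MessageDigest.getInstance("SHA-256")
    val target = "0".repeat(difficulty)
    var nonce = 0L
    while (true) {
        val digest = sha.digest("$blockData$nonce".toByteArray())
        val hex = digest.joinToString("") { "%02x".format(it) }
        if (hex.startsWith(target)) return nonce  // puzzle solved
        nonce++
    }
}

fun main() {
    println(mine("block#1", 4))  // low difficulty, so this returns quickly
}
```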

As the name suggests, PoS consensus models enable those with the most digital coins (the greatest stake) to govern a cryptocurrency or business blockchain ledger. To date, however, the most popular blockchain-based cryptocurrencies — Bitcoin, Ethereum (Ether) and Litecoin — have used PoW as their consensus mechanism.

While PoW algorithms are excellent at ensuring the authenticity of new entries posted to a ledger, they’re also slow and expensive to run.

The PoW process chews up a lot of electricity, both from running processors 24/7 and from the need for cooling server farms dedicated to mining operations. Those mining operations are siphoning off so much electricity that cities and even countries have begun clamping down on mining operations.

PoW protocols can also be extremely slow due to the lengthy process involved in solving the mathematical puzzles; approving a new entry on a blockchain ledger can take 10 or more minutes. PoW algorithms are, however, excellent at thwarting users who would try to game the blockchain, as it’s simply too expensive to expend the CPU power and time.

In contrast, PoS algorithms can complete new blockchain entries in seconds or less.

“Proof of Stake algorithms definitely have the potential to overtake Proof of Work,” said Vipul Goyal, an associate professor in the Computer Science Department at Carnegie Mellon University (CMU). “However, there are still some significant research challenges that need to be overcome before that happens.”

Ethereum began working on a PoS system in 2014 and last year introduced the mechanism on a testnet called “Casper” (as in Casper the Friendly Finality Gadget or Casper FFG). Casper was intended to be overlaid on Ethereum’s current PoW algorithm.

There have also been internal development battles over the way “finality” should be implemented on the Casper PoS system (i.e., how a block should be finalized so it’s added to the immutable blockchain).

Ethereum developer Vlad Zamfir has been creating a Casper consensus protocol called “Correct by Construction” (CBC). The difference? “Both Vitalik and Vlad agree that penalties for bad behaviour needs to be implemented. They differed on the approach — primarily, on how harsh those penalties should be,” Shawn Dexter, a research analyst with Mango Research, said via email.

In the end, a Casper PoS system will likely draw from both FFG and CBC consensus protocols, according to Jon Choi, a developer with the Ethereum Foundation.

As with other PoS models, the Casper consensus protocol would work by creating “bonded validators,” or users who must place a security deposit down before being allowed to serve as part of the blockchain consensus or voting community. As long as bonded validators act honestly on the blockchain, they can remain in the consensus community; if they attempt to cheat the system, they lose their stake (their money). Ethereum’s Casper PoS system would enable a consensus mechanism to process new transactions in about four seconds.
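
A minimal toy sketch of that deposit-and-slash mechanic (illustrative only, not Casper’s actual logic; the 32-coin minimum echoes the figure Buterin announced):

```kotlin
// Toy bonded-validator registry: join by staking a deposit, lose the stake
// when caught misbehaving (e.g., signing two conflicting checkpoints).
class ValidatorRegistry(private val minDeposit: Long = 32) {
    private val stakes = mutableMapOf<String, Long>()

    fun bond(validator: String, deposit: Long) {
        require(deposit >= minDeposit) { "deposit below minimum stake" }
        stakes[validator] = deposit
    }

    // Called with evidence of equivocation; the stake is forfeited.
    fun slash(validator: String): Long = stakes.remove(validator) ?: 0L

    fun isBonded(validator: String): Boolean = validator in stakes
}
```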

A hybrid system

Last year, two developments in the effort to implement a new consensus model came in the form of a standalone PoS mechanism named Serenity and a hybrid PoW/PoS system named Metropolis. Metropolis was divided into two phases: a Byzantine fault-tolerance mechanism launched last year, and a project known as Constantinople, the hybrid PoW/PoS system.

The Constantinople name was dropped earlier this year and the effort to implement a new Casper PoS and sharding system is now being referred to as Ethereum 2.0.

The PoS system, whether hybrid or standalone, was going to require that validators deposit 1,500 Ether coins to become part of the consensus mechanism. In his tweet storm, however, Buterin announced the number of Ether coins required to become a validator will now be 32.

Jake Yocom-Piatt, creator of the digital currency network Decred, believes the best governance model is one that employs both PoW and PoS mechanisms, as Buterin and the Ethereum development team are proposing.

In a hybrid model, deference is given to the PoS validators who can override bad behavior on the PoW network.

“If you’re a Proof of Work miner and you’re playing games and causing problems on our network, the stakeholders on the network can penalize them and strip them of their rewards,” Yocom-Piatt said in an earlier interview. “You can also vote on consensus rule changes. This acts as a dispute-resolution and decision-making mechanism for major decisions in the cryptocurrency,” Yocom-Piatt said, referring to new software releases and other blockchain changes.

In Ethereum 2.0’s latest model, the blockchain would grow in blocks using the current PoW algorithm, “but every 50 blocks is a PoS ‘checkpoint’” where finality is assessed via a network of PoS validators, a white paper explained.
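
In code terms, the checkpointing rule described above reduces to something like this sketch (an illustration of the rule, not consensus code):

```kotlin
// Every 50th block height is a PoS checkpoint at which bonded validators
// assess finality; all other blocks follow the existing PoW rules.
const val CHECKPOINT_INTERVAL = 50L

fun isPosCheckpoint(blockHeight: Long): Boolean =
    blockHeight > 0 && blockHeight % CHECKPOINT_INTERVAL == 0L
```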

Over its development lifecycle, the PoS protocol faced a number of challenges, the most difficult of which is what is known as “posterior corruptions,” which could undermine the authenticity of a blockchain. For example, a set of users on a blockchain can hold the majority stake, and then sell that stake. In a PoS system, those entities could still hold the cryptographic keys that gave them governing power in the past and use that authority to create a new blockchain or “attack chain” off the primary chain (known as a fork).

In effect, validators’ money would no longer be on the line but they could still control the blockchain’s direction — a phenomenon known as the “Nothing at Stake” PoS problem.

“If the attack chain diverges from the main chain at a fairly recent point in time, this is not a problem, because if validators sign two conflicting messages for the two conflicting chains this can be used as evidence to penalize them and take away their deposits,” Buterin wrote in his Twitter thread. “But if the divergence happened long ago (hence, long range attack), attackers could withdraw their deposits, preventing penalties on either chain.”

To deal with long-range attacks, Ethereum developers introduced a change requiring that clients log on at least once every four months and that deposits take four months to withdraw, so attackers could no longer escape penalties by withdrawing their stakes first.

Ethereum developers also considered other consensus algorithms “inspired by traditional byzantine fault tolerance theory,” such as Consensus by Bet, but eventually abandoned them as “too risky.”

Mango Research’s Dexter said the most recent Ethereum update has people confused because much of the explanatory information is contained in comment sections across various forums. Even in an explainer last week, Dexter cautioned things still may change between now and when a PoW/PoS and sharding algorithm is implemented.

Casper and sharding will be implemented on the same chain, but not together; either could come first, Buterin explained. Both will be implemented on a new overlay network known as the Beacon Chain, which will be a hard fork, or off-ramp, from the current Ethereum blockchain.

That fork isn’t expected to be contentious, so it’s unlikely to result in any “bad effect” on the Ethereum community, according to Dexter.

“Hard forks don’t have the same stigma in Ethereum like they do in [bitcoin]. In the new version, there will be a one-way smart contract (from the PoW to the Beacon Chain) that will allow users to deposit [32 Eth] in order to become a validator on the PoS chain. Users who deposited the 32 Eth will go in queue to become validators, and can start validating blocks,” Dexter said.

Buterin ended his Twitter thread by saying there is no formal timeline for implementing the new consensus mechanism. His last tweet said there are still “formal proofs, refinements to the specification, and ongoing progress on implementation,” which have already been started by three developer teams.

Martha Bennett, a principal analyst with Forrester Research, cautioned against speculating on when Ethereum 2.0 would be released or how it will perform. “The PoS consensus design took several iterations,” she said via email, “and we simply won’t know until it’s been implemented and has been running for a while whether it’ll work in the desired fashion or not.”

Reevaluate “low-risk” PHP unserialization vulnerabilities, researcher says

LAS VEGAS — In cybercrime, as in most areas of crime (or business), the more things change, the more they stay the same.

The emergence of Petya/NotPetya and other virulent forms of malware has showcased how the best and most successful black-hat hacks are not entirely new: bad actors simply take older, more established approaches or attack vectors and add a new twist. And so it is with PHP unserialization attacks, as showcased at the Black Hat conference earlier this month by Sam Thomas, director of research for Secarma Ltd, an information security consultancy.

Thomas demonstrated a new exploitation method that makes it easier for cyber-criminals to trigger critical deserialization vulnerabilities in the PHP programming language using functions previously considered low-risk. PHP unserialization vulnerabilities, also called object injection vulnerabilities, allow hackers to perform different kinds of attacks by supplying malicious inputs to the “unserialize” PHP function. (Serialization is the process of converting data objects into a plain string; the unserialize function recreates an object from such a string.) This attack vector has been documented since 2009, so the fact that these flaws exist is nothing new.

Indeed, OWASP added insecure deserialization to its Top 10 list, and last year’s massive Equifax breach was reportedly initiated through a deserialization flaw.

Given the popularity of PHP (aka PHP: Hypertext Preprocessor), a server-side scripting language that has been around since the mid-1990s, it is not surprising that bad actors have found new ways to exploit this approach. What Thomas of Secarma demonstrated at his Black Hat session, dubbed “It’s a PHP Unserialization Vulnerability, Jim, But Not as We Know It” (a shout-out to fellow Star Trek-loving cybersecurity experts), is that cyber-criminals can use low-risk functions against Phar archives to start a deserialization attack without requiring the use of the unserialize() function in a wide range of scenarios. Phar files, an archive format in PHP, store metadata in a serialized format, and that metadata gets unserialized whenever a file operation function (such as fopen, file_exists or file_get_contents) tries to access the archive file.
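
The phar trick itself is PHP-specific, but the underlying anti-pattern, attacker-controlled bytes reaching a deserializer, is universal. As an analogy only (this is not Thomas’s technique), the same class of risk looks like this on the JVM:

```kotlin
import java.io.ByteArrayInputStream
import java.io.ObjectInputStream

// Analogy, not the PHP phar attack: like PHP's unserialize(), Java object
// deserialization can instantiate arbitrary serializable classes found on
// the classpath, so it must never be fed attacker-supplied bytes.
fun riskyRead(untrustedBytes: ByteArray): Any? =
    ObjectInputStream(ByteArrayInputStream(untrustedBytes)).use { it.readObject() }
```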

“This is true for both direct file operations…and indirect operations such as those that occur during external entity processing within XML,” Thomas said during his presentation. He also issued a white paper during Black Hat detailing how this particular variant of the PHP unserialization attack can be used on WordPress sites to take full control of a web server. All the attacker needs to do is upload a valid Phar archive containing a malicious payload object onto the target’s local file system and make a file operation function access it.

This vulnerability can even be exploited using a basic JPEG image, originally a Phar archive converted into valid JPEG by changing its first 100 bytes, according to Thomas.

“The way certain thumbnail functionality within an application works enables an attacker with the privileges to upload and modify media items to gain sufficient control of the parameter used in a ‘file_exists’ call to cause unserialization to occur,” Thomas said. “A remote authenticated attacker with the ability to create [or] edit posts can upload a malicious image and execute arbitrary PHP code on vulnerable systems.”

Thomas highlighted that this unserialization technique exposes “a lot of vulnerabilities that might have previously been considered quite low-risk.”

“Issues which they might have thought [were] fixed with a configuration change or had been considered quite minor previously, might need to be reevaluated in the light of the attacks I demonstrated,” he said.


Mozilla sets termination date for Firefox’s legacy add-ons

Mozilla this week laid out the roadmap for ending Firefox support for all old-school add-ons, telling users that the end of those legacy extensions would come in just two weeks.

“Mozilla will stop supporting Firefox Extended Support Release (ESR) 52, the final release that is compatible with legacy add-ons, on September 5, 2018,” wrote Caitlin Neiman, add-on developer community manager, in an August 21 post to a company blog.

Firefox ESR is the version designed for enterprises and other users who want a more static browser; Mozilla upgrades ESR about once a year, as opposed to the every-six-week tempo of standard feature updates. Firefox ESR 52, destined to fall off the support list in two weeks, was first issued in March 2017. Its replacement, Firefox ESR 60, debuted in May of this year. Since then, Mozilla has been regularly updating both ESR versions to give customers time to migrate from version 52 to version 60.

Because Firefox ESR 52 is the final version that supported legacy add-ons, Mozilla will also soon scrub extensions from its online market. “We will start the process of disabling legacy add-on versions on addons.mozilla.org (AMO) in September,” Neiman said. As of September 6, no new legacy add-ons will be accepted to the store; all such add-ons will be disabled in early October. “Once this happens, users will no longer be able to find your extension on AMO,” Neiman warned developers.

Mozilla has taken a long time to get to this place.

Three years ago, Mozilla outlined substantial changes to Firefox’s add-on ecosystem, including a plan to introduce a new API (application programming interface) that was designed to let developers port Google Chrome extensions to Firefox. By late 2017, Mozilla was ready to bar legacy add-ons from running in Firefox, a move made November 14 with the release of Firefox 57, a.k.a. “Quantum.”

As add-on developers have redesigned their works using the WebExtensions API, instances of Firefox still harboring the legacy — and thus unsupported — versions have been automatically updated to the newer add-on format. That will happen for Firefox ESR 52 users as well. “Once a new version is submitted to AMO, users who have installed the legacy version will automatically receive the update,” Neiman said.

Firefox has been on a five-month skid in user share, according to metrics vendor Net Applications. Firefox’s July global share, for example, was 9.7%, a two-year low that signaled the possibility of even bigger trouble ahead. Last month, Computerworld forecast that if Firefox continued declining on its 12-month average, the browser would fall under 9% by November and below 8% by March 2019.

What’s new in Microsoft Visual Studio for Mac

Microsoft has released Visual Studio for Mac Version 7.6, focused on reliability, particularly in code editing.

Improvements also have been made in performance and support for Azure cloud functions. New templates enable publishing of a function to Azure. But Microsoft emphasized code editing with the Version 7.6 release.

Improvements in the code editing include:

  • JavaScript syntax highlighting has been improved.
  • IntelliSense has been improved for developers using the F# language, with the resolution of an issue in which “.” could not be used for autocompletion.
  • An IntelliSense problem was fixed in which red squiggles persisted even when there were no errors.
  • A fix was made to an issue in which Quick Fix items were not being displayed if source analysis was disabled.
  • A situation where tooltips would not disappear was fixed.

For the IDE, Microsoft improved tag-based classification for C#, reusing Visual Studio for Windows code; this is expected to improve typing performance in the editor. Also, no-op restores of NuGet packages are now supported when opening a solution, speeding up NuGet restore on solution loads. Startup time has been improved in the IDE and memory consumption reduced.

For Azure Functions, which provide event-driven compute services on demand in a serverless fashion, Version 7.6 has templates for configuring access rights, connection strings, and other binding properties. The upgrade also lets developers publish functions to the Azure Portal: right-click the project name and choose Publish > Publish to Azure.

IoT vendors talk open buildings, black hats and a jam conspiracy

Welcome to what we’re hoping is the first in a long string of regular updates from the world of IoT; everything from security to platform news will be fair game, and the aim is to help you be better grounded in the rapidly expanding Internet of Things space.

Schneider’s building open things

Schneider Electric, the Andover, Mass.-based building-infrastructure manufacturer, recently rolled out a new open framework for IoT implementations, dubbing the product EcoStruxure Building.

It’s a software platform that makes it easy for sensors and controllers to talk to each other, even in complicated, large-scale building projects where there could be a lot of both types of devices.

EcoStruxure Building also collects the data from those sensors into a back-end analysis product called Building Advisor, which uses complex analytics and remote data scientists to minimize energy use and address potential occupant complaints before they happen.

Security pros warn about IoT vulnerabilities

In what may be one of the most predictable headlines readers of this piece will see, some of the world’s leading information security professionals attending the Black Hat security conference told the media that unsecured IoT devices still pose a large-scale threat to networks around the globe.

The most-talked-about aspect of IoT security is the infamous unsecured endpoint. Like the horde of unprotected security cameras fashioned into a powerful botnet in the 2016 Mirai attacks, the fast-growing population of new IoT gadgets is a tempting target for the online world’s bad actors.

Digi touts new IoT endpoint hardware

The folks at Digi International announced last week that they’ve released a new series of wireless modems for IoT devices, called the Digi XBee3 Cellular. These are 13mm x 19mm, so they’ll fit in pretty small edge devices, and they’ve got support for ZigBee, other 802.15.4-based standards, DigiMesh and Bluetooth LE baked right in.

You can also use the company’s configuration software, XCTU, for free, letting you set up a network of devices using the XBee3 with a minimum of fuss. Digi said that the modules will gain NB-IoT – that’s narrow-band IoT – certification in October, allowing them to be used in standards-based equipment designs.

Ayla’s got new IoT software agents

As an addition to the company’s IoT PaaS offering, Ayla Networks rolled out a new portable software agent last week that is designed to make it easier for IoT architects to use whichever connectivity option suits them.

The agent is meant to be largely agnostic about the precise make and model of cellular or Wi-Fi modem that it runs on, letting almost any endpoint hardware connect smoothly back to Ayla’s cloud-based management and monitoring system. This potentially adds a lot of design flexibility for companies implementing new IoT systems – as long as they’re interested in using the PaaS architecture that Ayla provides.

ZigBee, ZigBee, everywhere a ZigBee

The consortium behind the ZigBee wireless communication standard for IoT has been quick to trumpet research released earlier this month suggesting that 500 million ZigBee chipsets have been sold worldwide.

ON World, a market research firm that covers the 802.15.4 standards category – which includes ZigBee, WirelessHART, ISA100.11a and a host of other low-rate wireless personal area networks – projects that ZigBee chipset sales will reach 3.8 billion by 2023.

The standard’s major strength is in devices for the smart home, and the researchers noted that more than a third of all networked wireless sensors in smart-home implementations were ZigBee-powered.

AR/barcode scanning platform Scandit raises $30 million Series B round

Venture capital firms GV, NGP Capital and Atomico were interested enough in computer vision/augmented reality/barcode scanning startup Scandit to lead a $30 million Series B round late last month, adding to the company’s $13 million in earlier fundraising.

Scandit’s aim is “to bring the Internet of Things to everyday objects,” thanks to the universal presence of smartphones – and their built-in cameras. The idea is to pull information from scanners and cameras into a “software-based data capture engine” for processing and standardization, making it simple to integrate information from those capture devices into a wide array of different databases and enterprise applications.

Aside from the fact that the soundtrack makes it seem like everyone in the company’s demo video is participating in some kind of sophisticated conspiracy to smuggle jam around the world, the video is a relatively straightforward presentation of the company’s ambitious goals. Is $43 million enough to reinvent the supply chain? It’d be interesting to know what Amazon thinks about this.

Analysts: SD-WAN 5-year annual growth rate tops 40%

Whether users are looking to stabilize cloud-connected resources, better manage remote networks or simply upgrade a timeworn wide area environment, software-defined-WAN (SD-WAN) technologies are what’s on the purchasing menu.

The proof lies in the forecast that this segment of the networking market will hit $4.5 billion by 2022, growing at a 40.4% compound annual growth rate from 2017. In 2017 alone, SD-WAN infrastructure revenues increased 83.3% to reach $833 million, according to IDC’s recent SD-WAN Infrastructure Forecast.

A related report from researchers at the Dell’Oro Group predicts revenue from SD-WAN software components, including controller and virtual network functions, will grow almost twice as fast as the hardware components. Over the next five years, SD-WAN software revenue will grow at a 41% compound annual growth rate, compared to 21% for hardware.

The speed of SD-WAN adoption is one of the most surprising aspects of the forecast, said Brandon Butler, a senior analyst with IDC. “The SD-WAN market is really in the early stages and we expect to see significant growth through 2022.”

IDC found that as enterprise customers add SaaS and IaaS services they will increasingly look to SD-WAN as a way of “intelligently automating how application traffic is delivered to branch sites, moving away from traditional hub-and-spoke WAN architectures and the backhauling of internet- and cloud-bound traffic to on-premises datacenters toward the increasing use of broadband internet breakout and other network transports – 4G/LTE and 5G – at the branch for cost-effective application delivery.”

Users looking to upgrade and optimize their wide area network for MPLS, cellular and broadband will find it a lot easier to do with SD-WAN, Butler said.

Important potential benefits of SD-WAN include lowering costs, both by sparing customers from buying new hardware and by easily supporting less expensive connectivity via the internet, Ethernet or LTE.

“Users have diverse workload environments, be they mobile or cloud, and SD-WAN helps bring those environments closer together,” Kiran Ghodgaonkar, Cisco senior manager of enterprise marketing, told Network World recently. “With the increased use of multi-cloud services especially, the WAN is really becoming the backbone of the enterprise.”

IDC offered up a number of other reasons SD-WAN technologies are heating up, including analytics and management.

Network infrastructure vendors are increasingly introducing analytics capabilities such as performance benchmarking and user analysis that can lead to more informed deployment and security decisions. These visibility, analysis, security and optimization tools are increasingly being applied to SD-WAN products, IDC wrote.

Increasingly, networking vendors are integrating software tools that include centralized management of enterprise campus and remote/branch-office sites. While LAN and WAN networks are still largely managed separately, IDC expects that in the coming years some networking vendors will focus on integrating management of these environments, either by building out their own offerings or by partnering with others.

As customers adopt SD-WAN technologies, there are issues that could cause some trepidation.

“A lack of consistent pricing structures could slow SD-WAN adoption. Vendors’ pricing and feature sets vary widely, and this makes it difficult to assess the economic value of solutions,” Dell’Oro’s Umeda said. “Also, the large software component of SD-WAN shifts spending to a long-term recurring operating expense from a one-time capital expense.”

Another issue is security. Users need to understand exactly what specific vendors are offering and evaluate it against their own requirements.

Researchers at Ovum wrote “many SD-WAN vendors are offering foundational security options versus listing them as roadmap items. Customers can easily service chain more robust security features by location, session, user, and application. This capability is just one example of the improved security that can be provided with advanced SD-WAN implementations.”

Umeda said that as the definition of SD-WAN evolves, security is becoming a requirement. “Because many SD-WAN users rely on unsecure Internet connectivity, integrated security is needed to ensure traffic protection,” he said.

IDC’s Butler said that security features will be a large component of SD-WAN offerings and that vendors will rapidly add security features and enter into partnerships to bolster security packages.

“By including security tools natively in networking platforms, networking vendors are shaking up buyer dynamics by essentially forcing network decision makers and security decision makers to be on the same page from early in the network infrastructure procurement process. While only a portion of vendors offer these solutions today, IDC expects this number to grow steadily in the coming years,” IDC wrote.

Key vendors driving SD-WAN growth include Cisco, VMware, Silver Peak, Riverbed, Versa and Aryaka.

According to IDC, Cisco holds the largest share in the SD-WAN market. Cisco’s market share stood at 49.3% in 2017, down from 63.1% in 2016. In August 2017 Cisco purchased Viptela, which was one of the leading SD-WAN startups.

Cisco recently took a giant step in its SD-WAN development by adding Viptela’s SD-WAN technology to the Cisco IOS XE software that runs its core ISR/ASR routers. More than a million ISR/ASR edge routers, such as the ISR 1000 and 4000 models and the ASR 5000, are in use by organizations worldwide.

VMware, meanwhile, comes in second with a 10.4% SD-WAN market share in 2017, up from 7.8% in 2016, according to IDC, which noted that the company purchased VeloCloud in December 2017, when it was one of the larger pure-play SD-WAN startups.

“With VeloCloud now rolled into VMware, the company is working toward offering an integrated multi-cloud networking platform that spans NSX network virtualization in the data center(s) and extends out to VeloCloud in the branch and remote office, which the company has rebranded VMware NSX SD-WAN,” IDC said.

Javalin 2.0 supports WebJars web libraries, JSON modularization

Version 2.0 of Javalin, a lightweight framework for the Java and Kotlin languages, is now shipping.

Supporting HTTP/2 and async requests, Javalin 2.0 provides interoperability between Java and Kotlin and is intended to be simple to use.

Changes made since the Version 1.7 release in May include:

  • Support for WebJars client-side web libraries.
  • Modularization of JSON and template functionality, so developers can plug in their own mappers/rendering engines.
  • The addition of a CrudHandler to remove boilerplate from creating standard CRUD (create, read, update, delete) APIs.
  • Improved support for single-page applications.
  • Better exception handling for async requests.
  • Support for the pac4j security library.
  • Template functionality has been moved to a single function that uses the correct engine, based on file extension.
  • Rewritten WebSocket implementation.
  • Rewritten test suite.
  • A RequestLogger interface has been added.
  • Default values have been changed in some instances.
  • Functions now return lists instead of arrays.
  • Empty collections are returned instead of null.

Rather than being a full web framework, Javalin is a lightweight REST API library, or microframework. Although Javalin has no concept of MVC, its support of WebSockets, template engines, and static file-serving lets Javalin be used for building a RESTful API back end and serving index.html with static resources, if a developer is building a single-page application. To build a more traditional website, template engine wrappers can be used. The framework began as a fork of the Spark framework for Java and Kotlin but was rewritten, influenced by the Koa.js web framework.
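
For flavor, here is a minimal Javalin 2.0 endpoint in Kotlin, assuming io.javalin:javalin:2.0.0 (plus an SLF4J binding) on the classpath:

```kotlin
import io.javalin.Javalin

fun main() {
    // Start the embedded server and register a single REST endpoint.
    val app = Javalin.create().start(7000)
    app.get("/hello") { ctx -> ctx.result("Hello, Javalin 2.0") }
}
```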

GitLab 11.2 devops platform gets better Android, Jira, Hangouts integration

The Version 11.2 release of the GitLab devops platform enhances its Web IDE editor as well as Android support.

With its new client-side evaluation capability, the Web IDE lets developers preview a JavaScript web app and view changes in real time, so fixes can be tested before they are committed and experiments can be run safely. Developers can also contribute to an open source project with no need to clone it locally.

Powering client-side evaluation is the CodeSandbox online editor for web applications. The capability can be enabled for self-managed GitLab instances and is already enabled for projects on GitLab.com. Server-side evaluation, planned for later in 2018, will extend testing to Ruby applications and more.

GitLab 11.2 also includes:

  • For Android projects, support for XML manifest files enables import of larger project structures with multiple repositories. Previously, importing complex structures with multiple substructures was tedious and time-consuming. XML manifest files contain metadata for groups of repositories.
  • Improved search, in which project- and group-scoping have been removed from the search bar. This is intended to make search easier to use, as instances grow and projects and groups multiply and become hard to find.
  • Status messages now appear on the user’s personal profile, so collaborators can see each other’s status.
  • Issue board milestone lists, in which all issues assigned to a given milestone appear on a milestone list. Issues can be moved across different milestones.
  • A cloud-native GitLab chart for Helm, the Kubernetes package manager, is now out of beta.
  • Instance-wide custom project templates enable starting new projects quickly by automating repetitive setup tasks. Also, built-in project templates now use Dockerfiles instead of the Herokuish utility.
  • A capability to define whether any license should be approved or blacklisted for an application.
  • JUnit test results can be seen directly in the merge request widget.
  • Integration with Google Hangouts, with users able to receive a variety of GitLab events as notifications directly in Hangouts.
  • For teams using the Jira issue tracker, multiple Jira transition IDs are supported. GitLab can recognize multiple ways to close an issue.

VMware sharpens security focus with vSphere Platinum, ‘adaptive micro-segmentation’

VMware is expanding its security range with a new version of its virtualization software that has security integrated into the hypervisor.

“Our flagship VMware vSphere product now has AppDefense built right in,” VMware CEO Pat Gelsinger told the audience at VMworld 2018, which kicked off this week in Las Vegas. “Platinum will enable virtualization teams – you – to give an enormous contribution to the security profile of your enterprise.”

Announced one year ago, AppDefense is VMware’s data-center endpoint-security product, designed to protect applications running in virtualized environments. AppDefense uses machine learning and behavioral analytics to understand how an application is supposed to behave, and it detects threats by monitoring for changes to the application’s intended state.

The new Platinum edition combines vSphere’s native security capabilities with AppDefense. It’s designed to help vSphere administrators deliver more secure applications and infrastructure by enabling VMs to run in a “known good” state. With visibility into VM intent and application behavior, an enterprise can bolster its threat detection and response capabilities.

With AppDefense, “you can see whatever a VM is for – its purpose, its behavior – and tell the system that’s what it’s allowed to do, dramatically reducing the attack surface without impacting operations or performance. The capability is so powerful, so profound, we want you to be able to leverage it everywhere, and that’s why we’re building it directly into vSphere,” Gelsinger said.

“I call it the burger and fries. Nobody leaves the restaurant without fries. Who would possibly run a VM in the future without turning security on? That’s how we want this to work going forward.”

VMware vSphere Platinum Edition is expected to become available by early November.

In the big picture, VMware sees enterprises making a shift from point security tools to security that’s embedded in infrastructure. VMware is aiming its message of intrinsic security at enterprises that are grappling with increasing security threats and greater regulatory pressure to control risks.

VMware offers ‘adaptive micro-segmentation’

Along with unveiling vSphere Platinum, VMware also bolstered its micro-segmentation offering.

Micro-segmentation is a method of creating secure zones in data centers and cloud deployments that allows companies to isolate workloads from one another and secure them individually. The goal is to decrease the network attack surface: Enterprises can create policies that limit network and application flows between workloads to those that are explicitly permitted, reducing the risk of an attacker moving from one compromised workload or application to another.

VMware has been talking about micro-segmentation at the network level for about five years, and it’s a core element of VMware’s NSX networking and security platform. At VMworld, it took micro-segmentation a step further, announcing what it terms “adaptive micro-segmentation.”

Adaptive micro-segmentation brings segmentation up the stack from the network level to include the application layer, tying VMware’s network products – NSX and vRealize Network Insight for operations management – more closely together with AppDefense. Working together, the products can identify the composition and intended behavior of an application, align policy to the application, and lock down the workload and network elements of the application. As an application changes throughout its lifecycle, the combined technologies can automatically rework compute and network security policy to address application component changes.

“As powerful as micro-segmentation has been as an idea, we’re taking the next step with what we call adaptive micro-segmentation,” Gelsinger said. “We are fusing together AppDefense and vSphere with NSX to allow us to align the policies of the application through vSphere and the network. We can then lock down the network and compute, and enable this automation of the microsegment formation. Taken together: adaptive micro-segmentation.”

Kubeflow brings Kubernetes to machine learning workloads

Now in beta, the open source Kubeflow project aims to help deploy a machine learning stack on the Kubernetes container orchestration system.

The Kubeflow machine learning toolkit is intended to help deploy machine learning workloads across multiple nodes, a setting where breaking up and distributing a workload can add computational overhead and complexity. Kubernetes itself is tasked with making it easier to manage distributed workloads, while Kubeflow centers on making those workloads portable, scalable, and simple to run. Scripts and configuration files are part of the project: users can customize their configuration and run scripts to deploy containers to a chosen environment.

To help manage deployments, Kubeflow works with Version 0.11.0 or later of the Ksonnet framework for writing and deploying Kubernetes configurations to clusters. Kubernetes 1.8 or later is required, in a cluster configuration. Kubeflow also works with the following technologies:

  • TensorFlow machine learning models, which can be trained for use on premises or in the cloud.
  • Jupyter notebooks, to manage TensorFlow training jobs.
  • Seldon Core, a platform for deploying machine learning models on Kubernetes.

Kubeflow extends the Kubernetes API by adding custom resource definitions to a cluster, so Kubernetes can treat machine learning workloads as first-class citizens. Described by the open source project as cloud-native, Kubeflow also integrates with the Ambassador project for ingress and with Pachyderm for managing data science pipelines. Plans call for extending Kubeflow beyond TensorFlow, with support under consideration for the PyTorch and MXNet deep learning frameworks.
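
Because a TFJob is just a custom resource, a training job can be submitted with any Kubernetes client. The rough Kotlin sketch below uses the beta-era Kubernetes Java client (client-java 2.x package names); the job name, container image and replica count are illustrative, and the kubeflow.org/v1alpha2 API version should be adjusted to match the CRDs actually installed on the cluster.

    import io.kubernetes.client.Configuration
    import io.kubernetes.client.apis.CustomObjectsApi
    import io.kubernetes.client.util.Config

    fun main(args: Array<String>) {
        // Load credentials from the default kubeconfig (~/.kube/config).
        Configuration.setDefaultApiClient(Config.defaultClient())

        // A TFJob manifest expressed as nested maps; the spec is illustrative.
        val tfJob = mapOf(
            "apiVersion" to "kubeflow.org/v1alpha2",
            "kind" to "TFJob",
            "metadata" to mapOf("name" to "mnist-train"),
            "spec" to mapOf(
                "tfReplicaSpecs" to mapOf(
                    "Worker" to mapOf(
                        "replicas" to 2,
                        "template" to mapOf(
                            "spec" to mapOf(
                                "containers" to listOf(
                                    mapOf("name" to "tensorflow",
                                          "image" to "example.com/mnist:latest")
                                )
                            )
                        )
                    )
                )
            )
        )

        // TFJob is served by the generic custom-objects endpoint:
        // group "kubeflow.org", version "v1alpha2", plural "tfjobs".
        CustomObjectsApi().createNamespacedCustomObject(
            "kubeflow.org", "v1alpha2", "default", "tfjobs", tfJob, "true")
    }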

Top web browsers 2018: Chrome edges toward supermajority share

Google’s Chrome last month continued to creep up on a two-thirds supermajority of browser share, while Microsoft’s once-dominant position deteriorated. Again.

According to analytics company Net Applications, Chrome’s user share climbed half a percentage point in August, reaching 65.2%, an all-time high. In the last 12 months, Chrome has gained 5.9 percentage points, making it the only browser of the top four – the others being Apple’s Safari, Microsoft’s Edge and Internet Explorer (IE), and Mozilla’s Firefox – to add to its total during that period.

Net Applications calculates user share by detecting the agent strings of the browsers people use to visit its clients’ websites. The firm then tallies visitor sessions – effectively visits to the site, with multiple sessions possible daily – rather than counting only users, as it once did. Net Applications primarily measures activity, although it does so differently than rival sources that total page views.

If the trend of the last 12 months continues – a gain of roughly half a percentage point a month – Chrome needs only about 1.5 more points and will take the two-thirds prize in November. Barring any change in the browser battle, Chrome will account for 70% of the global share by June 2019.
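
The arithmetic behind those projections, as a back-of-the-envelope Kotlin sketch; the straight-line trend is an assumption that matches the article’s “barring any change” framing.

    fun main(args: Array<String>) {
        val august = 65.2              // Chrome's share in August 2018, per Net Applications
        val monthlyGain = 5.9 / 12     // ~0.49 points per month over the past year

        // Months until Chrome crosses two-thirds (66.7%): about 3, i.e. November.
        val monthsToTwoThirds = (66.7 - august) / monthlyGain
        println("Two-thirds share in about %.1f months".format(monthsToTwoThirds))

        // June 2019 is 10 months after August 2018: roughly 70%.
        println("Projected June 2019 share: %.1f%%".format(august + 10 * monthlyGain))
    }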

The only other browsers to have accumulated that much share since the web broke out of its academia-government ghetto in the 1990s were Netscape’s Navigator and Microsoft’s IE. The former faded under assault from the latter, vanishing for good in early 2008; IE is following in its one-time rival’s footsteps.