

Ingredients for Amazon’s HQ2 all available in St. Louis

There’s a whisper-quiet rumor making the rounds that Amazon is looking to establish a second headquarters outside of its initial digs in Seattle. It’s even got a name — HQ2 — and, in reality, Amazon’s announcement was a thunderous roar.

Amazon is one of the premier global brands to which any person or city would love to hitch its wagon; when CEO Jeff Bezos sneezes, people examine the tissue for opportunity. Thus, economic development officials nationwide are foaming at the mouth as they prepare to put their best foot forward, and pundits are speculating about choices like Denver, Boston, Chicago and Austin.

In the offing are some 50,000 new jobs, deep organizational investments in infrastructure, thousands of relocating smart minds, high wages, and residual economic benefits like new home sales, wage taxes, millions upon millions spent with regional retailers, charitable impacts and hundreds of other companies that will establish a presence to feed off of Amazon. Plus, as Bezos said, “…billions of dollars in up-front and ongoing investments.” The prospective impact is remarkable and meaningful, with the potential to transform a region.

The big question is: Where? My unabashedly biased answer: St. Louis.

Yes, that St. Louis, in Missouri. The one I once wrote did “not suck” in Forbes. It’s where the Cardinals win a lot of baseball games and the Gateway Arch and Energizer Bunny live, and yes, the region that admittedly grapples with the same set of complex human challenges that nearly every other major metropolitan area in the U.S. is working to address today.

Why should Amazon look to St. Louis? There is no single selling point, nor should there be if Amazon — or, for that matter, any other company — is considering it. Rather, the choice revolves around broad-based criteria, as anyone working in economic development and site selection understands.

Where to begin? Although legacy isn’t particularly important for practical purposes, let’s start there, as St. Louis’ run of innovation and corporate success over the past two centuries is nothing short of remarkable.

Anheuser-Busch birthed the global beer industry and Purina the worldwide pet food market; Energizer batteries and the largest car rental company, Enterprise, launched in St. Louis; and say what you want about GMO crops depending on your politics, but St. Louis-based Monsanto has sustained a generation of food production while global farmland shrinks and worldwide population numbers soar.

What have you done for me lately? Jim McKelvey and some anonymous guy from St. Louis named Jack Dorsey founded the Square payments system — not in the Bay Area, but in St. Louis; Mercy is revolutionizing virtual healthcare here, and Washington University’s work on the Human Genome Project has been hailed as groundbreaking.


But what about tech? Amazon, of course, is a tech giant, and one would imagine that being a “tech town” is important. St. Louis was once corporate America’s manufacturing backyard, but through strategic planning initiatives first launched at the start of the new millennium, the region has evolved into a vibrant hotbed of technology, as well as a cradle of entrepreneurship over the past decade. But don’t take my word for it — Forbes’ Christopher Steiner cited St. Louis as “The Right Way To Build a Tech City,” and the city was ranked the top startup city by Popular Mechanics in 2015, the fastest-growing startup city by Business Insider and the “new startup frontier” by FiveThirtyEight in 2016.

What’s ironically lost relative to what Amazon is, and is not, revolves around perhaps the most boring moniker in business: logistics. That’s right — at its core, Amazon is simply a logistics company. And guess what? St. Louis is a logistics town — total ballers in moving goods. The region sits in the middle of the U.S., at the crossroads of international air, rail, interstate and major river routes; within 500 miles of one-third of the U.S. population and within 1,500 miles of 90 percent of the people in North America.

And beyond the stereotypical perspective of transportation, let’s not forget the softer side of the modern worker’s everyday life — public transit — which is a key component for Amazon employees. Millennial workers today are less interested in driving to and from work and more interested in hopping a train to work as well as play. Thus, light rail is no frivolous criterion, and it just so happens that St. Louis has one, offering convenient service throughout the city, into nearby suburbs and to the airport.

What about human assets, or more plainly, a strong native workforce?

From a potential standpoint, the region has more than 30 colleges and universities enrolling approximately 120,000 students, with options ranging from community and technical colleges to the esteemed Saint Louis University and the nationally recognized and prestigious Washington University in St. Louis. Plus, St. Louis has a strong pipeline of talent from one of the finest software engineering programs in the U.S. — the University of Illinois at Urbana-Champaign.

From a more present-day perspective, the region is a hotbed of smart minds. For example, it has more plant science PhDs than anywhere else in the world thanks to the presence of Monsanto, the Donald Danforth Plant Science Center and Washington University, among others. The region also boasts one of the largest STEM workforces in the U.S., with more than 80,000 people employed in these highly specialized occupations. Additionally, metro St. Louis has cultivated a bioscience machine, as noted in the Initiative for a Competitive Inner City report, “Building Strong Clusters for Strong Urban Economies.” The organization cited St. Louis’ BioSTL bioscience economic development coalition — established in 2001 to build on plant and life science strengths — as a model for cities endeavoring to create economic growth through regional strengths.


How about life beyond the day-to-day grind of work? Good news — the region’s cultural assets are rich and diverse. St. Louis boasts one of the largest urban parks in the U.S. (Forest Park), amazing theaters like the historic Fabulous Fox and Muny theaters, The National Blues Museum, a strong live music scene highlighted by the annual LouFest music festival, the insane City Museum, the Magic House children’s museum, a vast collection of historic architecture, phenomenal art and history museums and, of course, the obligatory reference to the Gateway Arch.

Plus, St. Louis couldn’t exist without baseball and beer, and has not only been widely hailed as America’s best baseball town, but also is an unquestioned beer town — built on the backs of Anheuser-Busch — and now boasts one of the most respected craft beer communities in the U.S. led by Schlafly and Urban Chestnut, among other stellar microbrews.

And while you never lead a sale with price, let’s finish with it. In reality, you can’t sing karaoke in a public park in San Francisco, New York, Chicago, DC or Miami without it somehow costing you $145 per cubic note.

Good thing St. Louis’ median home value, at $164,200, is well below the U.S. median of $194,500, according to the U.S. Census Bureau’s 2015 American Community Survey data. But what does that really get you? Certainly more than your $2,000-per-month 400-square-foot efficiency in Manhattan.

The Council for Community and Economic Research (ACCRA) Cost of Living Index, which measures the relative cost of living across U.S. metropolitan areas, prices a new 2,400-square-foot, four-bedroom, two-bath home with an attached two-car garage, suitable for a management household. That index put the average St. Louis-area home price at $214,260 for 2016, versus a U.S. metro average of $326,999 for the same period.

Jeff Bezos, Amazon decision makers, even Santa Claus — are you paying attention? It’s all right there in front of you. The smart choice. A community with a certain hustle in its step and the red carpet laid out for you — that’s St. Louis. Whether it’s smart minds, a welcoming business environment, cultural assets, a legacy of innovation, tech talent or even cost of living — this region has all the ingredients Amazon needs to make HQ2 a success. Let’s get cooking, together.

Featured Image: Danita Delimont/Getty Images



The long Cocky-gate nightmare is over

I’ve been wanting to write about Cocky-gate for some time now, but the story – a row between self-published authors that degenerated into ridiculousness – finally seems to be over, and perhaps we can all get some perspective. The whole thing started in May, when a self-published romance author, Faleena Hopkins, began attempting to enforce a trademark on books that contained “cocky” in the title. This included, but was not limited to, Cocky Cowboy, Cocky Biker and Cocky Roomie, all titles in Hopkins’ oeuvre.

Hopkins had filed a trademark registration for the use of the word “cocky” in romance titles and began going after other authors who used it, including Jamila Jasper, who wrote a book called Cocky Cowboy and received a warning email from Hopkins.

After taking up the cause on Twitter and creating a textbook example of the Streisand Effect, Jasper changed the title of her book to The Cockiest Cowboy To Have Ever Cocked. But other authors were hit with cease-and-desist letters, and even Amazon briefly stepped in, taking down multiple titles for a short time.

From the Guardian:

Pajiba reported on Monday that the author Nana Malone had been asked to change the title of her novel Mr Cocky, while TL Smith and Melissa Jane’s Cocky Fiancé has been renamed Arrogant Fiancé. Other writers claimed that Hopkins had reported them to Amazon, resulting in their books being taken down from the site.

This went on for a number of weeks, with the back and forth ranging from the comical to the serious.

Hopkins went to court to defend her trademark and then bumped up against the powerful Authors Guild, which supported three defendants, including a publicist who was incorrectly named as the publisher of one of the offending titles, The Cocktales Anthology.

“Beyond the obvious issues with the merits, it is evident from the face of the complaint that Plaintiffs failed to conduct a reasonable pre-filing investigation before racing to the courthouse. Indeed, the number and extent of defects alone call into question whether the filing was made in good faith. Plaintiffs’ lack of due diligence failed to uncover the stark difference between a publisher and a publicist, i.e., non-party best-selling author Penny Reid is the former, while Defendant Jennifer Watson is the latter (Ms. Watson’s website even states that she provides “publicist and marketing services” and nowhere indicates that she writes or publishes books),” wrote Judge Alvin Hellerstein of the Southern District of New York. “In sum, there is nothing meritorious about Plaintiffs’ situation, let alone urgent or irreparable. Defendant Watson cannot offer Plaintiffs the relief they seek as she bears no responsibility for The Cocktales Anthology they wish to enjoin from further publication. Defendant Crescent’s first allegedly infringing book was published over nine months ago. Plaintiffs have admitted that her use of “cocky” in titles would not likely cause confusion as to source or affiliation; moreover, she has publicly stated that she has not suffered lost sales.”

Online communities are wonderful but precarious things. One or two attacks by bad – or even well-meaning – actors can tip them over the edge and ruin them for everyone. In fact, Cocky-gate has encouraged other authors to try this tactic. One writer, Michael-Scott Earle, has attempted to trademark the phrase “Dragon Slayer” in book titles, and there is now a Twitter bot that hunts for USPTO trademark applications on words in titles.

Now that the cocky has been freed, however, it looks like the romance writers of the world are taking advantage of the opportunity to share their own cocky stories.



After twenty years of Salesforce, what Marc Benioff got right and wrong about the cloud

As we enter the 20th year of Salesforce, there’s an interesting opportunity to reflect on the change that Marc Benioff created with the software-as-a-service (SaaS) model for enterprise software with his launch of Salesforce.com.

This model has been validated by the annual revenue stream of SaaS companies, which is fast approaching $100 billion by most estimates, and it will likely continue to transform many slower-moving industries for years to come.

However, for the cornerstone market in IT — large enterprise-software deals — SaaS represents less than 25 percent of total revenue, according to most market estimates. This split is even evident in the most recent high-profile “SaaS” acquisition of GitHub by Microsoft, with over 50 percent of GitHub’s revenue coming from the sale of its on-prem offering, GitHub Enterprise.

Data privacy and security are also becoming major issues, with Benioff himself even pushing for a U.S. privacy law on par with GDPR in the European Union. While consumer data is often the focus of such discussions, it’s worth remembering that SaaS providers store and process an incredible amount of personal data on behalf of their customers, and the content of that data goes well beyond email addresses for sales leads.

It’s time to reconsider the SaaS model in a modern context, integrating developments of the last nearly two decades so that enterprise software can reach its full potential. More specifically, we need to consider the impact of IaaS and “cloud-native computing” on enterprise software, and how they’re blurring the lines between SaaS and on-premises applications. As the world around enterprise software shifts and the tools for building it advance, do we really need such stark distinctions about what can run where?


The original cloud software thesis

In his book, Behind the Cloud, Benioff lays out four primary reasons for the introduction of the cloud-based SaaS model:

  1. Realigning vendor success with customer success by creating a subscription-based pricing model that grows with each customer’s usage (providing the opportunity to “land and expand”). Previously, software licenses often cost millions of dollars and were paid upfront, each year after which the customer was obligated to pay an additional 20 percent for support fees. This traditional pricing structure created significant financial barriers to adoption and made procurement painful and elongated.
  2. Putting software in the browser to kill the client-server enterprise software delivery experience. Benioff recognized that consumers were increasingly comfortable using websites to accomplish complex tasks. By utilizing the browser, Salesforce avoided the complex local client installation and allowed its software to be accessed anywhere, anytime and on any device.
  3. Sharing the cost of expensive compute resources across multiple customers by leveraging a multi-tenant architecture. This ensured that no individual customer needed to invest in expensive computing hardware required to run a given monolithic application. For context, in 1999 a gigabyte of RAM cost about $1,000 and a TB of disk storage was $30,000. Benioff cited a typical enterprise hardware purchase of $385,000 in order to run Siebel’s CRM product that might serve 200 end-users.
  4. Democratizing the availability of software by removing the installation, maintenance and upgrade challenges. Drawing from his background at Oracle, he cited experiences where it took 6-18 months to complete the installation process. Additionally, upgrades were notorious for their complexity and caused significant downtime for customers. Managing enterprise applications was a very manual process, generally with each IT org becoming the ops team executing a physical run-book for each application they purchased.
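The pricing contrast in point 1 is easy to sketch in code. The figures below are purely hypothetical, chosen only to illustrate the shape of the two cost curves; they are not drawn from Benioff’s book:

```python
def license_cost(upfront, support_rate, years):
    """Traditional model: pay the full license upfront, then an annual
    support fee (e.g. 20% of the license price) every year after."""
    return upfront + upfront * support_rate * (years - 1)

def subscription_cost(per_seat_per_month, seats, years):
    """SaaS model: pay only for seats actually in use, month by month."""
    return per_seat_per_month * seats * 12 * years

# Hypothetical figures for a 200-seat deployment over 3 years.
traditional = license_cost(upfront=1_000_000, support_rate=0.20, years=3)
saas = subscription_cost(per_seat_per_month=65, seats=200, years=3)
print(traditional)  # 1400000.0
print(saas)         # 468000
```

The point isn’t the specific totals; it’s that the subscription model turns a multimillion-dollar upfront barrier into an operating expense that scales with actual usage.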

These arguments also happen to be, more or less, the same ones made by infrastructure-as-a-service (IaaS) providers such as Amazon Web Services during their early days in the mid-to-late ’00s. However, IaaS adds value at a layer deeper than SaaS, providing the raw building blocks rather than the end product. The result of their success in renting cloud computing, storage and network capacity has been many more SaaS applications than ever would have been possible if everybody had to follow the model Salesforce did several years earlier.

Suddenly able to access computing resources by the hour—and free from large upfront capital investments or having to manage complex customer installations—startups forsook software for SaaS in the name of economics, simplicity and much faster user growth.

Source: Getty Images

It’s a different IT world in 2018

Fast-forward to today, and in some ways it’s clear just how prescient Benioff was in pushing the world toward SaaS. Of the four reasons laid out above, Benioff nailed the first two:

  • Subscription is the right pricing model: The subscription pricing model for software has proven to be the most effective way to create customer and vendor success. Stalwart products like Microsoft Office and the Adobe suite long ago made the successful switch from the upfront model to thriving subscription businesses. Today, subscription pricing is the norm for many flavors of software and services.
  • Better user experience matters: Software accessed through the browser or thin, native mobile apps (leveraging the same APIs and delivered seamlessly through app stores) have long since become ubiquitous. The consumerization of IT was a real trend, and it has driven the habits from our personal lives into our business lives.

In other areas, however, things today look very different than they did back in 1999. In particular, Benioff’s other two primary reasons for embracing SaaS no longer seem so compelling. Ironically, IaaS economies of scale (especially once Google and Microsoft began competing with AWS in earnest) and software-development practices developed inside those “web scale” companies played major roles in spurring these changes:

  • Computing is now cheap: The cost of compute and storage have been driven down so dramatically that there are limited cost savings in shared resources. Today, a gigabyte of RAM is about $5 and a terabyte of disk storage is about $30 if you buy them directly. Cloud providers give away resources to small users and charge only pennies per hour for standard-sized instances. By comparison, at the same time that Salesforce was founded, Google was running on its first data center—with combined total compute and RAM comparable to that of a single iPhone X. That is not a joke.
  • Installing software is now much easier: The process of installing and upgrading modern software has become automated with the emergence of continuous integration and deployment (CI/CD) and configuration-management tools. With the rapid adoption of containers and microservices, cloud-native infrastructure has become the de facto standard for local development and is becoming the standard for far more reliable, resilient and scalable cloud deployment. Enterprise software packed as a set of Docker containers orchestrated by Kubernetes or Docker Swarm, for example, can be installed pretty much anywhere and be live in minutes.
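To make the “live in minutes” claim concrete, here is a minimal sketch of the kind of Kubernetes manifest involved. The app name and container image are hypothetical; any conformant cluster would accept a manifest like this via `kubectl apply -f deploy.yaml`:

```yaml
# Hypothetical deployment of a containerized enterprise app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-enterprise-app        # illustrative name
spec:
  replicas: 3                         # three identical instances for resilience
  selector:
    matchLabels:
      app: example-enterprise-app
  template:
    metadata:
      labels:
        app: example-enterprise-app
    spec:
      containers:
      - name: app
        image: example.com/enterprise-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```

The same manifest works against a public cloud, a private data center or a laptop running a local cluster, which is exactly the portability argument being made here.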

Source: Getty Images/ERHUI1979

What Benioff didn’t foresee

Several other factors have also emerged in the last few years that raise the question of whether the traditional definition of SaaS can really be the only one going forward. Here, too, there’s irony in the fact that many of the forces pushing software back toward self-hosting and management can be traced directly to the success of SaaS itself, and of cloud computing in general:

  1. Cloud computing can now be “private”: Virtual private clouds (VPCs) in the IaaS world allow enterprises to maintain root control of the OS, while outsourcing the physical management of machines to providers like Google, DigitalOcean, Microsoft, Packet or AWS. This allows enterprises (like Capital One) to relinquish hardware management and the headache it often entails, but retain control over networks, software and data. It is also far easier for enterprises to get the necessary assurance for the security posture of Amazon, Microsoft and Google than it is to get the same level of assurance for each of the tens of thousands of possible SaaS vendors in the world.
  2. Regulations can penalize centralized services: One of the underappreciated consequences of Edward Snowden’s leaks, as well as an awakening to the sometimes questionable data-privacy practices of companies like Facebook, is an uptick in governments and enterprises trying to protect themselves and their citizens from prying eyes. Using applications hosted in another country or managed by a third party exposes enterprises to a litany of legal issues. The European Union’s GDPR law, for example, exposes SaaS companies to more potential liability with each piece of EU-citizen data they store, and puts enterprises on the hook for how their SaaS providers manage data.
  3. Data breach exposure is higher than ever: A corollary to the point above is the increased exposure to cybercrime that companies face as they build out their SaaS footprints. All it takes is one employee at a SaaS provider clicking on the wrong link or installing the wrong Chrome extension to expose that provider’s customers’ data to criminals. If the average large enterprise uses 1,000+ SaaS applications and each of those vendors averages 250 employees, that’s an additional 250,000 possible points of entry for an attacker.
  4. Applications are much more portable: The SaaS revolution has resulted in software vendors developing their applications to be cloud-first, but they’re now building those applications using technologies (such as containers) that can help replicate the deployment of those applications onto any infrastructure. This shift to what’s called cloud-native computing means that the same complex applications you can sign up to use in a multi-tenant cloud environment can also be deployed into a private data center or VPC far more easily than previously possible. Companies like BigID, StackRox, Dashbase and others are taking a private, cloud-native-instance-first approach to their application offerings. Meanwhile, SaaS stalwarts like Atlassian, Box, GitHub and many others are transitioning to Kubernetes-driven, cloud-native architectures that provide this optionality in the future.
  5. The script got flipped on CIOs: Individuals and small teams within large companies now drive software adoption by selecting the tools (e.g., GitHub, Slack, HipChat, Dropbox), often SaaS, that best meet their needs. Once they learn what’s being used and how it’s working, CIOs are faced with the decision to either restrict network access to shadow IT or pursue an enterprise license—or the nearest thing to one—for those services. This trend has been so impactful that it spawned an entirely new category called cloud access security brokers—another vendor that needs to be paid, an additional layer of complexity, and another avenue for potential problems. Managing local versions of these applications brings control back to the CIO and CISO.
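The attack-surface arithmetic in point 3 above can be checked in a couple of lines (the inputs are the article’s illustrative estimates, not measurements):

```python
# Back-of-the-envelope attack-surface estimate.
saas_apps = 1000              # SaaS applications used by a large enterprise
employees_per_vendor = 250    # average headcount at each SaaS vendor
entry_points = saas_apps * employees_per_vendor
print(entry_points)  # 250000
```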

Source: Getty Images/MIKIEKWOODS

The future of software is location agnostic

As the pace of technological disruption picks up, the previous generation of SaaS companies is facing a future similar to the legacy software providers they once displaced. From mainframes up through cloud-native (and even serverless) computing, the goal for CIOs has always been to strike the right balance between cost, capabilities, control and flexibility. Cloud-native computing, which encompasses a wide variety of IT facets and often emphasizes open source software, is poised to deliver on these benefits in a manner that can adapt to new trends as they emerge.

The problem for many of today’s largest SaaS vendors is that they were founded and scaled out during the pre-cloud-native era, meaning they’re burdened by some serious technical and cultural debt. If they fail to make the necessary transition, they’ll be disrupted by a new generation of SaaS companies (and possibly traditional software vendors) that are agnostic toward where their applications are deployed and who applies the pre-built automation that simplifies management. This next generation of vendors will put more control in the hands of end customers (who crave control), while maintaining what vendors have come to love about cloud-native development and cloud-based resources.

So, yes, Marc Benioff and Salesforce were absolutely right to champion the “No Software” movement over the past two decades, because the model of enterprise software they targeted needed to be destroyed. In the process, however, Salesforce helped spur a cloud computing movement that would eventually rewrite the rules on enterprise IT and, now, SaaS itself.



Amazon now lets you share your custom skills made with Alexa Blueprints

Earlier this year, Amazon introduced “Alexa Blueprints” – a way for anyone to create their own customized Alexa skills for personal use, without needing to know how to code. Today, the company will allow those skills to be shared with others, including through text messages, email, messaging apps like WhatsApp, or social media platforms like Facebook, Twitter and Pinterest.

The idea is that you could create a skill for your friends or family to use, to save them the work of having to edit Amazon’s provided templates with your own content. Amazon suggests the new sharing feature could be used among study groups, who have built a custom “flashcards” skill, for example, or shared among family for a birthday. (Presumably, the skill is part of your present?)

Blueprints, so far, have been a fun way to play around with Alexa in your home, teaching it to respond to questions like “who’s the best mom?” (me, of course), creating lists of your family’s favorite jokes, playing customized trivia games, and more.

But adoption has been fairly limited – it’s a neat trick, but not a must-have for all Alexa users.

The skills themselves are simple to build: Amazon provides templates, which are basically filled in and ready to go, but you change the answers to suit your needs.

Now, when you’re viewing the list of skills you’ve made, you can toggle a skill’s status under the “Access” section to either “just me” or “shared.” (You can un-share a skill at any time, too, by choosing “revoke.”)

Sharing creates a link to your skill that you can paste into a text message, email, social media post, or anywhere else. When a recipient clicks the link, they’re taken to the Alexa Blueprints site, where they can enable the skill for themselves.

While this could make it easier for people to use Blueprints, it would be interesting if there were a way to share skills more publicly, too – currently, you can’t publish skills to the Alexa Skills Store, as they’re meant for personal use. But some sort of community section for Alexa owners within the Blueprints site itself could be compelling – maybe you could share your own Blueprint templates there, or ask others to collaborate on creating one with you.

Imagine a Blueprint trivia game built by all the fans of a favorite TV show, for instance. Or maybe you could share a set of Blueprints with your extended family, since, you know, you’re “the techie one.”

Of course, it’s hard to justify investing in a project that still has a niche audience – but on the other hand, building a community of Alexa owners around homegrown skills could prompt that audience to grow, and help inform professional developers about what kinds of skills people really want.

Alexa Blueprints are free to use.

