
Timesdelhi.com

April 21, 2019
Category archive: machine learning

Get ready for a new era of personalized entertainment


New machine learning technologies, user interfaces and automated content creation techniques are going to expand the personalization of storytelling beyond algorithmically generated news feeds and content recommendation.

The next wave will be software-generated narratives that are tailored to the tastes and sentiments of a consumer.

Concretely, it means that your digital footprint, personal preferences and context unlock alternative features in the content itself, be it a news article, live video or a hit series on your streaming service.

The same title contains different experiences for different people.

From smart recommendations to smarter content

When you use YouTube, Facebook, Google, Amazon, Twitter, Netflix or Spotify, algorithms select what gets recommended to you. The current mainstream services and their user interfaces and recommendation engines have been optimized to serve you content you might be interested in.

Your data, other people’s data, content-related data and machine learning methods are used to match people and content, thus improving the relevance of content recommendations and efficiency of content distribution.

However, so far the content experience itself has been mostly the same for everyone. If the same news article, live video or TV series episode gets recommended to you and me, we both read and watch the same thing, experiencing the same content.

That’s about to change. Soon we’ll be seeing new forms of smart content, in which user interface, machine learning technologies and content itself are combined in a seamless manner to create a personalized content experience.

What is smart content?

Smart content means that the content experience itself is affected by who is seeing, watching, reading or listening to the content. The content itself changes based on who you are.

We are already seeing the first forerunners in this space. TikTok’s whole content experience is driven by very short videos, audiovisual content sequences if you will, ordered and woven together by algorithms. Every user sees a different, personalized “whole” based on her viewing history and user profile.

At the same time, Netflix has recently started testing new forms of interactive content (TV series episodes, e.g. Black Mirror: Bandersnatch) in which the user’s own choices directly affect the content experience, including dialogue and storyline. And more is on its way. With the series Love, Death & Robots, Netflix is experimenting with episode order, serving the episodes in a different order to different users.

Earlier predecessors of interactive audiovisual content include sports-event streaming, in which the user can decide which particular stream she follows and how she interacts with the live content, for example by rewinding the stream and spotting key moments based on her own interests.

Simultaneously, we’re seeing how machine learning technologies can be used to create photo-like images of imaginary people, creatures and places. Current systems can recreate and alter entire videos, for example by changing the style, scenery, lighting, environment or central character’s face. Additionally, AI solutions are able to generate music in different genres.

Now imagine that TikTok’s individual short videos were automatically personalized with effects chosen by an AI system, so that the whole video stream was customized for you. Or that the choices in Netflix’s interactive content affecting the plot twists, dialogue and even the soundtrack were made automatically by algorithms based on your profile.

Personalized smart content is coming to news as well. Automated systems, using today’s state-of-the-art NLP technologies, can generate long pieces of concise, comprehensible and even inventive textual content at scale. At present, media houses use automated content creation systems, or “robot journalists”, to create news material ranging from complete articles to audio-visual clips and visualizations. Through content atomization (breaking content into small modular chunks of information) and machine learning, content production can be scaled up massively to support smart content creation.
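To give a flavor of how such systems start out, here is a minimal, hypothetical sketch of template-driven news generation in Python. The company name, figures and wording rules are invented; real robot journalists layer far more sophisticated NLG and editorial rules on top of this basic idea:

```python
# Minimal sketch of template-driven "robot journalism": structured data
# in, a short news brief out. All names and figures below are invented.
def earnings_brief(company, quarter, revenue_musd, change_pct):
    trend = "rose" if change_pct > 0 else "fell"
    return (f"{company} reported revenue of ${revenue_musd:.1f}M for "
            f"{quarter}, which {trend} {abs(change_pct):.1f}% year over year.")

print(earnings_brief("Acme Corp", "Q1 2019", 125.4, 7.2))
# -> Acme Corp reported revenue of $125.4M for Q1 2019, which rose 7.2% year over year.
```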

Say that a news article you read or listen to is about a political topic unfamiliar to you. Your version of the story might use different concepts and offer a different angle than the version served to a friend who is deep into politics. A beginner’s smart content news experience would differ from that of a topic enthusiast.

Content itself will become a software-like fluid and personalized experience, where your digital footprint and preferences affect not just how the content is recommended and served to you, but what the content actually contains.

Automated storytelling?

How is it possible to create smart content that contains different experiences for different people?

Content needs to be thought of and treated as an iterative, configurable process rather than a ready-made, static whole that is finished the moment it enters the distribution pipeline.

Importantly, the core building blocks of the content experience change: smart content consists of atomized, modular elements that can be modified, updated, remixed, replaced, omitted and activated based on varying rules. In addition, content modules made in the past can be reused where applicable. Content is designed and developed more like software.
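As a concrete illustration, here is a minimal Python sketch of that modular idea. The module and profile fields are invented for the example; a production system would drive the selection with learned models rather than the hand-written rules below:

```python
from dataclasses import dataclass, field

@dataclass
class ContentModule:
    """One atomized chunk of a story: a paragraph, clip or infographic."""
    body: str
    topics: set = field(default_factory=set)  # e.g. {"politics"}
    min_expertise: int = 0                    # 0 = novice ... 2 = enthusiast

@dataclass
class UserProfile:
    interests: set
    expertise: int  # crude 0-2 score, e.g. inferred from reading history

def assemble(modules, user):
    """Activate or omit modules by simple rules; a real system would
    replace these hand-written conditions with learned models."""
    return "\n\n".join(
        m.body for m in modules
        if m.min_expertise <= user.expertise
        and (not m.topics or m.topics & user.interests)
    )

article = [
    ContentModule("What a trade tariff is, in plain terms."),
    ContentModule("Full timeline of the policy debate.", {"politics"}, 2),
    ContentModule("How the decision affects consumer prices."),
]
# A novice sees the explainers; a politics enthusiast also gets the timeline.
print(assemble(article, UserProfile(interests={"politics"}, expertise=0)))
print(assemble(article, UserProfile(interests={"politics"}, expertise=2)))
```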

Currently a significant amount of human effort and computing resources are used to prepare content for machine-powered content distribution and recommendation systems, varying from smart news apps to on-demand streaming services. With smart content, the content creation and its preparation for publication and distribution channels wouldn’t be separate processes. Instead, metadata and other invisible features that describe and define the content are an integral part of the content creation process from the very beginning.

Turning Donald Glover into Jay Gatsby

With smart content, the narrative or image itself becomes an integral part of an iterative feedback loop, in which the user’s actions, emotions and other signals as well as the visible and invisible features of the content itself affect the whole content consumption cycle from the content creation and recommendation to the content experience. With smart content features, a news article or a movie activates different elements of the content for different people.

It’s very likely that smart content for entertainment purposes will have different features and functions than news media content. Moreover, people expect a frictionless and effortless content experience, which distinguishes smart content from games: it doesn’t necessarily require direct actions from the user. If the user wants, personalization happens proactively and automatically, without explicit interaction.

Creating smart content requires both human curation and machine intelligence. Humans focus on things that require creativity and deep analysis while AI systems generate, assemble and iterate the content that becomes dynamic and adaptive just like software.

Sustainable smart content

Smart content has different configurations and representations for different users, user interfaces, devices, languages and environments. The same piece of content contains elements that can be accessed through voice user interface or presented in augmented reality applications. Or the whole content expands into a fully immersive virtual reality experience.

In the same way as with personalized user interfaces and smart devices, smart content can be used for good and bad. It can be used to enlighten and empower, as well as to trick and mislead. Thus it’s critical that a human-centered approach and sustainable values are built into the very core of smart content creation. Personalization needs to be transparent, and the user needs to be able to choose whether the content is personalized at all. And of course, not all content will be smart in the same way, if at all.

If used in a sustainable manner, smart content can break filter bubbles and echo chambers, as it can be used to make a wide variety of information more accessible to diverse audiences. Through personalization, challenging topics can be presented to people according to their abilities and preferences, regardless of their background or level of education. For example, a beginner’s version of vaccination content or a digital media literacy article might use gamification elements, while a more experienced user gets a thorough, fact-packed account of recent developments and research results.

Smart content is also aligned with efforts against today’s information operations, such as fake news and its variants like “deep fakes” (http://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes). If content is like software, legitimate software runs on your devices and interfaces without a problem. Machine-generated, realistic-looking but suspicious content, like a deep fake, can by contrast be detected and filtered out based on its signature and other machine-readable qualities.


Smart content is the ultimate combination of user experience design, AI technologies and storytelling.

News media should be among the first to start experimenting with smart content. When intelligent content starts eating the world, one should be creating one’s own intelligent content.

The first players to master smart content will be among tomorrow’s reigning digital giants. And that’s one of the main reasons today’s tech titans are going so seriously into the content game. Smart content is coming.

News Source = techcrunch.com

OpenStack Stein launches with improved Kubernetes support


The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

Unsurprisingly, a lot of that development activity focused on Kubernetes and the tools to manage these container clusters. With this release, the team behind the OpenStack Kubernetes installer brought the launch time for a cluster down from about 10 minutes to five, regardless of the number of nodes. To further enhance Kubernetes support, OpenStack Stein also includes updates to Neutron, the project’s networking service, which now makes it easier to create virtual networking ports in bulk as containers are spun up, and Ironic, the bare-metal provisioning service.
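For a sense of what this looks like from the operator’s side, here is a hedged sketch using the openstacksdk Python client. It assumes a configured clouds.yaml entry named “mycloud” and an existing network called “container-net” (both hypothetical names), and it loops single create_port calls rather than the new bulk endpoint, which keeps the example portable across releases:

```python
# Hedged sketch with openstacksdk: create one Neutron port per container
# being spun up. "mycloud" and "container-net" are hypothetical names.
import openstack

conn = openstack.connect(cloud="mycloud")
network = conn.network.find_network("container-net")

ports = [
    conn.network.create_port(network_id=network.id, name=f"container-port-{i}")
    for i in range(10)  # one port per container
]
print([p.id for p in ports])
```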

All of that is no surprise, given that according to the project’s latest survey, 61 percent of OpenStack deployments now use both Kubernetes and OpenStack in tandem.

The update also includes a number of new networking features that are mostly targeted at the many telecom users. Indeed, over the course of the last few years, telcos have emerged as some of the most active OpenStack users as these companies are looking to modernize their infrastructure as part of their 5G rollouts.

Besides the expected updates, though, there are also a few new and improved projects here that are worth noting.

“The trend from the last couple of releases has been on scale and stability, which is really focused on operations,” OpenStack Foundation executive director Jonathan Bryce told me. “The new projects — and really most of the new projects from the last year — have all been pretty oriented around real-world use cases.”

The first of these is Placement. “As people build a cloud and start to grow it and it becomes more broadly adopted within the organization, a lot of times, there are other requirements that come into play,” Bryce explained. “One of these things that was pretty simplistic at the beginning was how a request for a resource was actually placed on the underlying infrastructure in the data center.” But as users get more sophisticated, they often want to run specific workloads on machines with certain hardware requirements. These days, that’s often a specific GPU for a machine learning workload, for example. With Placement, that’s a bit easier now.

It’s worth noting that OpenStack had some of this functionality before. The team, however, decided to uncouple it from the existing compute service and turn it into a more generic service that could then also be used more easily beyond the compute stack, turning it more into a kind of resource inventory and tracking tool.
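To illustrate the resource-inventory idea (and only the idea; this is not the Placement REST API), here is a toy Python model in which providers advertise inventories and traits and a request is matched against both. The resource-class and trait names are illustrative:

```python
# Toy model of the inventory-and-traits matching behind Placement.
# Not the actual Placement API; names like CUSTOM_ML_GPU are invented.
from dataclasses import dataclass, field

@dataclass
class Provider:
    name: str
    inventory: dict                      # e.g. {"VCPU": 16, "MEMORY_MB": 65536}
    traits: set = field(default_factory=set)
    used: dict = field(default_factory=dict)

    def can_fit(self, request, required_traits):
        return required_traits <= self.traits and all(
            self.used.get(r, 0) + amount <= self.inventory.get(r, 0)
            for r, amount in request.items()
        )

    def allocate(self, request):
        for r, amount in request.items():
            self.used[r] = self.used.get(r, 0) + amount

def place(providers, request, required_traits=frozenset()):
    """Return the first provider that satisfies the request, or None."""
    for p in providers:
        if p.can_fit(request, required_traits):
            p.allocate(request)
            return p
    return None

providers = [
    Provider("cpu-node-1", {"VCPU": 16, "MEMORY_MB": 65536}),
    Provider("gpu-node-1", {"VCPU": 16, "MEMORY_MB": 65536, "VGPU": 2},
             traits={"CUSTOM_ML_GPU"}),
]
# An ML workload that needs a GPU lands on the GPU node.
chosen = place(providers, {"VCPU": 4, "MEMORY_MB": 8192, "VGPU": 1},
               required_traits={"CUSTOM_ML_GPU"})
print(chosen.name)  # -> gpu-node-1
```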

Then there is also Blazar, a reservation service that offers OpenStack users something akin to AWS Reserved Instances. In a private cloud, the use case for such a feature is a bit different. But as some private clouds got bigger, some users found they needed to be able to guarantee resources for their regular overnight batch jobs or data analytics workloads, for example.

As far as resource management goes, it’s also worth highlighting Sahara, which now makes it easier to provision Hadoop clusters on OpenStack.

In previous releases, one of the focus areas for the project was to improve the update experience. OpenStack is obviously a very complex system, so bringing it up to the latest version is also a bit of a complex undertaking. These improvements are now paying off. “Nobody even knows we are running Stein right now,” Vexxhost CEO Mohammed Naser, who made an early bet on OpenStack for his service, told me. “And I think that’s a good thing. You want to be least impactful, especially when you’re in such a core infrastructure level. […] That’s something the projects are starting to become more and more aware of but it’s also part of the OpenStack software in general becoming much more stable.”

As usual, this release launched only a few weeks before the OpenStack Foundation hosts its twice-yearly Summit in Denver. Since the OpenStack Foundation has expanded its scope beyond the OpenStack project, though, this event also focuses on a broader range of topics around open-source infrastructure. It’ll be interesting to see how this will change the dynamics at the event.

News Source = techcrunch.com

This little translator gadget could be a traveling reporter’s best friend


If you’re lucky enough to travel abroad, you know it’s getting easier and easier to use our phones and other gadgets to translate for us. So why not do so in a way that makes sense to you? This little gadget seeking funds on Kickstarter looks right up my alley, offering quick transcription and recording — plus music playback, like an iPod Shuffle with superpowers.

The ONE Mini is really not that complex of a device — a couple microphones and a wireless board in tasteful packaging — but that combination allows for a lot of useful stuff to happen both offline and with its companion app.

You activate the device, and it starts recording and both translating and transcribing the audio via a cloud service as it goes (or later, if you choose). That right there is already super useful for a reporter like me — although you can always put your phone on the table during an interview, this is more discreet and of course a short-turnaround translation is useful as well.

Recordings are kept on the phone (no on-board memory, alas) and there’s an option for a cloud service, but that probably won’t be necessary considering the compact size of these audio files. If you’re paranoid about security this probably isn’t your jam, but for everyday stuff it should be just fine.

If you want to translate a conversation with someone whose language you don’t speak, you pick two of the 12 built-in languages in the app and then either pass the gadget back and forth or let it sit between you while you talk. The transcript will show on the phone and the ONE Mini can bleat out the translation in its little robotic voice.

Right now translation only works online, but I asked, and offline support is in the plans for certain language pairs that have reliable two-way edge models, probably Mandarin-English and Korean-Japanese.

It has a headphone jack, too, which lets it act as a wireless playback device for the recordings or for your music, or to take calls using the nice onboard mics. It’s lightweight and has a little clip, so it’s probably better than connecting directly to your phone in many cases.

There’s also a 24/7 interpreter line, at two bucks a minute, that I probably wouldn’t use. I think I would feel weird about it. But in an emergency it could be pretty helpful to have a panic button that sends you directly to a person who speaks both of the languages you’ve selected.

I have to say, normally I wouldn’t highlight a random crowdfunded gadget, but I happen to have met the creator of this one, Wells Tu, at one of our events and trust him and his team to actually deliver. The previous product he worked on was a pair of translating wireless earbuds that worked surprisingly well, so this isn’t their first time shipping a product in this category — that makes a lot of difference for a hardware startup.

He pointed out in an email to me that obviously wireless headphones are hot right now, but the translation functions aren’t good and battery life is short. This adds a lot of utility in a small package.

Right now you can score a ONE Mini for $79, which seems reasonable to me. They’ve already passed their goal and are planning on shipping in June, so it shouldn’t be a long wait.

News Source = techcrunch.com

The first research book written by an AI could lead to on-demand papers


The amount of research that gets published is more than any scholar can hope to keep up with, but soon they may rely on an AI companion to read thousands of articles and distill a summary from them — which is exactly what this team at Goethe University did. You can read the first published work by “Beta Writer” here… though unless you really like lithium-ion battery chemistry, you might find it a little dry.

The paper itself is called, in creative fashion, “Lithium-Ion Batteries: A Machine-Generated Summary of Current Research.” And it is exactly what it sounds like, some 250 pages of this:

The pore structure and thickness of the separator should be carefully controlled, as a satisfactory balance between mechanical strength and ionic electrical conductivity should be kept (Arora and Zhang [40]; Lee and others [33]; Zhang [50]) in order to satisfy these two functions [5]. The pore structure and porosity of the material are clearly quite crucial to the performance of the separator in a battery in addition to the separator material [5].

But as interesting as battery research is, it is only tangential to the actual purpose of this project. The creators of the AI, in an extensive and interesting preface to the book, explain that their intent is more to start a discussion of machine-generated scientific literature, from authorship questions to technical and ethical ones.

In other words, they aim to produce questions, not answers. And questions they have in abundance:

Who is the originator of machine-generated content? Can developers of the algorithms be seen as authors? Or is it the person who starts with the initial input (such as “Lithium-Ion Batteries” as a term) and tunes the various parameters? Is there a designated originator at all? Who decides what a machine is supposed to generate in the first place? Who is accountable for machine-generated content from an ethical point of view?

Having had robust debate already among themselves, their peers, and the experts with whom they collaborated to produce the book, the researchers are clear that this is only a beginning. But as Henning Schoenenberger writes in the preface, we have to begin somewhere, and this is as good a place as any.

Truly, we have succeeded in developing a first prototype which also shows that there is still a long way to go: the extractive summarization of large text corpora is still imperfect, and paraphrased texts, syntax and phrase association still seem clunky at times. However, we clearly decided not to manually polish or copy-edit any of the texts due to the fact that we want to highlight the current status and remaining boundaries of machine-generated content.

The book itself is, as they say, imperfect and clunky. But natural-sounding language is only one of the tasks the AI attempted, and it would be wrong to let it distract from the overall success.

This AI sorted through 1,086 papers on this highly technical topic, analyzing them to find keywords, references, takeaways, “pronominal anaphora,” and so on. The papers were then clustered and organized according to their findings in order to be presented in a logical, chapter-based way.
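The clustering step can be approximated in a few lines with off-the-shelf tools. Here is a rough sketch using scikit-learn’s TF-IDF vectorizer and k-means, with invented abstracts standing in for the corpus; Beta Writer’s actual pipeline is more elaborate, but the principle is the same:

```python
# Rough sketch: group paper abstracts by TF-IDF similarity so each
# cluster can seed a chapter. The abstracts below are invented.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

abstracts = [
    "Separator porosity controls ionic conductivity in Li-ion cells.",
    "Mechanical strength of polymer separators under thermal stress.",
    "Silicon anodes offer high capacity but suffer volume expansion.",
    "Nanostructured silicon improves anode cycling stability.",
]

tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(abstracts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
for chapter in range(2):
    members = [a for a, c in zip(abstracts, kmeans.labels_) if c == chapter]
    print(f"Chapter {chapter + 1}:", members)
```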

Representative sentences and summaries had to be pulled from the papers and then reformulated for the review, both for copyright reasons and because the syntax of the originals may not work in the new context. (Experts the team talked to said they should stay as close to the meaning of the original as possible, avoiding “creative” interpretations.)

Imagine that the best sentence from a paper starts with “Therefore, it produces a 24 percent higher insulation coefficient, as suggested by our 2014 paper.”

The AI must understand the paper well enough that it knows what “it” is, and in recasting the sentence, replace “it” with that item, and know that it can do away with “therefore” and the side note at the end.
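A toy version of that recasting step might look like the following Python sketch, where the antecedent of “it” is supplied by hand; real systems need full coreference resolution to find it automatically:

```python
import re

def recast(sentence, antecedent):
    """Toy reformulation: drop a leading connective, resolve a known
    pronoun, and strip a trailing self-citation. The antecedent is
    supplied manually here; real systems must infer it."""
    s = re.sub(r"^(Therefore|Thus|Hence),\s*", "", sentence)
    s = re.sub(r"^(It|it)\b", antecedent, s, count=1)
    s = re.sub(r",\s*as suggested by our \d{4} paper", "", s)
    return s

src = ("Therefore, it produces a 24 percent higher insulation "
       "coefficient, as suggested by our 2014 paper.")
print(recast(src, "The composite separator"))
# -> The composite separator produces a 24 percent higher insulation coefficient.
```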

This has to be done thousands of times, and many edge cases pop up that the model doesn’t handle right, producing some of that admittedly clunky diction. For instance: “That sort of research’s principal aim is to attain the materials with superior properties such as high capacity, fast Li-ion diffusion rate, easy to operate, and stable structure.” Henry James it isn’t, but the meaning is clear.

Ultimately the book is readable and conceivably useful, having boiled down probably ten thousand pages of research to a much more palatable 250. But as the researchers say, the promise is much greater.

The goal here, which doesn’t seem far-fetched at all, is to be able to tell a service “give me a 50-page summary of the last 4 years of bioengineering.” A few minutes later, boom, there it is. The flexibility of text means you could also request it in Spanish or Korean. Parameterization means you could easily tweak the output, emphasizing regions and authors or excluding keywords or irrelevant topics.

These and a boatload of other conveniences are inherent to such a platform, assuming you don’t mind a rather stilted voice.

If you’re at all interested in scientific publishing or natural language processing, the preface by the authors is well worth a read.

News Source = techcrunch.com

Accenture announces intent to buy French cloud consulting firm


As Google Cloud Next opened today in San Francisco, Accenture announced its intent to acquire Cirruseo, a French cloud consulting firm that specializes in Google Cloud intelligence services. The companies did not share the terms of the deal.

Accenture says that Cirruseo’s strength and deep experience in Google’s cloud-based artificial intelligence solutions should help as Accenture expands its own AI practice. Google TensorFlow and other intelligence solutions are a popular approach to AI and machine learning, and the purchase should help give Accenture a leg up in this area, especially in the French market.

“The addition of Cirruseo would be a significant step forward in our growth strategy in France, bringing a strong team of Google Cloud specialists to Accenture,” Olivier Girard, Accenture’s geographic unit managing director for France and Benelux, said in a statement.

With the acquisition, should it pass French regulatory muster, the company would add a team of 100 specialists trained in Google Cloud and G Suite to an existing team of 2,600 Google specialists worldwide.

The company sees this as a way to enhance its artificial intelligence and machine learning expertise in general, while giving it a much stronger market position in France in particular and the EU more broadly.

As the company stated, there are some hurdles before the deal becomes official. “The acquisition requires prior consultation with the relevant works councils and would be subject to customary closing conditions,” Accenture indicated in a statement. Should all that come to pass, Cirruseo will become part of Accenture.

News Source = techcrunch.com
