
AWS Sofia 2018

AWS Sofia 2018 is the first Amazon Web Services conference organised in Bulgaria by TechHuddle. It was held on 19 April 2018 at the Inter Expo Centre in Sofia. Richard Yeo, CEO of TechHuddle, opened the conference and presented the sponsors: Holiday Extras, Tide, and Bede Gaming. After that, he introduced the keynote speaker, Julien Simon, a Principal Evangelist at Amazon Web Services.

Julien’s first talk was devoted to serverless applications on AWS. He started with a quote from Werner Vogels, CTO at Amazon: “The easiest server to manage is when you do not have a server to manage.” Though you still need servers to run your code, with AWS they are completely abstracted away. People can build complex architectures without ever starting a virtual server. This is what serverless really is: building complete applications and architectures without ever having to worry about starting, stopping or scaling a server. Julien’s definition of serverless combines AWS Lambda, a compute service, with data, streaming and back-end services like Amazon S3 and DynamoDB that are themselves fully managed.

He gave an overview of AWS Lambda, the core of serverless architecture. Only a few years old, Lambda has been massively adopted; there are even companies that are fully serverless. Its purpose is to let developers deploy simple functions written in Java, Python, Node.js, C# and Go. Scalability and high availability come built in with Lambda; it scales automatically, and it is integrated with other Amazon Web Services. Like anything else on AWS, you pay as you go, which means that you pay only for the compute time you consume; there is no charge when your code is not running. Julien pointed out that with a serverless architecture you can automate and build almost anything, including event-driven applications and APIs. He showed how to build a serverless data pipeline as well as a web application and get them running without a server, demonstrating how developers can write a serverless pipeline with only 16 lines of code and scale it without limits.
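
To make this concrete, here is a minimal sketch of a Lambda handler in Python. It assumes an S3 “object created” notification as the trigger; the function and names are illustrative, not the 16-line pipeline from the demo.

```python
import json

# Minimal AWS Lambda handler in Python. Lambda calls this function
# with the triggering event; here we assume an S3 "object created"
# notification, whose records carry the bucket and object key.
def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps("processed")}
```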

Julien also introduced several development tools such as the Serverless Framework, Chalice, the Eclipse plug-in, Serverless Express and Serverless Java, and gave live demos of some of them. He presented the AWS Serverless Application Model (SAM), a CloudFormation extension that bundles Lambda functions, APIs and events. At the end of his talk, he gave examples of what is usually built with serverless architecture.
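
For a taste of those tools, here is a minimal Chalice app; the app name and payload are placeholders. Running `chalice deploy` creates the Lambda function and the API Gateway endpoint for you.

```python
from chalice import Chalice

# A minimal Chalice application: one route backed by one Lambda
# function, deployed with `chalice deploy`.
app = Chalice(app_name="hello-serverless")

@app.route("/")
def index():
    return {"hello": "serverless"}
```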

Another popular way to deploy code is containers. In the second part of his presentation, Julien provided an overview of containers on AWS and the different services around them: Amazon Elastic Container Service (ECS), Amazon Elastic Container Service for Kubernetes (EKS), and AWS Fargate. His answer to the question of what people were building with containers on AWS was “everything”. It is a technology that transcends all boundaries: from small companies and startups to large enterprises, everyone can use containers.

Docker, for example, has become popular very quickly: it is an open platform for developers and system administrators to build, ship, and run distributed applications on laptops, data-centre virtual machines or the cloud. To further facilitate deployment and increase scalability, Amazon released Elastic Container Service (ECS), a highly scalable, high-performance container management service that lets developers focus on building applications rather than infrastructure. ECS eliminates the need to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers onto them. It provides cluster management, container orchestration, auto scaling, and deep AWS integration. Amazon Elastic Container Registry (ECR) is a Docker registry hosted on AWS. Integrated with ECS, it simplifies the development-to-production workflow and makes it easy for developers to store, manage and deploy Docker container images.
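
To show what working with ECS looks like from code, here is a sketch that registers a simple task definition with boto3; the family name, image and sizing values are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition describing one container. The family
# name, image and resource limits here are placeholders.
response = ecs.register_task_definition(
    family="web-demo",
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:latest",
            "memory": 256,
            "cpu": 128,
            "portMappings": [{"containerPort": 80}],
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```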

ECS is not the only way to run containers in production. There are more customers running Kubernetes on AWS than anywhere else. Amazon Elastic Container Service for Kubernetes (EKS) is a managed service for running Kubernetes, the open-source system for automating the deployment, scaling, and management of containerized applications, on AWS without the need to install and operate your own Kubernetes clusters. It gives enterprises a platform for production-grade Kubernetes installations. At the time of the conference it was still in preview, but anybody could join.

For those customers who want to run containers but do not want to manage clusters, Amazon has built Fargate, an underlying technology for container management that amounts to serverless container orchestration. With this service, there is no cluster or infrastructure to manage or scale; everything is handled at the container level and scales seamlessly on demand. You simply deploy tasks (on ECS) or pods (on Kubernetes). Simpler, faster and more efficient, Fargate is available for ECS, with EKS support coming this year.
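
A sketch of launching a task on Fargate with boto3, assuming a task definition registered as Fargate-compatible (awsvpc network mode, "FARGATE" in requiresCompatibilities); the cluster, subnet and security group IDs are placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate: no EC2 instances to provision or manage.
# Assumes the "web-demo" task definition was registered as
# Fargate-compatible (awsvpc network mode, task-level cpu/memory).
ecs.run_task(
    cluster="default",
    launchType="FARGATE",
    taskDefinition="web-demo",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],     # placeholder
            "securityGroups": ["sg-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
```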

In conclusion, there are a lot of options for running containers on AWS. If you try them out, you will know which one works best for you.

In the last part of his presentation, Julien focused on Amazon artificial intelligence and machine learning for developers. He gave a short overview of Big Data on AWS and introduced the Amazon AI services.

There are many areas where Amazon applies artificial intelligence (AI) and machine learning (ML): fully autonomous robots that use machine learning for route planning and collision avoidance, or the Amazon Echo family of devices, which connects to cloud services running on AWS for natural language processing, speech recognition, text-to-speech, etc. Another example is drone deliveries, which are still being tested. Computer vision, deep learning algorithms and sensor fusion are fully utilised in Amazon Go, the first cashier-less store in Seattle, which has already opened to the public.

Amazon’s goal is to put AI and ML in the hands of every developer and data scientist to enable them to build amazing products. That is why they provide a stack of ML services in three layers:

  • Application services - API-driven services like vision and language services and conversational chatbots. Users can do image and video detection, speech-to-text and text-to-speech, translation, and all of those tasks are backed by machine learning. However, developers do not need to know anything about ML; they can just call a simple API to get the job done.
  • Platform services - allow developers to build, train and deploy ML models with machine learning algorithms. They provide more control for training on your own data or building your own models.
  • Frameworks and infrastructure - can be utilised to develop sophisticated models and create managed, auto-scaling clusters of GPUs. Users who need infrastructure to train their own models need CPU and GPU instances. These services are used by customers who want to run everything themselves. The ML stack can meet any customer need; as with containers, more AI and ML apps are built on AWS than anywhere else, by small and large companies in every possible vertical.

An example of the application services on AWS is Amazon Rekognition, a deep learning-based image analysis service. It provides object and scene detection, facial analysis and search, face comparison, celebrity recognition, image moderation, and text-in-image detection. It finds application in many areas of life, including police investigations and law enforcement. Amazon has extended recognition to video as well: Amazon Rekognition Video uses the same deep learning technologies for video analysis.
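
For a flavour of the API, here is a sketch of object and scene detection with boto3; the bucket and image name are placeholders.

```python
import boto3

rekognition = boto3.client("rekognition")

# Detect up to ten labels (objects and scenes) in an image stored
# in S3, keeping only reasonably confident matches.
response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-photos", "Name": "holiday.jpg"}},
    MaxLabels=10,
    MinConfidence=80,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))
```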

Another useful service is Amazon Polly, which turns text into lifelike speech and allows programmers to create applications that talk and to build speech-enabled products. It is fully customisable: you can change the pitch, the speed and the intonation of the speech as well as number and date formats.
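
A minimal sketch of Polly in action with boto3; the text, voice and output file are placeholders.

```python
import boto3

polly = boto3.client("polly")

# Synthesize a short phrase to an MP3 file using one of Polly's
# built-in voices.
response = polly.synthesize_speech(
    Text="Welcome to AWS Sofia 2018!",
    OutputFormat="mp3",
    VoiceId="Joanna",
)
with open("welcome.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```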

Amazon Translate is a neural machine translation service that delivers fast, high-quality language translation. Recently launched, it supports 12 language pairs, with 6 more to come soon.

Amazon also offers an automatic speech recognition service called Amazon Transcribe that makes it easy for developers to add speech-to-text capability to their applications. It can recognise multiple speakers, and the output text includes punctuation, formatting, and timestamps. It supports both high- and low-quality audio, which is particularly useful when companies want to transcribe customer and support phone calls into text for analysis.

Amazon Comprehend is also a text service, used to find insights and relationships in text. It can identify languages even in short documents like tweets, extract key phrases, places, people, brands or events, and understand how positive or negative a text is. In addition, it automatically organises a collection of text files by topic. It has numerous applications: customer and social media analysis, intelligent document search, content personalisation, etc. The topic modelling the service provides is frequently used to examine a collection of documents and determine common themes.
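
As an example of the text services, here is a sketch of sentiment and key-phrase detection with boto3; the sample sentence is made up.

```python
import boto3

comprehend = boto3.client("comprehend")

text = "The conference was a huge success and the talks were excellent."

# How positive or negative is the text?
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")
print(sentiment["Sentiment"])  # e.g. POSITIVE

# Which phrases carry the meaning?
phrases = comprehend.detect_key_phrases(Text=text, LanguageCode="en")
print([p["Text"] for p in phrases["KeyPhrases"]])
```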

The last application service presented by Julien was Amazon Lex, which lets developers build conversational chatbots with voice and text interfaces. A Lex chatbot is two things: a conversation and a Lambda function. The conversation tries to extract pieces of information from the user, called slots, which are filled by asking the user questions. For example, if the user wants to book a hotel, the questions refer to time, location, etc. Once the slots are filled, the bot calls a Lambda function, and then the back end or an API does the booking.
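
A sketch of one conversation turn against a Lex bot through the runtime API; the bot name, alias and user ID are placeholders for a bot you would have defined in the Lex console.

```python
import boto3

lex = boto3.client("lex-runtime")

# Send one utterance to the bot. Lex extracts whatever slots it can
# and replies with a prompt for the next missing one.
response = lex.post_text(
    botName="BookHotel",     # placeholder bot
    botAlias="prod",         # placeholder alias
    userId="demo-user-42",   # placeholder conversation id
    inputText="I want to book a hotel in Sofia",
)
print(response["message"])   # e.g. the next question to ask the user
print(response["slots"])     # slots filled so far
```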

SageMaker, the new kid on the block, is a fully managed platform service that enables developers and data scientists to quickly and easily build, train, and deploy machine learning models at any scale. It solves the scaling problems that come with huge data sets, which need big training or prediction clusters, or with large teams, where each developer needs their own machine learning environment for training, prediction, etc. With SageMaker you can go from experimentation to deployment without managing a single server; even if you train on huge infrastructure, SageMaker builds it for you.

SageMaker consists of a number of modules. The first is notebook instances: EC2 instances pre-installed with machine learning and deep learning tools and Jupyter notebooks, which you can fire up in minutes. If you do not need them because you have your own environments, you can use the SageMaker SDK instead. For training, you can choose from a collection of built-in ML algorithms such as linear regression, clustering, classification, principal component analysis and factorisation machines, as well as more complex algorithms for time series and natural language processing. You can use those algorithms directly, without writing a line of machine learning code. For those who want to use their favourite libraries, Amazon provides pre-installed environments for scikit-learn, MXNet, TensorFlow, PyTorch, etc. The same goes for deployment: with one API call you can deploy to a fleet of managed web servers that start serving predictions through an HTTP endpoint, and you can then scale that infrastructure.
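
A sketch of that workflow with the SageMaker Python SDK as it looked around 2018, using the built-in K-Means algorithm on toy data; the IAM role, bucket and instance types are placeholders, and parameter names may differ in later SDK versions.

```python
import numpy as np
from sagemaker import KMeans

role = "arn:aws:iam::123456789012:role/SageMakerRole"  # placeholder

# Configure a built-in algorithm: no ML code to write.
kmeans = KMeans(
    role=role,
    train_instance_count=1,
    train_instance_type="ml.c5.xlarge",
    output_path="s3://my-bucket/kmeans-output",  # placeholder bucket
    k=10,
)

# Train on a toy data set; SageMaker spins the cluster up and down.
data = np.random.rand(1000, 50).astype("float32")
kmeans.fit(kmeans.record_set(data))

# One call deploys the model behind a managed HTTP endpoint.
predictor = kmeans.deploy(initial_instance_count=1,
                          instance_type="ml.m5.large")
print(predictor.predict(data[:5]))
```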

SageMaker solves the infrastructure issues for you and lets you focus on machine learning. Other benefits of SageMaker include load balancing, auto-scaling, and high availability. A major advantage of the service is its pricing: building, training and hosting are billed by the second, with no minimum fees or upfront commitments.

Amazon has also built the Amazon ML Solutions Lab, which is actually a team that helps developers with their ML projects. They are not going to build your project for you, but they will advise you and help you get started.

The last thing Julien talked about at the conference was frameworks and infrastructure. Developers who want to run everything themselves can select C5 instances, the most powerful CPU instances with the latest generation of Intel chips. If you need more power, you can use P3 instances, the latest GPU instances. You can also fire up EC2 instances and install your favourite libraries, though Julien recommends the Deep Learning AMI, pre-built and pre-packaged with the NVIDIA drivers for the GPUs and all the major libraries, which gives ML practitioners and researchers the infrastructure and tools to accelerate deep learning in the cloud at any scale. The AMI itself is free to use, you pay only for the instances, and it saves users an enormous amount of time.
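
A sketch of firing up such an instance with boto3; the AMI ID is region-specific and shown here as a placeholder, as is the key pair.

```python
import boto3

ec2 = boto3.client("ec2")

# Launch one P3 GPU instance from the Deep Learning AMI. Look up the
# current AMI ID for your region; the one below is a placeholder.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder Deep Learning AMI
    InstanceType="p3.2xlarge",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
)
```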

As part of their strategy to put deep learning in the hands of developers, Amazon offers DeepLens, a fully programmable video camera with tutorials, code and pre-trained models, designed to help developers learn about deep learning or expand their skills.

In summary, the Amazon AI and ML services are very easy to use (it often takes just one API call) and pretty smart, since most of them are based on deep learning, yet you do not need to know anything about deep learning to use them. And if you do need more control, you can move down the stack and choose exactly the degree of control you need.

The last speaker to take the floor was Simon Wood from Holiday Extras, who presented an AWS use case. Holiday Extras is a travel technology company based in the UK. They focus on building great end-to-end user experiences and try to enhance everybody’s trip, which is why it is so important for them to understand customers and their travel needs. Early adopters of cloud technology, they have been on AWS for 7 years. They moved a very large, monolithic PHP application into the cloud because they wanted the agility of auto-scaling: demand in the travel industry peaks and drops all the time, so the cloud model suits them very well. It also gives them the flexibility to optimise their pricing.

They use Amazon Web Services like EC2, RDS, and S3. They use Lambda@Edge because it lets them customise the content that CloudFront delivers: in CloudFront you can host your firewalls and serve HTML files or JavaScript. Holiday Extras have even used Lambda@Edge for split testing of the UI across a large number of web pages. They also run their own analytics on AWS with Kinesis, which allows them to collect and analyse real-time streaming data.
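
As a sketch of the kind of pipeline they described, here is how a clickstream event might be pushed into a Kinesis stream with boto3; the stream name and event fields are placeholders.

```python
import json
import boto3

kinesis = boto3.client("kinesis")

# A made-up clickstream event; real producers would emit these from
# the web tier.
event = {"page": "/checkout", "userId": "u-123", "ts": 1524130000}

kinesis.put_record(
    StreamName="clickstream",               # placeholder stream
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["userId"],           # keeps a user's events ordered
)
```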

The second area where they have been leveraging AWS is microservices. When they moved beyond the monolithic PHP application to Node.js, they had JavaScript applications with thousands of lines of code and a tech team of 120 people. Over the last year and a half, they have been breaking those applications down. Microservices are a great way to execute fast when you have many people working on multiple code bases at the same time.

With Kubernetes and their own continuous integration pipeline, Holiday Extras are able to get from idea to production in under five minutes. They have written a container layer, a command-line tool that sits on top of Kubernetes, and a number of other tools, and they have built this tooling and infrastructure to allow developers to move extremely fast. Amazon even wrote a white paper on Holiday Extras’ use of their services.

All the presentations were full of demos and live examples, and the speakers were happy to answer participants’ questions at the end of each talk. The conference finished with a tombola draw, in which two of the attendees won gift cards.

The conference was a huge success. We received many positive comments and thank-you notes. Most of the attendees rated the organisation and presentations as excellent. Over 90% of the participants stated that they would attend another event organised by TechHuddle. Therefore, we are planning another AWS conference next year.

If you do not want to miss another event organised by TechHuddle, sign up to receive regular updates or check TechHuddle Academy for information on courses and free lectures.

Find more pictures from the event, watch the presentations or download the slides from the links below:

Follow the hashtag #AWSSofia2018 to join the conversation on social media.