DevFests are technology conferences supported by Google Developer Groups around the globe, focused on community building and on learning about Google’s technologies. There are 838 groups in 120 countries with 500,000 members. So far, over 500 events with 5,000 sessions and more than 20,000 attendees have been organised.
Each DevFest is inspired by and uniquely tailored to the needs of the developer community and region that hosts it. While no two DevFests are alike, each is powered by a shared belief that when developers come together to exchange ideas, amazing things can happen.
TechHuddle organised the first Google DevFest in Bulgaria with the support of GDG Sofia. The conference took place on 24 November 2018 in the National Palace of Culture. Developer experts, technology professionals and community leaders from all over the world came to DevFest Sofia 2018 to share their experience.
DevFest Sofia 2018 was a whole day event with 2 tracks (including hands-on workshops), 12 prominent speakers (two Googlers and two Google Developer Experts) and over 300 attendees.
The conference was opened by Boris Strandjev from GDG Sofia, who presented the organisers, sponsors and supporters and gave the floor to the first speaker. Here are some highlights from the conference.
Cloud Native: Effective computing at scale
Ignacy Kowalczyk, Software Engineering Manager at Google, talked about containers as a way of packaging software with all its dependencies, so that developers can include the libraries they need in the package and run applications as isolated processes that do not steal computing power from one another. The container platform Docker enables developers to cost-effectively build and manage their application portfolio. Ignacy also pointed out the advantages of containers as compared to virtual machines (VMs):
- Less resource (RAM, CPU, etc.) intensive than running VMs
- Provide versioning
- Fast deployment
Note that virtual machines have to boot up, whereas containers start much faster. This is crucial when you have to react quickly to a changing environment and upgrade or scale an application fast.
Containers are the fundamental building blocks of cloud native applications. Basically, everything at Google runs in containers; Google execute approximately 4 billion containers per week. While Docker is used to build and run containers, if you need an environment to manage execution, updates and replicas, and to monitor and secure containers, you should use a container orchestration system. Containers and container orchestration tools like Google’s Borg or Kubernetes ensure simplicity and agility of deployments. DevOps and how you deploy your cloud native applications are important, but you should also pay special attention to architecture and design. You should design your cloud native applications so that they can be decomposed into microservices. In addition to speed, another benefit is a smaller blast radius: if something crashes, only one replica of a service crashes, rather than a large binary that handles more tasks. Efficient scaling is also a significant advantage of microservices. Design your software to have stateless processes, as they allow easier scaling and handling of higher traffic.
The cloud native approach allows rolling out 50 different services per day instead of one big monolithic release every month. To scale without growing your ops team, you should use Kubernetes, whose master schedules your containers across a large pool of physical machines. It is an open-source system for automating deployment, scaling, and management of containerised applications, developed by Google.
Cloud Native is not only about how to write software but also how to develop processes in your company and build applications in a very agile and efficient way and design them to support high availability and scalability.
Google Maps Platform
The next talk, delivered by Selen Basol, Tech Lead Manager at Google, was on Google Maps. She pointed out that the data produced over the last two years amounts to 90% of all the data we have ever created, and the majority either has or will have a location component, including data from IoT devices. At present, there are 23 billion IoT devices, and the data they provide include location and time stamps.
In 2005, Google Maps was launched as a web mapping application with the mission to help users explore and navigate the real world. Since then, Google have increased the number of API offerings from 1 to 18. They are used in various industries like logistics, transportation, courier services, game development (Pokémon Go is a good example), and many more. Maps’ APIs and products support 74 different languages.
Google use high-resolution satellite imagery to cover 99% of the Earth’s surface, 40 million miles of roads, and 98% of the world’s population. Visual recognition and computer vision algorithms help them keep the landscape data updated without the need to physically photograph the surface and refresh it manually. It is not enough to have the data; you also have to keep them fresh, accurate and available, and for this Google rely on machine learning and visual recognition. For example, manually annotating the 80 billion images from Street View would have been impossible. That is why they have created a geocode for each address: one of the teams takes all the imagery and applies machine learning and visual recognition to extract street numbers, match them with the car position and get the geocode right. This helps launch more countries faster. In the past, such a country launch took about a year; now multiple countries can be launched within six months.
People are also an important part of the success of the map platform. Over 1 billion people actively use Google Maps products, 60 million users are part of local communities contributing different pieces of location information, and millions of developers give feedback on a regular basis. Furthermore, 2 billion Android users anonymously provide information about traffic. These data are combined with reports from thousands of authoritative sources ranging from private and public map providers to satellite and Street View imagery. All these data drive quality and accuracy.
The Google Maps platform is also beneficial to developers and businesses. Google Maps covers 150 million places of business, and 700,000 are added each month. The cycling application Strava, for example, uses maps heavily. When they replaced their maps with a competitor’s product, there was a backlash: their users complained, and one of the main complaints was about Street View being removed. Strava returned to Google Maps and apologised to their users.
Google have three different types of map products (maps, routes and places) that help people and businesses create location-enabled experiences or get insights out of location. Maps involve customised maps and Street View imagery. Vector and raster maps enable rendering across different devices. Many customisation options are available, from changing the map theme and colours to feeding in your own data or overriding and customising interactions like pinch-to-zoom.
Visual improvements are an integral part of the continuous development of the platform, whether they concern colours and typography or the placement of different labels. Language support has also been constantly expanding: in March 2018, thirty-seven new languages spoken by 1.5 million users were added.
Finally, there is 360° Street View imagery in the Maps API. One lesser-known option is Street View Trusted, which allows businesses to upload 360° interior tours of their places of business.
There are three more useful APIs on the Google Maps platform from which developers can benefit: the Time Zone API adds time-zone mapping, the Geocoding API converts addresses to coordinates and vice versa, and the Geolocation API helps build real-time, real-world experiences, i.e. locate a certain thing or device.
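As a small illustration of how the Geocoding API is typically called over HTTP, the sketch below builds a request URL with Python’s standard library. The endpoint is the public Geocoding web service; the address and API key values are placeholders.

```python
from urllib.parse import urlencode

GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def build_geocode_url(address: str, api_key: str) -> str:
    """Build a Geocoding API request URL for a street address."""
    return f"{GEOCODE_ENDPOINT}?{urlencode({'address': address, 'key': api_key})}"

url = build_geocode_url("1600 Amphitheatre Parkway, Mountain View, CA", "YOUR_API_KEY")
print(url)
```

Fetching this URL (with a valid key) returns a JSON document whose `results` contain the coordinates; the reverse direction works the same way with a `latlng` parameter instead of `address`.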
Google are excited about future possibilities and keep looking for different ways to improve experiences. They try to make augmented reality technologies easily accessible to every developer. Many of these APIs are available in Google’s repository on GitHub with examples helping developers get up and running within minutes.
Building Assistive Apps with App Actions
Elaine Dias Batista, Google Assistant Developer Expert, illustrated how to improve your app and build assistive apps with App Actions. She pointed out that the most common assistive apps are voice applications like Google Assistant and Alexa. She emphasised that we should make a clear distinction between Google Assistant and Android: they are completely different platforms. Google Assistant is NOT Android, though it is present on the Android platform. The developer platform of Google Assistant is called Actions on Google (the analogue of the SDK on Android), a Google Assistant app is called an Action, and the equivalent of the Google Play Store is the Assistant App Directory.
The Play Store is clogged with apps. According to TechCrunch, nearly 1 in 4 people abandon mobile apps after using them only once, and 77% of users stop using an app within 72 hours of installing it. Newcomers struggle to get discovered, and even if you have gained users after spending a lot of time developing a great app and a lot of resources marketing it, your users may forget about your app, and it is hard to communicate new features to them. Despite the disheartening statistics, more and more apps are being built. In 2017, the iOS App Store removed hundreds of thousands of problematic or abandoned apps. Nevertheless, apps are not dead. There are a lot of features on both app platforms helping developers build great applications: alpha and beta testing, instant apps, push notifications to engage users. Developers can also take advantage of App Actions.
In 2017, Google introduced the Predictive App Row, showing users a row of five apps they are most likely to use next, with 60% accuracy. A year later, they took a step further and launched App Actions, a new Android feature announced at Google I/O 2018, the annual developer conference held by Google in Mountain View, California. Instead of predicting which app people are going to use next, App Actions try to predict the action users are going to take at a specific time, depending on the context, by analysing usage patterns with machine learning algorithms that run locally on phones. Such actions can be, for example, continuing to listen to a music app when you plug in your headphones, resuming a book in a reading app, or watching a video on YouTube. If your app corresponds to the intent, it may be suggested in several places: the Android Launcher, Google Assistant on Android, the Play Store, Smart Text Selection, and the Google Search app.
To integrate App Actions into an app, developers can use built-in intents. A built-in intent is a unique identifier that you can specify to tell Google Assistant that your Action is suitable to fulfil a specific category of user requests, for example, playing a game, getting news, or finding a recipe. Google have a rich catalogue of built-in intents, which greatly helps developers by providing the phrases people would use to initiate an action.
App Actions offer two models: content-driven (content centric) and URL template (action centric). For example, a taxi app has a deep-link API with origin and destination parameters, whereas a cooking app has a website with schema.org/Recipe markup.
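The URL-template (action-centric) model can be sketched in a few lines: the Assistant fills placeholders in a deep-link template with parameters extracted from the user’s request. The template and app domain below are hypothetical, purely for illustration.

```python
from urllib.parse import quote

# Hypothetical deep-link template in the spirit of the action-centric model:
# placeholders are filled from parameters of the user's spoken intent.
TEMPLATE = "https://example-taxi.app/order?origin={origin}&destination={destination}"

def fill_template(template: str, **params: str) -> str:
    """Fill each placeholder with a URL-escaped parameter value."""
    return template.format(**{k: quote(v) for k, v in params.items()})

link = fill_template(TEMPLATE, origin="NDK, Sofia", destination="Sofia Airport")
print(link)
```

The resulting link opens the app (or its website) directly at the booking step, which is why this model suits transactional apps like the taxi example.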
Mobile DevOps
Boris Strandjev, co-founder of GDG Sofia, talked about mobile DevOps. A comprehensive definition of DevOps has not been agreed on yet; they are sometimes referred to as a concept, methodology, movement, or cultural philosophy. However, everyone agrees that they involve common practices, processes and tools. Boris defines DevOps as everything that is related to software delivery and facilitates software development but is not part of the coding. DevOps benefit projects by increasing efficiency, productivity, quality, and user satisfaction.
There are several phases in the DevOps cycle – planning, development, testing, delivery, feedback, and growth. For each of them, there are different practices or tools that facilitate the process. DevOps contribute to organisational improvement as well.
The question is why mobile DevOps matter. There are 205 billion mobile app downloads every year. People spend over 4 hours per day on mobile devices and use mobile apps for nearly three and a half hours per day. The mobile app market is huge, but you have to optimise your mobile development process to support your own growth.
There are several main characteristics of mobile platforms. They have millions of device configurations and a rich environment. Their interaction methods are specific, and the devices are often offline. There are also specific restrictions that apply to mobile apps: every application is sandboxed, resource-consuming applications are killed by the system, all apps undergo scanning and review, and strict rules apply to the distribution and marketing of apps.
Integrated development environments (IDEs) and plug-ins greatly support mobile development. They help developers contribute or visualise a small bit of an app, or bootstrap a whole component. Moreover, they are feature-rich and targeted at the respective platform. Dependency management is a core feature of both platforms, iOS and Android; it allows developers to reuse code or use open-source code in different solutions. Mobile devices are quite diverse, but emulators and simulators help in mobile development: they simulate the operation of a real device, which accelerates development cycles and the feedback loop. Hot swap is a newer feature that saves time in mobile app development: it allows particular changes made in the IDE to be instantly reflected in the application without a rebuild.
Another important aspect of DevOps is quality. Inspection is embedded in the IDE itself: if you write something that does not abide by the rules, you receive a warning, for example that you should not use a particular constructor. In addition to static analysis, inspection involves dynamic analysis, which suggests improvements during development, thus giving you immediate feedback.
To prevent the publishing of low-quality apps, mobile platforms review your app when you release it. Furthermore, to help you further improve the quality of your app, they offer on-device tools analysing the performance of user interface (UI).
Unlike web development, where you have to set up testing of your application yourself, mobile platforms provide testing tools at the unit and integration levels. Another option developers may take advantage of is recorded UI tests, though these tests do not take into consideration the variety of devices, resolutions, aspect ratios, etc. Monkey testing is a technique in which the application is tested with random inputs while checking the behaviour of the system (or whether it crashes); in mobile app development, this is done automatically.
There are two types of functional testing - instrumented (available only for Android) and black-box. Espresso is a testing framework for Android facilitating UI test writing. Appium, for example, is a tool for black-box testing that supports both iOS and Android. A significant drawback of mobile testing tools, however, is that they cannot fully automate testing since they do not take into account the rich environment.
There are two aspects of mobile app delivery that are worth mentioning: distribution and protection. The distribution of mobile apps does not happen in the blink of an eye; it can take from a couple of hours (on the Android platform) to days (for iOS applications). There are various tools supporting app delivery, like Fabric, which will soon be integrated into Firebase. Developers can also use blue-green deployment, a technique reducing downtime and risk by running two identical production environments. It allows you to release the new version of your app gradually, so that if there is an issue, it will affect only a small portion of your user base. Fastlane allows developers to automate every aspect of the development and release workflow and is likely to become part of Firebase as well.
Security is an important aspect of mobile delivery. On the one hand, developers have to protect their application against copyright infringement; on the other, they have to protect their users’ data. Obfuscation (together with symbolication of crash reports) is used to protect the intellectual property of an app and hinder reverse engineering of proprietary software. Signing apps with keys protects against tampering and impersonation, and Google offers the SafetyNet API, which can be used to check whether your application has been tampered with.
An important question is how you receive feedback on your application. Mobile platforms, of course, count the number of downloads. However, if you need a more granular analysis of how your users interact with your app so you can further improve it, you need an analytics tool like Google Analytics, Firebase or Flurry. Moreover, crash reporting tools have become a standard, and indeed necessary, part of the development cycle. A new service called Firebase Predictions has recently been announced by Google; it uses Firebase data to predict user behaviour in your app, allowing you to interact with your users proactively. There is a module you can add to your app to receive reports on the performance of your application.
After you have released your app and received feedback, you can focus on growth. You can use push notifications to engage your users or introduce new features, and Firebase Predictions to take action, for example reducing churn with new offers. Last but not least, apply A/B testing to tailor your app to your users’ needs.
How We Built a Successful Cloud-Scale Product with GCP
Borislav Pantaleev, a full-stack web developer, explained how a successful cloud-scale product can be built with Google Cloud Platform (GCP). He presented a case study of how the company he works for developed an e-book sharing platform where reading is free and publishers earn money from ads. It all started as an Android app called UB Reader, with 5 million installs. What the developers needed was a scalable API and a website. When they built a monolith application, the question of where to host it arose. The answer was: in the cloud. They chose Google because they had previous experience with their cloud platform.
There were several options for hosting their code. Compute Engine is an infrastructure-as-a-service offering that lets developers create and run virtual machines and allows auto-scaling. However, VMs put a heavy load on developers, because they have to think about the infrastructure, updating and scaling it. Kubernetes did not satisfy all their needs, and Google Cloud Functions was not the best platform to host a monolith application. Ultimately, they decided to use App Engine because it met all their requirements.
Google’s pricing calculator for the products offered on their cloud platform may also weigh in on your decision. App Engine offers two options: standard and flexible. The standard environment is suitable for small websites and experimentation. It scales down to zero, which is cost-effective, and provides really fast deployment. However, it is sandboxed and imposes many restrictions; the major drawback for the company was that it used an older PHP version. Consequently, they chose the flexible environment, where you can run many programming languages (PHP, Python, Node.js, etc.). It resembles a virtual machine running a Docker container; each of the languages has its own container built when you deploy your application.
Deployment is pretty easy: the only thing you need is a .yaml file with four lines of code. After deployment, you can inspect what is happening in the console. There are several options. You can divide your application into different microservices, and if you want to run A/B tests, you can deploy another version of your application; each version can have a number of instances.
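For reference, a minimal App Engine flexible deployment descriptor along the lines the speaker describes might look as follows (an illustrative sketch; the exact fields depend on your runtime and configuration):

```yaml
# app.yaml - illustrative App Engine flexible environment descriptor
runtime: php
env: flex
service: default
```

Running `gcloud app deploy` with this file in the project root is then enough to ship the application.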
Keep in mind that when you deploy your application, App Engine Flex runs two instances by default; you can limit this to one, and the maximum number of instances you can create is twenty. After a successful deployment, you can use the console to SSH into the machine from your browser. You can also see all the Docker containers which are currently running. The first one is your application, which contains your code and Nginx; you get another Nginx which serves as a proxy and allows you to change the configuration. Moreover, you can open a shell inside the Docker container and get access to the machine.
When the company was building their e-book platform, they wanted to give publishers the option to have the first page of a book serve as its cover. They used ImageMagick, an application that can convert the first page of a PDF file into an image, which was available on both standard and flexible App Engine. However, to read PDF files, they needed Ghostscript, an interpreter for the PostScript language and PDF. This was an easy task with App Engine Flex: they just added it to the Docker container.
They chose a database to store their users’ information with the help of Google’s storage options guide, a tool providing information on the features of different storage and database solutions. Finally, they picked Cloud SQL, which offers fully managed PostgreSQL and MySQL, is 99.95% available, vertically scalable, and accessible from anywhere. It also offers several configuration options like assigning a public IP, configuring backups, etc. Backups are made automatically on a daily basis and can be restored with the click of a button.
The company uses Google Cloud Storage to store all the e-books. It offers different options; some of them provide backups and archival storage whereas others are used for web applications. One of the benefits of Cloud Storage is that you can migrate to it without changing a single line of code. For analytical purposes, you can also take advantage of BigQuery. It is a fully managed data warehouse, free for up to 1 TB of data analysed each month and 10 GB of data stored. Data Studio is a tool you can use to quickly build interactive reports and dashboards on the cloud.
Their only concern while building their cloud native product was whether it could scale. This issue was solved by GCP, which offers automatic scaling. Google also perform regular health checks to ensure your machine is up and running properly.
They were satisfied with running their monolith solution in the cloud, with all the useful tools for storage, autoscaling and analytics, and with the customer support provided by Google.
The Case for Experimentation Using Google Optimize
Peter Perlepes, Senior Frontline Engineer and GDG Athens member, gave a talk on a new Google product, Google Optimize, a free website optimisation tool helping online marketers and webmasters increase visitor conversion rates and satisfaction by testing different combinations of website content. When designing a website, you should be aware of what the majority of your visitors like. For this purpose, you can use A/B testing, which compares two versions of a web page to see which one performs better. Most people think that if they change, for example, the colour of the call-to-action button and analyse the results of their A/B tests, they will receive objective statistics.
According to the Harvard Business Review, however, 80 to 90% of A/B tests count as “failures” to executives, i.e. they produce a result that is “not statistically significant”. The founder of Appsumo.com claims that 7 out of 8 of their A/B tests have not driven any significant change.
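Statistical significance for an A/B test on conversion rates is typically assessed with a two-proportion z-test. The following self-contained sketch (plain Python, standard library only) computes a two-sided p-value; the visitor and conversion counts are made up for illustration.

```python
from math import erf, sqrt

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Convert |z| to a two-sided p-value via the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# 1000 visitors per variant; B converts at 12% vs A's 10%
p = two_proportion_z(100, 1000, 120, 1000)
print(f"p-value: {p:.3f}")  # ~0.15, i.e. not significant at the usual 0.05 level
```

A seemingly healthy 2-percentage-point lift still fails the conventional 0.05 threshold at this traffic level, which is exactly why so many tests end up counted as “failures”.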
Peter placed special emphasis on the fact that product quality is not defined by its objective value but rather by how much it exceeds user expectations. There are a lot of definitions of quality, and it can mean different things to different people; in different industries, a product has to satisfy different quality demands. In banking, for example, it is high security; in teaching, the most valued quality is compassion; in media, quality is associated with cohesion and accountability.
Many developers wonder how to achieve high quality in their product. The right way is through experimentation. One option is to apply the Shewhart Cycle: plan, do, check, and act. We have an objective, and we plan what to do. Then we run the experiment and deploy the code. We check the results using tools like Google Analytics to determine whether the change has a positive impact. Finally, we act on the hypothesis we have formed on the basis of our analysis.
According to Peter Perlepes, data is not the fuel, it is rather the ground on which we should build our product. The next layer is the information we derive from transforming and interpreting data. Then we have knowledge, i.e. information that is applied in the current context. Last comes wisdom when the integrated knowledge becomes a judgement.
An absolutely necessary prerequisite for A/B testing is enough traffic. You also need to create a customer-driven culture, define clear business targets and set clear test targets. Last but not least, you should be prepared to be proven wrong.
Keep in mind that you cannot test everything. You can test different offers, delivery methods, marketing campaigns and channels, and even search engine metadata. Ultimately, you should aim to test three things: relevance, value and call-to-action. Determine how closely your change corresponds to user needs and what value it provides, and think about how you communicate the action you want users to take next.
Testing and subsequent changes to apps should be data-driven. Google Optimize offers A/B testing, website testing and personalisation capabilities for businesses. It allows you to test both small changes and major redesigns of webpages, provides robust in-platform reporting, and lets users align website tests with their conversion-rate-optimisation goals.
With Google Optimize you can run three different types of tests: A/B/N, multivariate, and redirect. A/B/N is similar to A/B testing: two or more versions of a web page are compared against each other to determine which one achieves the highest conversion rate. You may test changes in colour, position, size, text or effects like hover and scrolling, or experiences like popups, notifications, etc.
Multivariate testing uses the same core mechanism as A/B testing but compares a higher number of variables and reveals more information about how these variables interact with one another. As in an A/B test, traffic to a page is split between the different versions of the design; however, it requires huge traffic to your page. A redirect test is a type of A/B test that allows you to test separate web pages against each other. Redirect tests are useful when you want to compare two very different landing pages or a complete redesign.
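The traffic requirement follows directly from combinatorics: in a multivariate test, every combination of options becomes a variant, so the number of variants grows multiplicatively. A quick sketch with hypothetical page elements:

```python
from itertools import product

# Hypothetical page elements under test; each combination becomes one variant.
headlines = ["Save now", "Limited offer"]
buttons = ["Buy", "Add to cart", "Checkout"]
images = ["hero_a.png", "hero_b.png"]

variants = list(product(headlines, buttons, images))
print(len(variants))  # 2 * 3 * 2 = 12 variants
```

With 12 variants instead of 2, each variant receives only a sixth of the traffic a plain A/B test would give it, so reaching significance takes proportionally longer.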
Machine Learning and I
The next talk at DevFest Sofia 2018, delivered by Daniel Balchev, was devoted to Machine Learning (ML). All big tech companies like Google, Amazon and Facebook use ML, and more and more venture capital funds are investing in ML startups. Google Assistant is a technology using four machine learning subsystems: trigger word detection, speech recognition, natural language understanding, and text-to-speech. ML can also be used, for example, in spam detection or object detection and annotation.
There are various machine learning models, and Daniel gave an overview of some of them. K-nearest neighbours is a simple algorithm that uses the minimum distance from the query instance to the training samples to determine the nearest neighbours. A decision tree is a classification algorithm that uses tree-like data structures to model decisions and their possible outcomes; the “knowledge” acquired by a decision tree through training is directly formulated into a hierarchical structure.
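A minimal k-nearest-neighbours classifier can be written in a few lines of plain Python; the toy points and labels below are purely illustrative.

```python
from collections import Counter
from math import dist

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points."""
    neighbours = sorted(train, key=lambda item: dist(item[0], query))[:k]
    votes = Counter(label for _, label in neighbours)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: (point, label) pairs
train = [((1.0, 1.0), "red"), ((1.2, 0.8), "red"),
         ((5.0, 5.0), "blue"), ((5.2, 4.9), "blue"), ((4.8, 5.1), "blue")]
print(knn_predict(train, (1.1, 0.9)))  # the nearest points are red
```

There is no training phase at all: the “model” is simply the stored dataset, which is what makes k-NN a popular first example.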
Deep learning (also known as deep structured learning or hierarchical learning) is a part of a broader family of machine learning methods based on learning data representations, as opposed to task-specific algorithms. Given enough data, it is capable of solving complex problems. It allows us to train an AI to predict outputs, given a set of inputs.
Training a model is an integral part of the machine learning process. ML works by finding a relationship between a label and its features; we do this by showing a model a bunch of examples from our dataset, and each example helps define how each feature affects the label. When you are choosing a training algorithm, your choice should depend on the type of model you use. The final stage of the machine learning process is evaluating the performance of the model.
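The loop of showing examples and adjusting parameters can be illustrated with the simplest possible model: a line y = wx + b fitted by gradient descent on a mean-squared-error loss (a toy sketch, not a production training setup):

```python
# Fit y = w*x + b by gradient descent; the true relationship here is y = 2x + 1.
data = [(x, 2 * x + 1) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01  # initial parameters and learning rate

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w, b = w - lr * grad_w, b - lr * grad_b

mse = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
print(round(w, 2), round(b, 2))  # close to the true values 2 and 1
```

Each pass over the examples nudges the parameters in the direction that reduces the error, which is the essence of how much larger models, including deep networks, are trained.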
There is a huge difference between Artificial Intelligence (AI) and Machine Learning. AI enables machines to perform tasks in ways that involve some intelligence. They are not just programmed to do a single, repetitive motion. They can do more by adapting to different situations. Machine learning is a subset of AI. It employs the concept that we can build machines to process data and learn on their own without our constant supervision.
Machine learning can be used for pattern matching and for problems we know how to solve intuitively, i.e. cases where we cannot write down an explicit series of steps to find the solution. Andrew Ng says that we can automate almost anything that a human being can do in less than one second. ML finds application in various areas:
- Sound processing - speech recognition, text-to-speech, music recognition and generation
- Image processing - object detection and recognition, semantic segmentation, optical character recognition (OCR) including handwriting, artistic style transfer
- Natural language processing - language modelling, helping OCR and speech recognition, natural language understanding, and machine translation
- Reinforcement learning
- Digital advertising
- Ad blocking
Machine learning, however, is not a panacea for all problems. If you cannot provide enough input to the model, for example, you cannot predict future stock exchange prices. You can use ML for algorithmic problems like regex/grammar matching, but the process will be slow and will produce a lot of errors. Every ML system, like any other system, makes errors: humans make errors when they enter input data or program the system, or the data itself is ambiguous. Of course, there are a few workarounds to avoid some errors; for example, you can train the system to detect likely errors and forward them to humans for review.
There are also challenges that are unique to machine learning and artificial intelligence. ML is mostly a black box: unlike ordinary software, you cannot simply read the “source code” of its behaviour. The learnt parameter values are an important part of the model and are crucial to the decisions it makes, yet inspecting them is of little use, and there are millions of them. The system is not easily interpretable; there is no way to debug a single example or guarantee it works. In order to evaluate performance, we should use metrics like accuracy (the fraction of correct predictions) or recall (the fraction of relevant instances that have been retrieved over the total number of relevant instances). Dataset management is also specific: data should be split into three disjoint sets for training, validation and testing.
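The two metrics mentioned are straightforward to compute; here is a small sketch with made-up labels (1 marks the positive class):

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall(y_true, y_pred, positive=1):
    """Fraction of actual positives that the model retrieved."""
    true_pos = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    actual_pos = sum(t == positive for t in y_true)
    return true_pos / actual_pos

y_true = [1, 1, 1, 0, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 0, 0]
print(accuracy(y_true, y_pred), recall(y_true, y_pred))  # 0.75 and 2/3
```

Note how the two numbers diverge: the model looks decent on accuracy but misses a third of the positives, which is why the right metric depends on what an error costs in your application.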
ML systems offer certain advantages over non-machine-learning solutions: they can solve problems other systems are not capable of solving, and they are cost-effective. Keep in mind, though, that it is hard to estimate how much time you will need to build a new feature or product using machine learning, because most of the time it is essentially research. Besides, training a model takes time, and the feedback loop is slower.
Cultivating a Microservice Culture via Tooling
Thomas Vance, Software Engineer at Holiday Extras, presented a microservices use case. Over the last year, the company he works for has moved from a giant PHP application to a Node.js microservice architecture and has built a lot of tooling to help its developers on that journey, much of it based on the Google Cloud Platform (GCP). The tooling they used is not unique to their business, and the reasoning, results and processes are transferable.
The company had to migrate to microservices at a very fast pace. It was therefore important to offer engineers tooling at all stages of the development process, whether starting from scratch or debugging a service running in production. The processes they put in place removed the core teams from the deployment pipeline and allowed developers to focus on building great applications. This took away some freedoms but helped junior engineers get up and running quickly. Thomas explained that Dockyard is the internal code name for everything running on their microservice platform.
Creating something new and getting it to production usually took about two weeks, so the rate at which they wanted to ship code was not sustainable. It also involved many people from different teams using different cloud platforms, AWS and GCP, and everyone was doing it differently. Holiday Extras tackled this problem through tooling.
To streamline the first step, creating a service, they built a command-line tool called Dockyard Create. It sets up the GitHub repositories with the correct user permissions and access groups, continuous integration, deployment pipelines, and staging and production environments hosted on the Google Cloud Platform, among other things. All of this happens in under five minutes instead of two weeks.
The next problem was that engineers and testers still wanted to run services on their laptops, yet those services talked to multiple other services, so that environment had to be emulated. The solution was a tool called Dockyard Local, which runs a Docker container locally for your service and all dependent services. To get every engineer using the tools and to keep them updated, they shipped Node Toolbox, a tooling platform. The key takeaway for them is that automation creates consistency and eases development for engineers.
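Dockyard Local is internal to Holiday Extras, but the core idea, resolve a service's declared dependencies and start a container for each, can be sketched. Everything here is hypothetical: the service names, the registry URL and the flat dependency map are invented, not their actual tooling.

```python
# Hypothetical sketch of a "run my service plus its dependencies" tool.
# Service names, registry and flags are illustrative assumptions.
DEPENDENCIES = {
    "booking-api": ["payments", "availability"],
    "payments": [],
    "availability": [],
}

def docker_commands(service):
    """Build the docker commands for `service` and everything it
    depends on, dependencies first, without executing anything."""
    commands = []
    for dep in DEPENDENCIES.get(service, []):
        commands.extend(docker_commands(dep))
    commands.append(["docker", "run", "-d", "--name", service,
                     f"registry.example.com/{service}:latest"])
    return commands

for cmd in docker_commands("booking-api"):
    print(" ".join(cmd))
```

A real tool would hand these commands to the Docker daemon (and deduplicate shared dependencies); the sketch only shows the dependency-first ordering.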
Thomas explained that the next step after you have a service up and running is to think about how you start writing code and add developers through that process. All services need some core principles and some core pieces of tech, for example, every service requires different metrics, logging, debugging. Holiday Extras had different engineers doing different types of work. Fortunately, the Node Toolbox was born out of this process. The Node Toolbox ships with every service they create and offers some common tooling that every engineer can use. A significant benefit thereof was that abstraction removed the complexity for engineers. Engineers do not have to spend weeks whiteboarding solutions. They can build core things pretty quickly.
Another challenge was managing service dependencies: with 200 services in production, there are some 2,000 dependencies to keep up to date. To automate the process, they used Renovate Bot, which finds outdated dependencies and opens pull requests to update them.
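At its core, what a dependency-update bot does is compare each declared version with the latest available release and flag the stale ones. A minimal sketch of that comparison; the package names and version numbers are made up for illustration:

```python
# Hedged sketch of the core check a bot like Renovate performs:
# which declared dependencies lag behind their latest release?
def parse(version):
    """Turn '4.16.0' into a comparable tuple (4, 16, 0)."""
    return tuple(int(part) for part in version.split("."))

declared = {"express": "4.16.0", "lodash": "4.17.21", "axios": "0.18.0"}
latest   = {"express": "4.18.2", "lodash": "4.17.21", "axios": "1.6.0"}

outdated = {name: (ver, latest[name])
            for name, ver in declared.items()
            if parse(latest[name]) > parse(ver)}
print(outdated)  # packages the bot would open pull requests for
```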
Continuous integration (CI) was also a challenge, since it was very expensive. They had around 150 builds running on CI a day, and they were not quick enough; developers had to wait in queues due to lack of capacity. That is why they built their own CI deployment pipelines. The internal CI tooling runs on Kubernetes on the Google Cloud Platform.
Another problem they tackled was GitHub usage, privacy and the GDPR. They had to lock GitHub down internally and create access groups. To keep things easy for developers, they created another bot called Access Bot: if you send a request to a team on Slack and the team grants it, you are given 24-hour access to their sections of GitHub.
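The interesting part of such a bot is the time-limited grant. A hypothetical sketch of that mechanism in Python; the user and team names are invented, and Access Bot's real implementation is not public:

```python
# Hypothetical sketch of a time-limited access grant, Access Bot style:
# a grant is valid for 24 hours from approval, then silently expires.
from datetime import datetime, timedelta

GRANTS = {}  # (user, team) -> expiry time

def grant_access(user, team, now, hours=24):
    GRANTS[(user, team)] = now + timedelta(hours=hours)

def has_access(user, team, now):
    expiry = GRANTS.get((user, team))
    return expiry is not None and now < expiry

now = datetime(2018, 11, 24, 9, 0)
grant_access("alice", "payments", now)
print(has_access("alice", "payments", now + timedelta(hours=23)))  # True
print(has_access("alice", "payments", now + timedelta(hours=25)))  # False
```

Because nothing has to be revoked manually, access simply stops working once the expiry passes, which keeps the audit story simple.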
Thomas further talked about their usage of Pub/Sub and BigQuery, which they call the Data Platform. It is a set of JSON schemas describing events and messages. Services publish these to the data platform, which routes them through Google Cloud Pub/Sub; other services subscribe to them, which lets systems push messages to one another and talk to each other. They use BigQuery as their main analytics platform.
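The schema-checked-event idea can be illustrated with a tiny validator. This is a sketch under assumptions: the field names and the minimal required-fields check are invented, not Holiday Extras' actual JSON schemas, and a real system would publish the bytes with a Pub/Sub client rather than just returning them.

```python
# Hedged sketch of a schema-checked event on such a data platform.
# Field names and the validator are illustrative assumptions.
import json

BOOKING_EVENT_FIELDS = {"event_type", "booking_id", "timestamp"}

def validate(event):
    """Reject events missing required fields; otherwise return the
    serialised bytes that would be handed to a Pub/Sub publisher."""
    missing = BOOKING_EVENT_FIELDS - event.keys()
    if missing:
        raise ValueError(f"event missing fields: {sorted(missing)}")
    return json.dumps(event).encode()

payload = validate({"event_type": "booking.created",
                    "booking_id": "b-123",
                    "timestamp": "2018-11-24T10:00:00Z"})
print(payload)
```

Validating against a shared schema before publishing is what lets many independent services trust each other's messages downstream, including in BigQuery.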
Finally, one of the main conclusions drawn from the whole process is that automation, abstraction and acknowledgment really matter, as they allow engineers to get up and running quickly.
The Best of Both Worlds: Achieving next-level code sharing between Flutter and the mobile web
Iiro Krankka, Kotlin Google Developer Expert, explained how he had achieved code sharing between Flutter and the mobile web while building a mobile app. A huge movie fan, he constantly used the Finnish mobile app Finnkino to browse movies and buy tickets. However, he was not happy with the app, especially its UI and functionality, so he decided to build his own, which he named inKino. But he never finished it. Starting hobby projects is fun; finishing them, not so much. He made five attempts to create inKino in native Android and every time got carried away or bored, never finishing any of them. Then he stumbled upon a YouTube video about Flutter.
He decided to give it a try and found Flutter awesome: building customised UIs is fast and fun, the APIs are developer-friendly, iteration is fast thanks to hot reload in development mode, and release builds are fast thanks to native ahead-of-time compilation. Furthermore, Flutter UIs are consistent across devices and platforms, because Flutter works like a portable rendering engine for mobile apps; it does not use native or web views. Another benefit is headless widget testing: you can test an infinitely scrollable list without an emulator or simulator. Last but not least, it uses Dart. Dart offers fast iteration on debug builds and precompiled release builds with no interpreter at runtime, and it comes with its own package manager and formatter. It also has cheap object allocation and great VM optimisations. Best of all, Dart and Flutter cooperate well.
Iiro advises developers to use a common language to achieve code sharing. It is all about architecture, layering and decoupling. Business logic should be decoupled from the UI, and the business logic layer should be platform-independent, with no Flutter or web dependencies in it. Decoupling also makes the business logic easy to test. This is good practice even if you are not planning to share code between web and mobile at all. Then pick an architecture and stick with it.
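The layering principle is language-agnostic, so it can be sketched outside Dart as well. In this illustration (the movie-search domain is invented, loosely echoing inKino's subject matter), the business logic class has no UI or framework imports, so any UI layer, or a plain test, can drive it:

```python
# Illustrative sketch of UI/business-logic decoupling: this class knows
# nothing about any UI framework, so the same logic can sit behind a
# mobile app, a web app, or a unit test. The domain is made up.
class MovieSearch:
    def __init__(self, movies):
        self.movies = movies

    def matching(self, query):
        query = query.lower()
        return [m for m in self.movies if query in m.lower()]

# The "UI layer" here is just a direct call; a widget would do the same.
search = MovieSearch(["Bohemian Rhapsody", "Roma", "The Favourite"])
print(search.matching("ro"))
```

Because the class depends on nothing platform-specific, sharing it between targets is a packaging question, not a rewrite.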
Having the best of both worlds means having a native app in the app stores and a progressive web app at the same time, while sharing code between web and mobile without implementing the same logic twice.
Iiro Krankka was the last speaker at the conference.
At DevFest Sofia 2018, we also ran three well-attended workshops:
- Using TensorFlow for Real-Time Object Detection in Android held by Vladislav Donchev, Software Engineer & Development Consultant
- Build a Slack Bot (with Node.js on Kubernetes) held by Philip Yankov, Founder at x8academy
- Firebase Web: How to use the Firebase platform to easily create Web applications held by Angel Georgiev, Technical Training Lead
You can find the conference materials at the following links:
Stay tuned for more news about DevFest 2019 or sign up to receive updates.