Ballerina is a new programming language designed and optimized for integration. It revolutionizes the way you model integration scenarios with graphical and textual syntaxes built on top of the sequence diagram metaphor. It is fully container native and 100% open source.
Ballerina- A programming language for the networked world
1. Chintana Wilamuna
Solutions Architect
April 6, 2017
A programming language for the networked world
Asanka Abeysinghe
Vice President, Solutions Architecture
Future of Enterprise Integration Meetup: Silicon Valley
2. The world is changing …
• Networked interactions are no longer a niche
– The new shared library
– Ability to reuse and recompose is key to agility
– Everything you write integrates with other things
• Configuration over code not workable at scale
• Containers, microservices, micro integrations
4. From data flows to sequence diagrams
• Every ESB technology is based on dataflow
• Not very good at describing the complex multi-party interactions that are now common
• Sequence diagrams to the rescue
– Perfect for describing parallel, coordinated activities of many parties
6. Why stop at using sequence diagrams to describe programs?
8. Ballerina
• General-purpose programming language, but optimized for integration
• Strongly typed, concurrent, with both textual and graphical syntaxes
• Modern, network-aware, data-aware, security-aware programming system inspired by Java, Go, Maven, ...
9. The Ballerina Language
• Graphical/textual parity, with the ability to switch back and forth
• Common integration capabilities are baked into the language:
– Deep HTTP/REST/Swagger alignment
– Connectors for Web APIs and non-HTTP APIs
– Support for JSON, XML, (No)SQL data and mapping
• No magic – no weird syntax exceptions; everything is derived from a few key language concepts
• Maximize developer productivity and abstraction clarity
• Support high-performance implementations: low latency, low memory and fast startup
12. Ballerina knows services (and main)!
• Two models of programming: service and main
• Services are network-invoked collections of entry points
• Main is a regular main()
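As a rough illustration of the two models, here is a hypothetical sketch in the 0.8x-preview syntax of the era. The annotation and library names (`@http:BasePath`, `@http:GET`, `messages:setStringPayload`, `system:println`) changed between preview releases, so treat every identifier below as an assumption rather than the definitive API:

```ballerina
import ballerina.lang.system;
import ballerina.lang.messages;
import ballerina.net.http;

// The "service" model: a network-invoked collection of resources (entry points).
@http:BasePath {value: "/hello"}
service helloService {

    @http:GET {}
    resource sayHello (message m) {
        message response = {};
        messages:setStringPayload(response, "Hello, World!");
        reply response;
    }
}

// The "main" model: an ordinary entry point that runs once and exits.
function main (string[] args) {
    system:println("Hello from main");
}
```

In the preview, the same `ballerina` command drove both modes, which is what the "run once, run one service, run many services" options on the v0.85 slide refer to.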
14. Ballerina knows data!
• XML, JSON and datatable are built-in data types
• All streamed and highly performant
• Data (type) mapping between Ballerina types, XML, JSON and datatables
15. Ballerina knows network APIs!
• Client and server connectors for HTTP 1.1/2, WebSockets, JMS, (S)FTP(S) and more
• Client connectors for BasicAuth, OAuth, AmazonAuth, SOAP
• Client connectors for Web APIs: Twitter, GMail, LinkedIn, Facebook, Lambda Functions, …
16. Ballerina knows (or rather, is) Swagger!
• A Ballerina program's interface (in the HTTP case) is expressed in Swagger
– Text syntax, graphical syntax and Swagger syntax are interchangeable
• Edit the interface anywhere
• No more standard limitations of interface-first design
17. Ballerina knows Docker!
• Built-in build command to create a Docker image with an executable Ballerina program package
• Run on any container management platform
18. Ballerina is highly extensible!
• Code organized into packages like Go
• Repository model similar to Maven/NPM/Go, giving the ability to create an ecosystem of connector contributors
• Recomposable network connectors
19. Ballerina is open source!
• Patent pending technology
• Implementation released under Apache License v2.0
– Fork me on GitHub: https://github.com/ballerinalang/
• Community
– Users: ballerina-user@googlegroups.com
– Slack: #ballerinalang
– Twitter: @ballerinalang
– StackOverflow: #ballerinalang
– Developers: ballerina-dev@googlegroups.com
22. Modularity: Files & Packages
• Package model inspired by Go
– package org.wso2.foo;
• Packages are defined by the set of all files in a directory
• All symbols exposed by any file in the package are referred to with packagename:symbolname
– In other words, file names have no meaning in the namespace of the language
• To use symbols from a package you must import it:
– import org.wso2.foo [as xx];
• Packages that start with ballerina.* and ballerinax.* are reserved
• Packages will be versioned - details later
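The package rules above can be sketched as two files in the preview syntax. The file name `greetings.bal` and the helper `greet` are invented for illustration; only the directory path, not the file name, determines the package:

```ballerina
// File org/wso2/foo/greetings.bal — every file in this directory belongs to the package.
package org.wso2.foo;

function greet () (string) {
    return "Hello from org.wso2.foo";
}
```

```ballerina
// A consumer imports the package and qualifies symbols as packagename:symbolname.
import ballerina.lang.system;
import org.wso2.foo;

function main (string[] args) {
    system:println(foo:greet());
}
```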
24. JSON Support
• Variables of type "json" can hold any JSON value
• Optionally, a JSON Schema can be associated with the declaration:
– json[<json_schema_name>] jsdoc;
– Constrains the value to conform to the schema
• Useful for type mapping
• JSON literals can be used to initialize JSON typed variables
– json address_json = `{"name" : "$name", "streetName" : "${street}"}`;
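Expanding the slide's literal into a minimal runnable sketch (0.8x-preview syntax; the `ballerina.lang.jsons` package and `jsons:toString` function are assumptions about that release's standard library):

```ballerina
import ballerina.lang.system;
import ballerina.lang.jsons;

function main (string[] args) {
    string name = "Alice";
    string street = "Palm Grove";
    // Backtick JSON literal with $variable / ${expression} interpolation, as on the slide.
    json address_json = `{"name" : "$name", "streetName" : "${street}"}`;
    system:println(jsons:toString(address_json));
}
```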
25. XML Support
• Variables of type "xml" can hold any XML element
• Variables of type "xmldocument" can hold any XML document
• Optionally, an XML Schema can be associated to constrain the value space of the XML
– xml[<{xsd_namespace_name}type_name>] e;
• XML literals
– xmlElement address_xml = `<address><name>${name}</name></address>`;
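The XML literal can be sketched the same way. Note the slide declares the literal as `xmlElement` while the bullet text describes the `xml` type; this sketch assumes the `xml` type, and the `ballerina.lang.xmls` package and `xmls:toString` function are assumptions about that preview's standard library:

```ballerina
import ballerina.lang.system;
import ballerina.lang.xmls;

function main (string[] args) {
    string name = "Alice";
    // Backtick XML literal with ${expression} interpolation, as on the slide.
    xml address_xml = `<address><name>${name}</name></address>`;
    system:println(xmls:toString(address_xml));
}
```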
26. Tabular Data Support
• Variables of type "datatable" can hold any tabular data coming from a data source
• Database connectors to query and produce datatables
27. Type Coercion and Conversion
• Lossless type coercions are automatic
– E.g. int -> float
• Other type conversions can be invoked with the cast operator
– TypeT1 v1;
– TypeT2 v2;
– v2 = (TypeT2) v1;
• Users can define their own type mappers which fit into the language type system and get invoked with the cast operator
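Concretely, the TypeT1/TypeT2 pattern above might look like this sketch (0.8x-preview syntax; which casts were built in versus requiring a user-defined type mapper varied by release, so the float-to-int cast here is an assumption):

```ballerina
function main (string[] args) {
    // Lossless coercion is automatic: int -> float needs no cast.
    int count = 7;
    float ratio = count;

    // Lossy or cross-kind conversions use the explicit cast operator.
    float f = 3.0;
    int truncated = (int) f;
}
```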
34. Ballerina is not just the language
• Composer with debugging, type mapping
• IDE plugins
• Docerina
• Testerina
• (Coming soon) Packerina
35. Composer
• Textual, graphical and Swagger editing of Ballerina programs
• Runs in the browser
– Currently not packaged with embedded Chromium; considering it
• Will be hosted in the cloud as well
36. IDE Plugins
• Available (still WIP)
– Atom
– Vim
– Idea
– Sublime Text
– VSCode
– Adobe Brackets
• More to be done!
– Send a pull request ☺
37. Docerina
• Tool to generate API docs for your Ballerina code
• All Ballerina code is documented using Docerina
41. When should I use Ballerina?
• Write integration microservices
– 80-20 rule: if 80% of your service is about integrating with other services, data and APIs, then use Ballerina. If it's just 20%, then use Java / Node / Go / XYZ
• Re-compose existing services to be API backends
• Write integration scripts
– Replacement for shell scripts that use curl too much
44. Available now: Ballerina v0.85 Technology Preview
• Language runtime
– Run once, run one service, run many services (bus)
• Developer tools
– Composer: browser-based graphic/text/Swagger editor and debugger
– IDE plugins: Idea, Atom, Vim, Sublime, VSCode, Brackets
– Testerina: Unit testing framework
– Docerina: API doc generation framework
• Connectors
46. Next Steps
• Download, twirl with it, give us feedback
• Engage with the community
– Users: ballerina-user@googlegroups.com
– Slack: #ballerinalang
– Twitter: @ballerinalang
– StackOverflow: #ballerinalang
– Developers: ballerina-dev@googlegroups.com
52. Ballerina Concepts: Client & Server Connectors
• Ballerina services are attached to network protocols with server connectors
• Ballerina programs interact with network endpoints with client connectors
– Networked endpoints can mean pretty much anything
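A client connector invocation might look like the following hypothetical sketch. The `create` expression and the `http:ClientConnector.get(conn, ...)` action-invocation style follow the 0.8x preview and may differ across releases; the endpoint URL and path are placeholders:

```ballerina
import ballerina.net.http;

function main (string[] args) {
    // A client connector wraps a network endpoint; "actions" are invoked on it.
    http:ClientConnector conn = create http:ClientConnector("http://example.com");
    message req = {};
    message resp = http:ClientConnector.get(conn, "/status", req);
}
```

The server-connector side is the mirror image: a `service` (as on the earlier services slide) is bound to a protocol listener rather than created explicitly in code.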