In the first two articles of this DevOps series, I outlined the myths and common errors surrounding DevOps and the initial steps to building a DevOps pipeline. In this article, I want to briefly examine some best practices through a typical continuous integration pipeline.
In my earlier article, I explored some of the common myths around DevOps. With this base, it's now time to detail how organizations can start to build a DevOps operational model. It's worth mentioning again that there is no single path for a business - every organization needs to find its own journey and adopt the steps in accordance with its own unique circumstances.
So far, the IT community does not have an agreed-upon definition of DevOps. There is, however, clarity about its goal. Put simply, DevOps accelerates IT service delivery, enabled by agile and lean practices. That's it? Yes. It's easy to understand but very hard to execute.
I have seen how Amazon Kinesis Data Firehose supports third-party destinations such as Dynatrace, Datadog, and New Relic, among others. These integrations will let us easily route our log and metric streams to those providers.
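To make this concrete, here is a minimal sketch of wiring a Firehose delivery stream to a third-party HTTP endpoint using boto3. The stream name, role and bucket ARNs, endpoint URL, and access key are all placeholders you would replace with your own values and your provider's intake URL; this is an illustration of the shape of the call, not a production setup.

```python
def build_http_destination_config(endpoint_url: str, access_key: str,
                                  backup_bucket_arn: str, role_arn: str) -> dict:
    """Assemble an HttpEndpointDestinationConfiguration for create_delivery_stream.

    All argument values are placeholders -- use your provider's intake URL
    and the API key it issues, plus ARNs from your own AWS account.
    """
    return {
        "EndpointConfiguration": {
            "Url": endpoint_url,          # provider-specific HTTP intake URL
            "Name": "third-party-destination",
            "AccessKey": access_key,      # API key issued by the provider
        },
        # Back up only records the endpoint rejects ("AllData" keeps everything)
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": role_arn,
            "BucketARN": backup_bucket_arn,
        },
    }


def create_stream(name: str, config: dict) -> None:
    """Create a DirectPut delivery stream that forwards to the HTTP endpoint."""
    import boto3  # imported here so the config helper works without boto3 installed

    client = boto3.client("firehose")
    client.create_delivery_stream(
        DeliveryStreamName=name,
        DeliveryStreamType="DirectPut",
        HttpEndpointDestinationConfiguration=config,
    )
```

Once the stream exists, anything you `PutRecord` into it is batched and delivered to the provider, with failed records landing in the backup S3 bucket.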
Many of us use Amazon Aurora every day on different projects. Without doubt, it is one of the best relational databases currently on the market. However, its pricing is often a bit complex to understand, and it is even more difficult to identify cost-saving strategies.
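A useful first step toward understanding that pricing is to model its three main dimensions: compute, storage, and I/O. The sketch below does exactly that; the rates are illustrative placeholders I chose for the example, not current AWS prices, so always check the Aurora pricing page for your region before relying on any numbers.

```python
# Illustrative per-unit rates (placeholders, NOT current AWS pricing):
INSTANCE_HOURLY = 0.29      # on-demand rate for one DB instance, per hour
STORAGE_GB_MONTH = 0.10     # storage, per GB-month
IO_PER_MILLION = 0.20       # per million I/O requests


def estimate_monthly_cost(instances: int, hours: float,
                          storage_gb: float, io_requests: float) -> float:
    """Sum the three main Aurora cost dimensions: compute, storage, and I/O."""
    compute = instances * hours * INSTANCE_HOURLY
    storage = storage_gb * STORAGE_GB_MONTH
    io = (io_requests / 1_000_000) * IO_PER_MILLION
    return round(compute + storage + io, 2)


# One writer + one reader running all month (~730 h), 200 GB, 50M I/O requests:
print(estimate_monthly_cost(instances=2, hours=730,
                            storage_gb=200, io_requests=50_000_000))
```

Even with made-up rates, a model like this makes the cost levers visible: compute dominates here, which is why strategies such as right-sizing instances or pausing non-production clusters usually save far more than trimming storage.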
In this article, I want to cover some of the features we can use in Azure Pipelines. Azure Pipelines has many task plugins and enough features to handle most common continuous integration/continuous delivery (CI/CD) flows. Note that the YAML syntax for Azure Pipelines is proprietary and can't be used with other CI/CD platforms.
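As a taste of that YAML syntax, here is a minimal `azure-pipelines.yml` sketch for a single-job CI flow. The Node.js build steps are only an example I picked for illustration; swap in whatever build and test commands your project uses.

```yaml
# Minimal azure-pipelines.yml sketch: run tests on every push to main.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Built-in task plugin to install a specific Node.js version
  - task: NodeTool@0
    inputs:
      versionSpec: '18.x'
    displayName: 'Install Node.js'

  # Plain script step: install dependencies and run the test suite
  - script: |
      npm ci
      npm test
    displayName: 'Install dependencies and run tests'

  # Publish results even when tests fail, so the run page shows them
  - task: PublishTestResults@2
    condition: succeededOrFailed()
    inputs:
      testResultsFormat: 'JUnit'
      testResultsFiles: '**/junit.xml'
    displayName: 'Publish test results'
```

The `task:` entries are the plugin mechanism mentioned above, while `script:` steps run arbitrary shell commands, which is usually all a simple CI flow needs.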