Tips&Tricks – TeamCity Blog

TeamCity and NUnit 3: on the Same Wavelength


For the last couple of months we, the TeamCity team, have been working really hard together with the NUnit team, and today we are happy to announce NUnit 3 has been released and is fully integrated with TeamCity out of the box.

Imagine any way to run NUnit tests, and even your wildest solution will most likely work perfectly in TeamCity: you can run NUnit tests from the command line, via the MSBuild task, or via an NAnt project; or you may prefer the NUnit build step to take advantage of the full power of TeamCity – everything is possible now. TeamCity helps you do it on all operating systems, at any time of day or night.

You will see real-time progress during the test session. After the build completes, the Tests view allows you to analyse the build results via a user-friendly UI, where you can not only see the test results but also manage them: mark an investigation as fixed, give up an investigation, re-assign it, or mute a test failure.

For a long time it was necessary to use various tricks to achieve a similar result, and while most of those tricks worked fine for simple cases, it was difficult to understand how to make things work when something went wrong. For example, to run NUnit tests from an MSBuild project in TeamCity, you had to use the special JetBrains NUnit task, which is very similar to the NUnit task from the MSBuild Community Tasks project. Now you are free to use NUnit in TeamCity without any additional tweaks: just use the TeamCity command line runner to achieve your goals:

[Screenshot: NUnit 3 in TeamCity]
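For reference, a minimal command line for such a build step might look like this (the assembly name is a hypothetical placeholder; the --teamcity switch of the NUnit 3 console runner turns on TeamCity service messages so that results are reported on the fly):

nunit3-console.exe MyProject.Tests.dll --teamcity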

Don't hesitate to take advantage of this integration: download NUnit 3 and run your tests under TeamCity!

We are working on a Getting Started guide to enable you to begin using NUnit 3 in TeamCity instantly: we hope it will save your valuable time, because it is our mission to make your work fun and easy.

Happy building and testing!


Brand new TeamCity 9.1.5 is here


Greetings, everyone!

We are happy to announce the TeamCity 9.1.5 update! Have you noticed our new site featuring bright colors and an uncluttered design?

[Image: the new TeamCity icon]

We're so excited about our new logo and icon and hope you are too! But new logo or old, TeamCity is still your 24/7 build engineer and will keep striving to be the best product for you.

Build 37377 fixes over 80 problems, the most important being NuGet 3.2 compatibility fixes and bundled tool updates: TeamCity now comes with the latest IntelliJ IDEA 15.0.x and ReSharper 10.0.2.

Another change concerns the way the Gradle runner accesses TeamCity system properties: the System.properties syntax introduced in TeamCity 9.1.2 is no longer functional; the new recommended way is to reference TeamCity properties in the same way as Gradle properties. See the details in our documentation.
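As a rough illustration (a hedged sketch, not lifted from the documentation: it assumes the runner exposes the TeamCity parameters to the build script as a teamcity property map, and the parameter name is only an example; shown here with the Gradle Kotlin DSL):

// build.gradle.kts – hypothetical example
if (project.hasProperty("teamcity")) {
    @Suppress("UNCHECKED_CAST")
    val teamcity = project.property("teamcity") as Map<String, String>
    // "build.number" is one of the standard TeamCity parameters
    println("Running TeamCity build #${teamcity["build.number"]}")
}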

We greatly appreciate our users reporting problems with TeamCity – fixing them helps us make the product better. But at times the efforts of our users truly exceed our expectations! Just as we were about to release version 9.1.5, we were presented with a very subtle bug in notifications. It had obviously taken our users loads of time to reproduce the problem, so we decided we simply had to fix it as a holiday gift to our users – all thanks to the guys from ok.ru!

Our release notes list all the fixes; check them out and download TeamCity 9.1.5 now!

As usual, you can freely upgrade/downgrade within all TC 9.1.x releases, so check the upgrade notes and don't hesitate to upgrade!

Happy building!

TeamCity Take on Build Pipelines


Although it is possible to set up build pipelines in TeamCity, you will not find the term "build pipeline" in the TeamCity web interface or even in our documentation. The reason is that we have our own term, "build chain", which we feel describes TeamCity's build-pipeline-related features better.

First of all, while the word "pipeline" does not strictly require sequential execution, it still implies it. In TeamCity, on the other hand, a build chain is a DAG – a directed acyclic graph – and as such a traditional build pipeline is just a special case of a TeamCity build chain. But there are other differences as well.

Source code consistency

Source code consistency implies that all builds of a build pipeline will use the same source code revision. Traditionally, when build pipelines are described, not much attention is paid to this problem. After all, if someone needs consistency, it can be achieved by sharing data between builds using a file system or by tagging the source code in the repository and then fetching the same tag in all builds.

But in our experience, in the majority of cases users need consistency for all of the builds participating in the pipeline. For instance, if you build parts of your project on different machines in parallel and then combine them in the last build, you must be sure that all of the parts have used the same source code. In fact, any time you need to combine the results of several parallel builds, you need to be sure all these builds have used the same source code.

You can spend time and achieve this consistency somehow via build scripts, but since many developers face the same problem, this is obviously an area where CI tools should help. TeamCity provides source code consistency for all builds in a chain automatically, even if the build chain uses different repositories and even if the builds use repositories of different types – Git, HG, SVN, Perforce, TFS.

Triggering

We also think that the traditional approach to triggering – when the next build of the pipeline is triggered after the previous one finishes – is limiting, especially in terms of the optimizations a CI system is capable of. If the whole build pipeline is known from the beginning, it is possible to reduce build time by applying some optimizations. But if the builds in the pipeline are triggered one by one, it will always take the same time to finish completely.

For instance, say we have a pipeline C -> B -> A, where C is the first build and A is the last:

[Diagram: the build chain C -> B -> A]

If we trigger the whole pipeline at once, all the builds (C, B, and A) are added to the queue, and if the system already has a finished build that used the same source code revision as build C would, the queued build can be substituted with that finished build, reducing the total build time of the pipeline. This is exactly what TeamCity does for all build chains while they sit in the queue.

But if we trigger the builds one by one – i.e. C first, then upon finishing C we start B, and so on – there are far fewer opportunities for a CI system to optimize anything. If something triggered C, the CI system must obey, because this is how the user configured it. Even if there are finished builds identical to C (run on the same source revision), at this point the CI system cannot avoid running C, because upon finishing C, build B and eventually A must be triggered as well. If we decided not to run C, we would not run the whole pipeline, which is definitely not what the user expects.

This is why we always recommend avoiding triggering builds one by one. If the build chain is fully automated (which is what we all try to achieve these days), then ideally there should be only one trigger – on the very last build of the chain (in TeamCity terms, the top build, build A in the example above) – which will trigger all the builds down the pipeline. Fortunately, TeamCity triggers are quite advanced: you can configure them to trigger the whole chain if a change is detected in any part of it. This has an important benefit: the set of triggers that you need to maintain can be drastically decreased.

According to the data collected from our own build server, due to this and other optimizations performed in the build queue, TeamCity greatly reduces the amount of work performed by agents daily:
[Chart: build queue optimization statistics from the JetBrains build server]

Note that our server produces about 2500-3000 build hours per day, so if there were no optimizations like this, we’d have to add more agents, or our builds would be delayed significantly.

Data sharing

Another important aspect is how you pass data from build to build in a build chain. Obviously, you can use artifacts for this task: the previous build produces artifacts and the next one uses them as input. But note that in many cases you cannot rely on the next build being executed on the same machine as the previous one. And even if it is, a few other builds could be executed on that machine in between and remove the results you wanted to share. Fortunately, publishing artifacts to TeamCity and then using artifact dependencies to retrieve them solves all these problems.

Besides, in TeamCity you can also share parameters between builds. All builds in TeamCity publish their parameters upon finishing: system properties, environment variables, the parameters set in their build configuration at the moment of the build start, the parameters produced by the build with the help of service messages, etc. All of them can be used in the subsequent builds of the build chain. For instance, this makes it possible to use the same build number for the whole build chain.
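For example, a downstream build can reuse the build number of the build it depends on by putting a dependency parameter reference into its Build number format (the build configuration ID below is a hypothetical placeholder):

%dep.MyProject_Compile.build.number%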

Find out more on sharing parameters.

Build pipelines view

The traditional approach to build pipelines implies that there is a dashboard showing all pipelines and the changes that triggered them. Given that some of our customers have build chains consisting of hundreds of builds (we've seen chains with up to 400 builds), and that in large projects there can be hundreds or even thousands of source code changes per day, it is obvious that a simple dashboard will not work.

TeamCity has the Build Chains tab on the build configuration level displaying an aggregated view of all of the chains where builds of this build configuration participated. It can surely be used as a dashboard to some extent, but with large build chains it quickly becomes unusable.

[Screenshot: the Build Chains tab]

Fortunately in TeamCity each build of the chain also shows the state of all of the builds it depends on. So by opening the build results of the last build in the chain (the top build) you can see the state of the whole chain: which builds are running, which builds failed, which artifacts were produced by each build, which tests failed for the whole chain, etc.

[Screenshot: dependencies on the build results page]

Summary

Hopefully this article sheds some light on how build pipelines can be configured and used in TeamCity and why they work this way. To sum up:

  • a build pipeline in TeamCity is called a build chain
  • source code synchronization in TeamCity comes for free; you don't need to do anything to achieve it
  • the data from build to build can be passed with the help of artifacts or parameters
  • if possible, avoid one-by-one triggering: the bigger the parts you trigger, the better
  • monitor the build chain results using the results page of the last build in the chain or using the Build Chains tab

TeamCity brings Azure Resource Manager support


[Image: the Azure Resource Manager logo]

Microsoft has recently introduced a new way to deploy cloud resources, so there are currently two Azure deployment models, supported in TeamCity by two separate plugins.

The earlier Azure deployment model is now in feature freeze and is called Azure Classic. The same goes for the existing TeamCity Azure support, now called the TeamCity Azure Classic plugin.

Azure Resource Manager is the new Azure deployment model; it offers some advanced features and is becoming increasingly popular. More and more users want to deploy cloud agents with it, so today we are glad to announce a new plugin: the Azure Resource Manager plugin for TeamCity.

Plugin Description

The plugin for TeamCity starts new cloud build agents from a generalized VHD image when all of the regular build agents are busy. It uses resource groups to deploy build agent resources and supports authentication with an Azure Active Directory (AD) application.

While starting a new virtual machine, the plugin creates a new resource group and deploys the required resources into it. When a cloud agent is terminated, the resource group with the allocated resources is removed.

The latest plugin version is available for download to TeamCity 9.1.x users as a separate plugin which can be installed manually.

Azure Authentication Settings

The plugin accesses the Azure Resource Manager by leveraging an Azure AD application. To create a new application, follow the steps described in the Azure documentation. Then open the new portal, navigate to Subscriptions, select the target subscription, and assign the Contributor role to your Azure AD application.

Note: To find your application, type the application name in the users search box, because originally that list contains only usernames, not applications.

Virtual Machines Preparation

The plugin supports generalized VHD images to start new TeamCity cloud build agents. Perform the steps below to prepare VHD images.

New Image Creation

Create a new virtual machine with the Resource Manager via the new portal or the CLI tools, and configure it properly. Then generalize the machine according to the Azure documentation for Linux and Windows virtual machines. As a result, you will get a VHD image in your blob storage.
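For reference, on a Linux VM the generalization step usually boils down to deprovisioning the Azure agent inside the machine before capturing it (treat this as a rough sketch and double-check the Azure documentation linked above, as the exact steps may change):

# run inside the VM as the last step before capturing the image
sudo waagent -deprovision+user -force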

Migrating Specialized Classic VM Image

If you previously used a specialized image or the Start/Stop behaviour for a cloud build agent, you must capture it now. To do that, prepare the virtual machine by removing temporary files, and then capture a new image. Please refer to the Azure documentation for Linux and Windows machines. Then follow the steps below to migrate the resulting generalized classic VM image.

Migrating Generalized Classic VM Image

If you already have a generalized VHD image located in the classic storage account, just copy it to a storage account created with the Resource Manager. To do that, use Microsoft Azure Storage Explorer.

Network Settings

To start new virtual machines, ensure that you have a virtual network and a subnet created in the target Azure region. If you already have virtual machines deployed with the Resource Manager, you can reuse their network; otherwise you must create a new one and configure it as follows:

  1. Open the Virtual Network creation widget on the new portal and press Create.
  2. Fill in the settings and select the Resource Group located in your target Azure region.
  3. Press Create and wait for the deployment completion.

Note: To access your virtual machines by a public IP address, please ensure that you have properly configured security group rules for your Subnet.

Configuring Agent Cloud Profile in TeamCity

Open the TeamCity Agent Cloud configuration and create a new Azure Resource Manager profile. Fill in the AD application credentials from the classic portal: copy the Tenant ID and Client ID values, and create a new client secret. Then select your subscription and the target region where new TeamCity build agent resources will be deployed:

[Screenshot: Azure Resource Manager cloud profile credentials]

Configuring Build Agent Images

Then add a new agent image in the cloud profile by pressing the Add image button and set configuration parameters:

[Screenshot: the build agent image settings]

Source image is the VHD blob URL of a generalized virtual machine. Please note that currently the Azure Resource Manager allows creating OS disk images only in the original storage account where the source VHD image is stored. More details are available in the Azure forum request.

Name prefix will be used as a prefix for the resource group name. Note that you must use unique prefixes within your subscription.

Network settings allow using the previously created virtual networks in your target location and, optionally, creating a public IP address for build agent machines.

After the image is added and the cloud profile is saved, TeamCity does a test start of virtual agents to gather the agent configuration. Once the agents are connected, TeamCity stores their parameters to be able to correctly process build configuration-to-agent compatibility.

Feedback

You are welcome to try the plugin for TeamCity, and your feedback will be really appreciated, so please share your thoughts in the comments to this blog post, on the forum, or in the issue tracker.

Happy building in the cloud!

TeamCity 9.1.7 Update


Greetings, everyone!

It’s been a while since our latest update, so today we are announcing TeamCity 9.1.7.

Build 37573 is a bugfix release addressing over 90 issues; see the full list in our Release notes.

As usual, TeamCity 9.1.7, build 37573, uses the same data structure as the other 9.1.x builds, enabling you to freely upgrade/downgrade within the 9.1.x releases, so there is no need to hesitate: download TeamCity 9.1.7 and upgrade.

Most likely this is the last update for TeamCity 9.1.

We are working on the next EAP for TeamCity 10, so watch for the new build!

Happy building!

TeamCity 10 EAP 2 is out today


Greetings, everyone!

We are proud to announce that TeamCity 10.0 EAP 2 build 41463 has just been released!

One of the most exciting features of this EAP is the ability to create TeamCity projects and build configurations programmatically, using a DSL based on the Kotlin language. You can now define TeamCity project settings in Kotlin, check them into your version control, and TeamCity will execute your Kotlin script and produce projects and build configurations with the desired settings. See our Release notes for more information.

This EAP build also comes with an improved algorithm for VCS change polling. As you know, TeamCity polls repositories for changes at the configured interval. Although this approach is quite reliable and easy to maintain, in the case of large installations with thousands of VCS roots it imposes a great load on both the TeamCity server and the VCS repository servers. To avoid this, you can set up a commit hook, but you would still need to manually adjust the checking-for-changes intervals in VCS roots and constantly monitor that the hook works. Fortunately, starting with this EAP, TeamCity increases the checking-for-changes interval automatically once it determines that changes are being detected as a result of a commit hook. If TeamCity detects a non-working commit hook, the checking-for-changes interval is decreased again. Our Release Notes provide the details.

Among other things, the latest TeamCity version is more pleasant to look at, as it comes with refreshed icons in the web UI! All the more reason to check out the full list of this EAP's great features and give this version a test drive: download build 41463 and don't hesitate to install it on a trial server.

All our EAP releases come with a  60-day Enterprise evaluation license, and we hope you’ll have something to say about the new version once you have tried this build.

Happy building!

Ten Years After: Welcome TeamCity 10!


Greetings, everyone!

Our team is excited to announce the arrival of TeamCity 10.0, the jubilee version of our continuous integration and deployment server!

The number ten is regarded as the most perfect of numbers, as it contains the life and the nothing, the Unit that did it all, and the zero, symbol of the matter and the Chaos, of which all came out; it represents a new beginning at a higher level.

[Image: TeamCity 10]

We literally follow the symbolism: TeamCity 1.0 saw the light of day in 2006, and 10 years (and, by the way, over 16,000 bugfixes!) later, TeamCity 10.0 is here! This TeamCity version comes with advanced features revolutionizing the way you create your projects and builds, making your TeamCity installation more scalable, and expanding your horizons with new and improved integrations, all packed into a revised UI.

While so many of our users love the simplicity and ease of configuring projects and builds via the UI, for repetitive tasks like copying projects or build configurations the programmatic approach is by far more efficient. With the TeamCity 10.0 Kotlin-based DSL you'll enter a new world of creating and maintaining your build infrastructure, dynamically applying all your changes to projects and builds and skipping the need to use the UI.

In this release another step towards scalability has been made: you can now set up a two-node TeamCity installation, distributing the workload between two servers.

Issue tracker integration has been improved: TFS, GitHub, and Bitbucket are supported out of the box.

We've also made significant VCS-related improvements: cross-platform support for TFS is finally here, TeamCity now works with Perforce streams, and more!

This is just a teaser – for the full list of features see our What’s New  and Release Notes, and download TeamCity 10.0! Remember to check the Upgrade Notes before you install the new version!

Register for our webinar to see the new TeamCity 10.0 features in action!

Happy upgrading and building!

P.S. We've also adopted a tradition of giving a decennial retrospective of the TeamCity UI, so take a look at our site to see how pretty our GUI has grown to be!

Installing GitHub Webhooks from TeamCity


Greetings, everyone!

Since TeamCity 10.0, it has been possible to use commit hooks with a TeamCity server. Now, when a VCS change is detected via a commit hook, TeamCity automatically increases the VCS repository polling interval, reducing the load on both the TeamCity server and the VCS repository. And obviously, the presence of the hook greatly decreases the time needed to detect a change.

Unfortunately, the installation and configuration of commit hooks is not an easy task, and for on-premises VCS repositories it requires administration skills. At the same time, popular VCS hosting services such as GitHub support installation of commit hooks via their REST API, so installing commit hooks for them can be a lot simpler, provided that we use such an API from TeamCity…

So today, we’d like to announce one more TeamCity plugin whose task is to install and maintain GitHub commit hooks (do not worry, we plan to add support for other VCS hosting services too).

The plugin supports both GitHub.com and GitHub Enterprise. The plugin does not install a webhook automatically for GitHub.com, because a webhook requires a connection from GitHub.com to the TeamCity server, and in the majority of cases, when TeamCity is installed on an intranet, such a connection is blocked by a firewall. In the case of GitHub Enterprise, the plugin will install a webhook automatically for any TeamCity project created from a URL or via the GitHub integration.

The plugin works with the GitHub REST API and has to make API calls to GitHub on behalf of the current user, so it requires a GitHub connection configured in the project or its parent.

The plugin is quite simple, basically it does three things:

  • It shows a suggestion to install a GitHub webhook if it finds a GitHub repository in a project without such a webhook: [Screenshot: webhook suggestion]
  • It provides a new action in the project Actions menu, enabling you to install or reinstall a webhook at any time: [Screenshot: the project Actions menu]
  • It checks the status of all the installed webhooks and raises a warning via the health report if a problem is detected: [Screenshot: webhook problem in the health report]

The plugin is open source, distributed under the Apache 2.0 license. Most of the code is written in Kotlin. The source code is published on GitHub.

The plugin relies on the new TeamCity API and will only work with TeamCity 10.0 and later. Download the plugin, install it on your TeamCity server, and give it a try. We'll appreciate your feedback!

Happy building!


TeamCity 10 GitHub-Related Improvements


Configuring TeamCity to use GitHub as the source code repository has always been easy, especially since the 'create from URL' feature was first introduced. TeamCity 10 has brought a number of improvements related to the integration with GitHub which are worth a special mention.

Project-level GitHub Connection in TeamCity

We’ve been gradually improving the integration, and in TeamCity 10 you can now set up a connection to GitHub on the project level:

[Screenshot: a project-level GitHub connection]

Once the connection is set up, you can create a project or a build configuration pointing to a GitHub repository:

[Screenshot: the list of GitHub repositories available via the connection]

Besides, with the connection configured, a small GitHub icon becomes active in several places where a repository URL can be specified: create project from URL, create Git VCS root, create VCS root from URL, and the GitHub issue tracker:

[Screenshot: creating a VCS root from a URL]

On clicking the icon, TeamCity will list GitHub repositories available to the user.

Configuring a GitHub connection is useful for the organization administrator, who can create a parent project and configure a connection to GitHub there once; all the TeamCity users of the organization will then see a list of GitHub repository URLs in the TeamCity web UI. This makes setting up a subproject, a Git VCS root, or a GitHub issue tracker extremely easy.

GitHub Issue Tracker Integration in TeamCity

Built-in integration with the GitHub issue tracker was also introduced in TeamCity 10. It is configured on a dedicated page of the project settings. If a project-level connection to GitHub is configured, you can simply select the repository URL from the list available on clicking the GitHub icon:

[Screenshot: GitHub issue tracker settings]

If no connection is configured, the URL can be specified manually.

The rest is easy: select the type of authentication, provide the required information, and tell TeamCity which strings should be recognized as references to issues. For GitHub, the regex syntax is used, e.g. #(\d+). TeamCity will resolve the issue number mentioned in a VCS comment and display a link to this issue in the web UI (e.g. on the Changes page and on the Issues tab of the build results page).

TeamCity Build Status in GitHub

Developers find it handy to view the status of their commits right in the repository. Earlier, several external plugins allowed publishing the TeamCity build status on GitHub; in TeamCity 10 we deliver a bundled build feature, Commit Status Publisher, which automatically attaches the build status to your GitHub pull requests:

[Screenshot: the Commit Status Publisher build feature]

Besides, using the TeamCity REST API, you can publish the status icon of your TeamCity build in the README for your repository:

[Image: a TeamCity build status icon in a README]
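For illustration, such a README entry might look like this (a hedged sketch: the server URL and build configuration ID are placeholders, and the statusIcon endpoint is part of the TeamCity REST API; for the icon to be visible to anonymous visitors, guest access typically has to be enabled on the server):

![Build status](https://teamcity.example.com/app/rest/builds/buildType:(id:MyProject_Build)/statusIcon)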

TeamCity Commit Hooks

All the above-mentioned improvements are bundled with TeamCity 10, but we should also mention the TeamCity Commit Hooks plugin, which is not bundled with TeamCity yet. This plugin is compatible with TeamCity 10 and later. Its task is to install webhooks for the GitHub repositories specified in TeamCity VCS roots.
Installed GitHub webhooks greatly decrease the time required for TeamCity to detect a change. As an additional benefit, webhooks reduce the load on both the TeamCity server and the GitHub server.

 

Take advantage of the improved TeamCity-GitHub integration and let these features make your experience of Continuous Integration and Delivery with GitHub and TeamCity nice and smooth.

Build, test and deploy .NET Core projects with TeamCity


[Image: TeamCity and .NET Core]

The .NET Core development platform is the next wave of .NET ecosystem evolution, bringing cross-platform targets to your existing or new products. Microsoft ships .NET Core SDK packages with a runtime for supported platforms and command line tools (the .NET CLI) to build, test, and pack such projects.

The .NET Core support plugin for TeamCity aims to bring streamlined support for the .NET Core CLI by providing .NET Core build steps, CLI detection on the build agents, and auto-discovery of build steps in your repository.

Installation

The plugin is available for download from the TeamCity plugin gallery and can be installed as an additional TeamCity plugin. Next, the .NET Core SDK has to be installed on your build agent machines. The plugin uses the PATH environment variable to determine the .NET CLI tools location, but you can override it by specifying the DOTNET_HOME environment variable, for example: DOTNET_HOME=C:\Program Files\dotnet\

You can check that the plugin has been properly initialized by navigating to the build agent parameters:

[Screenshot: .NET Core parameters of a build agent]

Building and testing .NET Core projects

The .NET Core plugin supports auto-discovery of build steps, and usually all you need is to use the Create project from URL wizard:

[Screenshot: auto-discovered .NET Core build steps]

If you want to tune some parameters in your build steps, the .NET Core plugin provides details about configuration options and hints at their possible values:

[Screenshot: .NET Core build step settings]

Specify project build version

While building and creating project packages, the .NET CLI tools use the version number defined in the project.json file. If you want to provide a custom build number, you can update your project.json by leveraging the File Content Replacer build feature as follows:

[Screenshot: the File Content Replacer build feature]

Here you need to specify the target project.json files to process, fill in (\{[^{]*\"version\"\s*:\s*\")([^\"]+)(\") in the Find what field and $1%build.number%$3 in the Replace with field. Then, right after the VCS checkout, the File Content Replacer will replace the version value with the TeamCity build number (%build.number%).
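To illustrate what the regular expression matches, here is a hedged sketch of a minimal project.json fragment; after the replacement, the value of the version field becomes the TeamCity %build.number%:

{
  "version": "1.0.0",
  "dependencies": { },
  "frameworks": { "netcoreapp1.0": { } }
}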

Note: Currently the .NET CLI tools support projects with a project.json-based structure, but upcoming versions will rely on the new csproj structure.

Testing .NET Core projects

The xUnit framework provides a dotnet test runner which reports information about test results on the fly:

[Screenshot: .NET Core test results in TeamCity]

The NUnit test runner does not support this out of the box, but you can vote for the corresponding issue. If you need support for any other test framework, please leave a comment in the related issue.

Deploying .NET Core projects

.NET Core libraries

Your projects can contain a number of shared libraries which can be reused in dependent project builds. Instead of artifact dependencies, it is recommended that you use the built-in TeamCity NuGet feed. In this case, the specified build artifact paths need to include the produced NuGet packages.
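For example, an artifact path pattern like the following would publish all produced packages (a sketch; the exact path depends on where dotnet pack places the packages in your project):

**/*.nupkg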

Tip: You can use the NuGet Dependency Trigger to start a build when a package has been updated in a NuGet feed.

Publishing ASP.NET Core projects

Such projects can be deployed to a wide range of platforms. As a result of executing the dotnet publish build step, you will receive the published package data in your file system. Then you need to decide how to deploy it to the target system.

For Windows servers, the Web Deploy utility can be used. Alternatively, you can deploy the package with the auto-generated %profile%-publish.ps1 or default-publish.ps1 PowerShell scripts. More details on how to provide proper arguments to these scripts can be found in the Web Publishing documentation.

For other server platforms, you can publish the package by leveraging the TeamCity Deployer build steps, such as FTP, SSH, or SMB.

Feedback

You are welcome to try the plugin for TeamCity, and your feedback will be really appreciated, so please share your thoughts in the comments to this blog post, on the forum, or in the issue tracker.

Happy building modern .NET Core projects!

Kotlin Configuration Scripts: An Introduction


This is part one of a five-part series on working with Kotlin to create Configuration Scripts for TeamCity.

  1. An Introduction to Configuration Scripts
  2. Working with Configuration Scripts
  3. Creating Configuration Scripts dynamically
  4. Extending the TeamCity DSL
  5. Testing Configuration Scripts

With TeamCity 10, we introduced the ability to define configurations using Kotlin, as opposed to using XML or the user interface.

This provides several benefits, amongst which are:

  • A human-readable and editable configuration script, not relying on the user interface
  • Ability to do diffs on changes to a configuration script
  • Ability to check in your configuration scripts alongside your source code [1]
  • Ability to dynamically create configuration scripts, as well as test them

Why Kotlin?

You might ask why Kotlin and not some other language. For those not familiar with it, Kotlin is a statically-typed language developed by us at JetBrains and open sourced under the Apache 2 license. It targets the JVM and JavaScript (and we also have native support in the works). It's designed to be a pragmatic language that cuts down boilerplate code, remains expressive, and facilitates tooling. Kotlin provides a series of features that allow the creation of DSLs (Domain Specific Languages), and TeamCity is a perfect fit for this purpose. Kotlin enables the benefits we've outlined, as we'll see.

Why not something more common such as YAML? Based on our experience, we believe that while YAML is great for simpler setups, at some point it falls short and can lose clarity when it comes to defining more complex configuration scripts. We wanted to provide a solution that works for the simplest to the most demanding of scenarios, and that's why we've gone with Kotlin. It's important to understand, though, that we're not providing just a DSL for creating configuration scripts. Kotlin is a programming language, and as such we have the ability to write any kind of code in our configuration scripts (which, of course, like anything, can also be abused). This enables many scenarios, such as those we'll see when creating dynamic configurations.

What is needed

Given that Kotlin is a proper language, you might be wondering what tooling is required to edit configuration scripts. In principle, any editor will do. Kotlin is built so that it can be used with any editor or IDE. In fact, we ship support not only for IntelliJ IDEA (both Ultimate and the free OSS Community edition), but also for Eclipse and NetBeans. If Vim or another editor is your thing, you can also use that along with the command line compiler.

For the purpose of this blog post series, we’ll be using IntelliJ IDEA.

Creating a first script

While we can start with a completely blank script, the easiest way to create a Kotlin build script is to take an existing TeamCity project and either

  • Download it as a Kotlin Script
  • Enable Versioned Settings

The first is a great way to start playing with Kotlin configuration scripts and get a feel for them. The second option not only provides us with the configuration scripts checked into source control, but is actually a required step for using Kotlin build scripts in production.

The recommended approach is to use the second option once we’re ready to enable Kotlin scripting in our projects. The first option, as mentioned, is great for discovering how things work and getting familiar with Kotlin configuration scripts.

Download Settings as a Kotlin Script

To see what a build configuration looks like in Kotlin, simply click Edit Project Settings on the selected project, open the Actions menu, and pick the Download Configuration Script in Kotlin format entry.

[Screenshot: downloading the configuration script in Kotlin format]

This downloads a zip file that contains a Maven project which can be opened in IntelliJ IDEA. Note that the folder it contains is prefixed with a dot, which means it is hidden on macOS. The idea is that this folder can essentially be placed in version control at some point (TeamCity marks its configuration files as hidden).

Enable Versioned Settings

The second option, which is required for us to actually use Kotlin configuration scripts, is to enable Versioned Settings. This is done under Versioned Settings by selecting Enable and choosing Kotlin as the file format.

[Screenshot: enabling Versioned Settings]

 

As soon as we activate this, the TeamCity UI will no longer allow any changes: all the configuration is stored in version control, and modifications have to take place via the build script.

These script files have the same folder layout as that of the downloaded zip file (the first option). For instance, the following are the files checked in to version control for the Joda-Time project. We can see that once again it is a Maven project containing a series of Kotlin files (.kt).

Opening the script in IntelliJ IDEA

Once we have the configuration script (be it from the downloaded zip file or checked into version control with versioned settings enabled), we can open it in IntelliJ IDEA, and Maven will pull down the correct dependencies. Our project layout should look like the one below:

[Screenshot: the project structure in IntelliJ IDEA]

Don’t feel overwhelmed by the number of files in there. In fact, TeamCity really only needs one file, which is settings.kts. Let’s examine each file in more detail.

settings.kts

kts is a Kotlin Script file, different from a Kotlin file (.kt) in that it can be run as a script. As mentioned above, all the code relevant to the TeamCity configuration could be stored in this single file, but it is divided into several files to provide a better separation of concerns.

This file usually contains two lines:

[Screenshot: settings.kts]

version indicates the TeamCity version, and project() is the main entry point to the configuration script. It is a function call which takes as a parameter a Project object representing the entire TeamCity project.
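As a rough sketch, the two lines look something like this (the package name GitExtensions comes from the sample project used here, and the exact version string depends on your server):

version = "10.0"
project(GitExtensions.Project)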

The parameter GitExtensions.Project is an object in Kotlin. Think of objects in Kotlin as you would in JavaScript, or, to compare it to Java, as a singleton – a single instance of a Kotlin class. While a Kotlin script could work with singletons, having first-class support for objects makes things much easier.

In this case, Project is the name of the actual object, and GitExtensions is the package name.

Project.kt

This file contains the GitExtensions.Project object and is where all the magic happens. If you look at the script layout, it is basically a code representation of the build steps we are accustomed to seeing in the TeamCity user interface

[Screenshot: Project.kt]

which would correspond to the following entries in the UI, in addition to VCS roots and Build Types.

[Screenshot: the corresponding settings in the TeamCity UI]

GitExtensions_HttpsGithubComJetbrainsGitextensions.kt

This object defines the VCS root configuration. It's important to note that we could have placed all this information directly in the code above, as opposed to making the call vcsRoot(GitExtensions_…). However, the advantage of doing it this way, as we'll see, is not only that it makes the code cleaner and separates concerns, but also that it provides reusability.

[Screenshot: the VCS root settings]

GitExtensions_Main.kt

Finally, we have the actual meat of where the build happens. This object defines the build steps, failure conditions, and everything else you'd expect of a build configuration in TeamCity:

[Screenshot: the build configuration settings]

Summary

In this first part we’ve seen the components of a TeamCity configuration script. In the next part we’ll dive a little deeper into the DSL, modify the script, and see some of the benefits that Kotlin and IntelliJ IDEA already start providing us in terms of guidance via code assistants.

[1] While storing settings in version control has been available since version 9, it was only available in XML format. TeamCity 10 brings Kotlin to this.

Kotlin Configuration Scripts: Working with Configuration Scripts


This is part two of the five-part series on working with Kotlin to create Configuration Scripts for TeamCity.

  1. An Introduction to Configuration Scripts
  2. Working with Configuration Scripts
  3. Creating Configuration Scripts dynamically
  4. Extending the TeamCity DSL
  5. Testing Configuration Scripts

In the first part we saw the basics of Configuration Scripts and how to get started. Now we’ll dive a little deeper into TeamCity’s DSL and see what it provides us in terms of building configuration scripts.

Examining the DSL

The DSL comes in a series of packages with the main one being the actual DSL which is contained in the configs-dsl-kotlin-{version}.jar file. Examining it, we see that it has a series of classes that describe pretty much the entire TeamCity user interface.

[Screenshot: the main DSL package]

At any time you can navigate to a definition using, for instance, IntelliJ IDEA, and it will prompt you to download sources or decompile. The sources are available from the actual TeamCity server your pom.xml file points to, and they're a great way to see the full potential of the DSL.

The entry point to a project, as we saw in the previous post, is the Project object. From here we can define all the settings such as the basic project properties, the VCS roots, build steps, etc. There are a few basic parameters that should be set:

  • uuid: the internal ID that TeamCity maintains, unique across the server. It's not recommended to change this value, as TeamCity uses it internally to associate data with the project.
  • extId: this is the user-friendly ID used in URLs, in the UI, etc. and can be changed if required.
  • parentId: the extId of the project this project belongs to, defaulting to the value _Root for top-level projects.
  • name: the name of the project

and optionally

  • description: description for the project

Beyond the above, everything else is pretty much optional. Of course, if that’s the only thing we define, then not much is going to happen during the build process.
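Putting those parameters together, a minimal project definition looks roughly like this (the uuid, IDs, name, and description below are hypothetical placeholders):

object Project : Project({
    uuid = "05acd964-b90f-4493-aa09-c015a8c9561e"  // internal ID, unique across the server
    extId = "MyProject"                            // user-friendly ID used in URLs and the UI
    parentId = "_Root"                             // a top-level project
    name = "My Project"
    description = "Sample project defined in Kotlin"
})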

Modifying the build process

The code below is an excerpt from the configuration script for the Spek project (parts of the configuration are omitted for brevity). This particular build compiles the code and runs some tests using Gradle.

steps {
   gradle {
       name = "Snapshot Build"
       tasks = "clean jar test"
       jdkHome = "%env.JDK_18_x64%"
   }
}


triggers {
   vcs {
       branchFilter = "+:<default>"
       perCheckinTriggering = true
       groupCheckinsByCommitter = true
   }
}

Let's extend this build to add a feature that TeamCity has, Build Files Cleaner, also known as Swabra. This build feature makes sure that files left over by the previous build are removed before running new builds.

We can add it using the features function. As we start to type, we can see that the IDE provides us with completion:

[Screenshot: code completion for the features function]

The features function takes in turn a series of feature functions, each of which adds a particular feature. In our case, the code we’re looking for is

features {
   feature {
       type = "swabra"
   }
}

The resulting code should look like

steps {
   gradle {
       name = "Snapshot Build"
       tasks = "clean jar test"
       useGradleWrapper = true
       enableStacktrace = true
       jdkHome = "%env.JDK_18_x64%"
   }
}

triggers {
   vcs {
       branchFilter = "+:<default>"
       perCheckinTriggering = true
       groupCheckinsByCommitter = true
   }
}

features {
   feature {
       type = "swabra"
   }
}

And this works well. The problem is that, if we want to have this feature for every build configuration, we’re going to end up repeating the code. Let’s refactor it to a better solution.

Refactoring the DSL

What we’d ideally like is to have every build configuration automatically have the Build Files Cleaner feature without having to manually add it. In order to do this, we could introduce a function that wraps every build type with this feature. In essence, instead of having the Project call

buildType(Spek_Documentation)
buildType(Spek_Publish)
buildType(Spek_BuildAndTests)

we would have it call

buildType(cleanFiles(Spek_Documentation))
buildType(cleanFiles(Spek_Publish))
buildType(cleanFiles(Spek_BuildAndTests))

For this to work, we’d need to create the following function

fun cleanFiles(buildType: BuildType): BuildType {
   buildType.features {
       feature {
           type = "swabra"
       }
   }
   return buildType
}

This function essentially takes a build type, adds a feature to it, and returns the build type. Given that Kotlin allows top-level functions (i.e. no objects or classes are required to host a function), we can place it anywhere, or create a specific file to hold it.

Now let’s extend this function to allow us to pass parameters to our feature, such as the rules to use when cleaning files.

fun cleanFiles(buildType: BuildType, rules: List<String>): BuildType {
   buildType.features {
       feature {
           type = "swabra"
           param("swabra.rules", rules.joinToString("\n"))
       }
   }
   return buildType
}

We pass in a list of rules, which is joined and passed as a parameter to the feature. The joinToString function allows us to concatenate a list of strings using a specific separator, in our case a newline.

We can improve the code a little so that it only adds the feature if it doesn’t already exist:

fun cleanFiles(buildType: BuildType, rules: List<String>): BuildType {
   if (buildType.features.items.find { it.type == "swabra" } == null) {
       buildType.features {
           feature {
               type = "swabra"
               param("swabra.rules", rules.joinToString("\n"))
           }
       }
   }
   return buildType
}

Generalizing feature wrappers

The above function is great in that it allows us to add a specific feature to all build configurations. What if we wanted to generalize this so that we could define the feature ourselves? We can do that by passing a block of code to our cleanFiles function, which we’ll also rename to something more generic.

fun wrapWithFeature(buildType: BuildType, featureBlock: BuildFeatures.() -> Unit): BuildType {
   buildType.features {
       featureBlock()
   }
   return buildType
}

What we're doing here is creating what's known as a higher-order function: a function that takes another function as a parameter. In fact, this is exactly what features, feature, and many of the other TeamCity DSL functions are.

One particular thing about this function, however, is that it takes a special kind of function as a parameter: an extension function in Kotlin. When passing in this type of parameter, we refer to it as a lambda with receiver (i.e. there is a receiver object that the function is applied on).

This then allows us to make calls to this function in a nice way, referencing feature directly

buildType(wrapWithFeature(Spek_BuildAndTests, {
   feature {
       type = "swabra"
   }
}))

Summary

In this post we've seen how we can modify TeamCity configuration scripts using the extensive Kotlin-based DSL. What we really have in our hands is a full programming language, along with all the features and power that it provides. We can encapsulate functionality in functions to reuse it, and we can use higher-order functions, as well as other things that open up many possibilities.

In the next part we’ll see how to use some of this to create scripts dynamically.

 

 

Kotlin Configuration Scripts: Creating Configuration Scripts Dynamically


This is part three of the five-part series on working with Kotlin to create Configuration Scripts for TeamCity.

  1. An Introduction to Configuration Scripts
  2. Working with Configuration Scripts
  3. Creating Configuration Scripts dynamically
  4. Extending the TeamCity DSL
  5. Testing Configuration Scripts

We saw in the previous post how we could leverage some of Kotlin's language features to reuse code. In this part, we're going to take advantage of the fact that we are dealing with a full programming language and not just a limited DSL, and create a dynamic build configuration.

Build Configurations based on parameters

The scenario is the following: we have an HTTP server that we need to test on different platforms with a range of concurrent connections on each one. This potentially generates a lot of different build configurations that we'd need to create and maintain.

Instead of doing this manually, what we can do is write some code to have our Kotlin DSL Configuration Script generate all the different build configurations for us.

Let’s say that we have a list of Operating Systems and a range of concurrent connections for each one. The first thing to do is to create a data class that represents this information

data class BuildParameters(val name: String, val operatingSystem: String, val connections: Int)

which in essence is like a regular class but provides a series of benefits, such as a nicer string representation, equality and copy operations, amongst other things.
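For instance, a small illustrative snippet using the class defined above:

val windows = BuildParameters("Windows Build", "windows", 30)
val windowsHighLoad = windows.copy(connections = 100)  // copy with a single property changed
println(windowsHighLoad)
// prints: BuildParameters(name=Windows Build, operatingSystem=windows, connections=100)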

Now let’s imagine we have the different platforms we want to test on represented as a list of BuildParameters

val operatingSystems = listOf(
            BuildParameters("Windows Build", "windows", 30),
            BuildParameters("MacOS Build", "osx", 20),
            BuildParameters("Linux Build", "ubuntu", 20)
    )

What we'd like to do is iterate over this list and create a new build type for each combination. In essence, if our standard Project is

object Project : Project({
    uuid = "9179636a-39a3-4c2c-aa7e-6e2ea7cfbc5b"
    extId = "Wasabi"
    parentId = "_Root"
    name = "Wasabi"
    vcsRoot(Wasabi_HomeGitWasabiGitRefsHeadsMaster)
    buildType(WasabiBuild)
})

what we want to do is create a new buildType for each entry in the list

object Project : Project({
    uuid = "9179636a-39a3-4c2c-aa7e-6e2ea7cfbc5b"
    extId = "Wasabi"
    parentId = "_Root"
    name = "Wasabi"

    vcsRoot(Wasabi_HomeGitWasabiGitRefsHeadsMaster)

    val operatingSystems = listOf(
            BuildParameters("Windows Build", "windows", 30),
            BuildParameters("MacOS Build", "osx", 20),
            BuildParameters("Linux Build", "ubuntu", 20)
    )

    operatingSystems.forEach {
        buildType(OperatingSystemBuildType(it))
    }
})

Creating a base Build Type

Any parameter passed into buildType needs to be of the type BuildType. This means we’d need to inherit from this class and at the same time provide some parameters to it (BuildParameters)

class OperatingSystemBuildType(buildParameters: BuildParameters) : BuildType() {
    init {
        val paramToId = buildParameters.name.toExtId()
        uuid = "53f3d94a-20c6-43a2-b056-baa19e55fd40-$paramToId"
        extId = "BuildType$paramToId"
        name = "Build for ${buildParameters.name}"

        vcs {
            root(Wasabi.vcsRoots.Wasabi_HomeGitWasabiGitRefsHeadsMaster)

        }
        steps {

            gradle {
                tasks = "clean build"
                useGradleWrapper = true
                gradleWrapperPath = ""
                gradleParams = "-Dconnections=${buildParameters.connections}"
            }
        }
        triggers {
            vcs {
            }
        }

        requirements {
            contains("teamcity.agent.jvm.os.name", buildParameters.operatingSystem)
        }
    }

}

The code is actually a pretty standard Configuration Script that defines a Gradle build step, has a VCS definition and defines a VCS trigger. What we’ve done is just enhance it somewhat.

To begin with, we're using the BuildParameters name property as a suffix for the uuid, extId and name. This guarantees a unique ID per build configuration, as well as providing us with a name to identify it easily. To make sure the values are properly formatted, we use the extension function to String defined in the TeamCity DSL, named toExtId(), which takes care of removing any forbidden characters.

In the steps definition, we pass in certain parameters to Gradle, which in our case is the number of concurrent connections we want to test with. Obviously, this is just a sample, and the actual data being passed in can be anything and used anywhere in the script.

Finally, we also use the BuildParameters operatingSystem property to define Agent requirements.

Summary

The above is just a sample of what can be done when creating dynamic build scripts. In this case we created multiple build configurations, but we could just as well have created multiple steps, certain VCS triggers, or whatever else could come in useful. The important thing to understand is that, at the end of the day, the Kotlin Configuration Script isn't merely a DSL but a fully-fledged programming language.

In the next part of this series, we’ll see how we can extend the Kotlin Configuration Scripts (hint – it’s going to involve extension functions).

 

 

Kotlin Configuration Scripts: Extending the TeamCity DSL


This is part four of the five-part series on working with Kotlin to create Configuration Scripts for TeamCity.

  1. An Introduction to Configuration Scripts
  2. Working with Configuration Scripts
  3. Creating Configuration Scripts dynamically
  4. Extending the TeamCity DSL
  5. Testing Configuration Scripts

TeamCity allows us to create build configurations that are dependent on each other, with the dependency being either snapshots or artifacts. The configuration for defining dependencies is done at the build configuration level. For instance, assuming that we have a build type Publish that has a snapshot dependency on Prepare Artifacts, we would define this in the build type Publish in the following way

class Publish : BuildType({
    uuid = "53f3d94a-20c6-43a2-b056-baa19e55fd40"
    extId = "Test"
    name = "Build"
    vcs {
        . . .
    }
    steps {
        . . .
    }
    dependencies {
        dependency(Prepare_Artifacts) {
            snapshot {
                onDependencyFailure = FailureAction.FAIL_TO_START
            }
        }
    }
})

and in turn if Prepare Artifacts had dependencies on previous build configurations, we’d define these in the dependencies segment of its build configuration.

TeamCity then allows us to visually see this using the Build Chains tab in the user interface

[Screenshot: the Build Chains tab]

Defining the pipeline in code

A pipeline is a sequence of phases; a phase is a set of buildTypes, and each buildType in a phase depends on all buildTypes from the previous phase. This can handle a simple but common kind of build chain, where one build produces an artifact, several builds test it in parallel, and the final build deploys the result if all its dependencies are successful.

It would often be beneficial to be able to define this build chain (or build pipeline) in code, so that we could describe what’s going on from a single location. In essence, it would be nice to be able to define the above using the Kotlin DSL as so

object Project : Project ({
   . . .
   pipeline {
        phase("Compile all clients") {
            + HttpClient
            + FluentHc
            + HttpClientCache
            + HttpMime
        }
        phase("Prepare artifacts") {
            + PrepareArtifacts
        }
        phase("Deploy and publish to CDN") {
            + Publish
        }
   }
})

Defining this at the Project.kt level in our DSL would give us a good overview of what's taking place.

The issue, though, is that currently the TeamCity DSL does not provide this functionality. But that's where Kotlin's extensibility proves quite valuable, as we'll see.

Creating our own Pipeline definition

Kotlin allows us to create extension functions and properties, which are a means to extend a specific type with new functionality without having to inherit from it. When we pass extension functions as arguments to other functions (i.e. higher-order functions), we get what we call in Kotlin lambdas with receivers, something we already saw when generalising feature wrappers in the second part of this series. We apply the same concept here to create our pipeline DSL:

class Pipeline {
    val phases = arrayListOf<Phase>()

    fun phase(description: String = "", init: Phase.() -> Unit = {}) {
        val newPhase = Phase()
        newPhase.init()
        phases.lastOrNull()?.let { prevPhase ->
            newPhase.buildTypes.forEach {
                it.dependencies {
                    for (dependency in prevPhase.buildTypes) {
                        snapshot(dependency)
                    }
                }
            }
        }
        phases.add(newPhase)
    }
}

class Phase {
    val buildTypes = hashSetOf<BuildType>()

    operator fun BuildType.unaryPlus() {
        buildTypes.add(this)
    }
}

fun Project.pipeline(init: Pipeline.() -> Unit = {}) {
    val pipeline = Pipeline()
    pipeline.init()
    //register all builds in pipeline
    pipeline.phases.forEach { phase ->
        phase.buildTypes.forEach { bt ->
            this.buildType(bt)
        }
    }
}

What the code above is doing is define a series of new constructs, namely pipeline and phase.

The code then takes the contents of what’s passed into each of these and defines, under the covers, the dependencies. In essence, it’s doing the same thing we would do in each of the different build configurations, but from a global perspective defined at the Project level.

In order to pass in the configuration to the pipeline as we saw earlier, we merely reference the specific build type (HttpClient, Publish, etc.), assigning it to a variable

val HttpClient = BuildType(....)

Flexibility of defining our own pipeline constructs

It's important to understand that this is just one of many ways in which we can define pipelines. We've used the terms pipeline and phase; we could just as well have used the term stage to refer to each phase, or buildchain to refer to the pipeline itself (and thus align it with the UI). Beyond how we've named the constructs, and more importantly, there is the question of how the definition is actually constructed. In our case, we were interested in having it present in the Project itself, but we could just as well define a different syntax to be used at the build type level. The ability to easily extend the TeamCity DSL with our own constructs gives us this flexibility.


TeamCity as Debian Package Repository


Recently we’ve been experimenting with using TeamCity as a Debian repository, and we’d like to share some tips and tricks we’ve come up with in the process.

We used the TeamCity tcDebRepository plugin to build *.deb packages and serve package updates to Debian/Ubuntu servers. The plugin, the special prize winner of the 2016 TeamCity plugin contest, works like a charm, but we’ve encountered a few difficulties with the software used to build and package .deb files, hence this blog post.

It’s important to note that tcDebRepository is not capable of serving packages to recent Ubuntu versions due to the lack of package signing. This is planned to be resolved in version 1.1 of the plugin (see the plugin compatibility details).

We do not intend to cover the basics of TeamCity and Debian GNU/Linux infrastructure. To get acquainted with TeamCity, this video might be helpful. Building Debian packages is concisely described in the Debian New Maintainers’ Guide.

Everything below also applies to any other Debian-based GNU/Linux distribution, e.g. Astra Linux.

Prerequisites and Project Setup

We arbitrarily chose 4 packages to experiment with:

Our setup uses:

  1. The free Professional version of TeamCity 10 with 3 agents running Debian 8.0 (Jessie).
  2. The tcDebRepository plugin installed as usual.
  3. Besides the TeamCity agent software, each of the agents required installing:
    • the build-essential package as a common prerequisite
    • the dependencies (from the build-depends and build-depends-indep categories) required for the individual packages.

Next we created a project in TeamCity with four build configurations (one per package):
teamcity-build-configs

Configuring VCS Roots

When configuring VCS roots in TeamCity, the types of Debian GNU/Linux packages had to be taken into account.

Debian packages can be native or non-native (details).
The code of native packages (e.g. debhelper, dpkg), developed within the Debian project, contains all the meta-information required for building (the debian/ directory in the source code tree).
The source code of non-native packages (e.g. bash) is developed independently of Debian, so an additional tree with the Debian meta-information and patches (the debian/ directory) has to be maintained.

Therefore, for native packages we configured one VCS root (pointing to the Debian sources), whereas non-native packages needed two roots.

Next we configured checkout rules. The difference from the usual workflow is that in our case the artifacts (Debian packages) are built outside the source code tree, one directory higher. This means that the checkout rules in TeamCity should be configured so that the source code of, say, the dpkg package is checked out not into the current working directory, but into a subdirectory with the same name as the package, i.e. pkgname/. This is done by adding the following line to the checkout rules:
+:.=>pkgname
Finally, a VCS trigger was added to every configuration.

Artifact paths

In terms of TeamCity, the binary packages we are about to build are artifacts, specified using artifact paths in the General Settings of every build configuration:

03-teamcity-debian-general-artifact-paths

For native packages, the pkgname.orig.tar.{gz,bz2,xz,lzma} and pkgname.debian.tar.{gz,bz2,xz,lzma} files will not be created.

Build Step 1: Integration with Bazaar

Building a package with Bazaar-hosted source code (bash in our case) requires integration with the Bazaar version control system, which is not supported by TeamCity out of the box. There is a third-party TeamCity plugin to support Bazaar, but it has at least two problems:

  1. agent-side checkout is not supported
  2. the plugin is based on bzr xmlls and bzr xmllog, but these commands are provided by external modules which are not necessarily available in the default Bazaar installation

The TeamCity server we used runs on Windows, so we decided against installing Bazaar on the server side. Instead, to build the bash package, we added a build step using the Command Line runner with a shell script for Bazaar integration:

#!/bin/bash

export LANG=C
export LC_ALL=C

set -e

rm -rf bash/debian
bzr branch http://bazaar.launchpad.net/~doko/+junk/pkg-bash-debian bash/debian
major_minor=$(head -n1 bash/debian/changelog | awk '{print $2}' | tr -d '[()]' | cut -d- -f1)
echo "Package version from debian/changelog: ${major_minor}"
tar_archive=bash_${major_minor}.orig.tar
rm -f ${tar_archive} ${tar_archive}.bz2
# +:.=>bash checkout rule should be set for the main VCS root in TeamCity
tar cf ${tar_archive} bash
tar --delete -f ${tar_archive} bash/debian
# Required by dpkg-buildpackage
bzip2 -9 ${tar_archive}

This approach does not let us see changes in the second source tree or trigger builds automatically on such changes, but it is OK as a first try.

Build Step 2: Initial Configuration

We used the Command Line runner to invoke dpkg-buildpackage. The -uc and -us flags in the image below indicate that digital signatures are currently out of scope for our packages. If you wish to sign the packages, you’ll have to install the corresponding GnuPG key pair on each of the agents.

Note that the “Working directory” is not set to the current working directory, but to the subdirectory with the same name as the package (bash in this case) where the source code tree will be checked out and dpkg-buildpackage will be executed. If the VCS Root is configured, the “Working directory” field can be filled out using the TeamCity directory picker, without entering the directory name manually:

07-teamcity-debian-dpkg-buildpackage-step

After the first build was run, we encountered some problems.

Build problems resolution

Code quality

For some packages (bash in our case), the two code trees were not synchronized, i.e. they corresponded to slightly different minor versions, which forced us to use tags rather than branches from the main code tree. Luckily, TeamCity allows building on a tag, and the “Enable to use tags in the branch specification” option can be set in this case:

teamcity-vcs-tags-in-branch-specs-small

Dependencies

Here is another problem we faced: on the first build, dpkg-buildpackage exited with code 3 even though all the required dependencies were installed (see the prerequisites section above).

There are a few related nuances:

  • The most common case for continuous integration is building software from Debian Unstable or Debian Experimental. So, to meet all the dependencies required for the build, the TeamCity agents themselves need to run under Debian Unstable or Debian Experimental (otherwise dpkg-buildpackage will report that your stable dependency versions are too old). To resolve the error, we added the -d parameter, which skips the build dependency check:
    dpkg-buildpackage -uc -us -d
  • A special case of outdated dependencies is a configure script created by a version of GNU Autotools newer than the one currently installed on your system. dpkg-buildpackage is unable to detect this, so if the build log shows messages about missing m4 macros, you need to re-generate the configure script using the version of GNU Autotools currently installed on the agent. This was done by adding the following command as a build step executed before dpkg-buildpackage:
    autoreconf -i

Broken unit tests

When working with Debian Unstable or Debian Experimental, unit tests may fail (as happened in our case). We chose to ignore the failing tests and still build the package by running dpkg-buildpackage in a modified environment:
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -uc -us

Other build options are described here.

Final Stage

After tuning all of the build configurations, our builds finally yielded the expected artifacts (the dpkg package is used as an example):
11-teamcity-debian-artifacts

If you do not want each incremental package version to increase the number of artifacts of a single build (giving you dpkg_1.18.16_i386.deb, dpkg_1.18.17_i386.deb, etc.), the working directory should be selectively cleaned up before starting a new build. This was done by adding the TeamCity Swabra build feature to our build configurations.

Tuning up Debian repository

The tcDebRepository plugin adds the “Debian Repositories” tab to the project settings screen, which allows you to create a Debian repository. Once the repository is created, artifact filters need to be configured; at the moment, each filter is added manually. Filters can be edited, deleted, or copied.
teamcity-adding-an-artifact-filter

Existing artifacts will not be indexed, so after configuring the Debian repository, at least one build must be run in every build configuration. Then we can view the available indexed packages:
16-teamcity-debian-debrepo-page-2

After the repository is added to /etc/apt/sources.list, all these packages become visible on the client side. Note that the packages are not digitally signed, which prevents them from being used on recent versions of Ubuntu. Package signing is planned for tcDebRepository version 1.1.

17-teamcity-debian-aptitude-1

(!) If you are compiling for several architectures (i386, x32, amd64, arm*), it is worth having several build configurations for the same package with different agent requirements, or, in addition to the VCS trigger, using the Schedule trigger with the “Trigger build on all enabled and compatible agents” option enabled.

20-teamcity-debian-change-log

Happy building!

* We’d like to thank netwolfuk, the author of the tcDebRepository plugin, for contributing to this publication.
This article, based on the original post in Russian, was translated by Julia Alexandrova.


Welcome EAP1 for TeamCity 2017.1 (aka 10.1)


Greetings, everyone!

Today we are unveiling the Early Access Preview (EAP) for TeamCity 2017.1, formerly known as TeamCity 10.1! Following the JetBrains versioning scheme, our releases will now be identified by <year of the release>.<number of the feature release within the year>.<bugfix update number>.

Build 45965 comes with over 30 new features, and though most of them are minor, they provide a smoother experience with TeamCity.

The TeamCity web UI has been given an overall facelift; in particular, the build chains page has been reworked: chains are now displayed in a more compact way, with handy new options.

It’s also worth noting that the server startup and page load times have been reduced.

Besides the new features, this version brings over 100 bugfixes – see the details in our Release Notes.

Don’t hesitate to get the new EAP build and share your feedback on our forum or tracker.

As usual, this new feature release changes the TeamCity data format, and given that version 2017.1 is still in development, we recommend you install it on a trial server.

Happy building!
The TeamCity Team

Kotlin Configuration Scripts: Testing Configuration Scripts


This is part five of the five-part series on working with Kotlin to create Configuration Scripts for TeamCity.

  1. An Introduction to Configuration Scripts
  2. Working with Configuration Scripts
  3. Creating Configuration Scripts dynamically
  4. Extending the TeamCity DSL
  5. Testing Configuration Scripts

In this last part of the series, we’re going to cover one aspect that using Kotlin Configuration Scripts enables, which is testing.

Given that the script is written in a programming language, we can simply add a dependency on a testing framework of our choice, set a few parameters, and start writing tests for different aspects of our builds.

In our case, we’re going to use JUnit. For this, we add the JUnit dependency to the pom.xml file

<dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
</dependency>

We also need to define the test directory

<sourceDirectory>Wasabi</sourceDirectory>
<testSourceDirectory>tests</testSourceDirectory>

In our case, this corresponds to the following directory layout

Directory Structure

Once we have this in place, we can write unit tests as in any other Kotlin or Java project, accessing the different components of our project, build types, etc. In our case, we’re going to write a simple test that checks that all build types use a clean checkout:

@Test
fun buildsHaveCleanCheckOut() {
    val project = Project
    project.buildTypes.forEach { bt ->
        assertTrue(bt.vcs.cleanCheckout ?: false, "BuildType '${bt.extId}' doesn't use clean checkout")
    }
}


That additional layer of certainty

When we make changes to the Kotlin Configuration Script and check it in to source control, TeamCity synchronises the changes and reports any errors it encounters, which are usually related to compilation. The ability to add tests gives us an extra layer of checks: beyond making sure our build script contains no scripting errors, we can validate things such as the correct VCS checkout settings, as we’ve seen above, or whether the appropriate number of build steps is defined, as sketched below. The possibilities are essentially endless, given that, once again, we’re just using a regular programming language.
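As an illustration of such a check, here is a minimal sketch of a test verifying that every build type declares at least one build step. It is meant to sit next to the clean-checkout test above, in the same test class; the test name is made up, and it assumes the steps container exposes its items collection, as recent versions of the TeamCity DSL do.

@Test
fun buildsHaveAtLeastOneStep() {
    val project = Project
    project.buildTypes.forEach { bt ->
        // every build configuration should declare at least one build step
        assertTrue(bt.steps.items.isNotEmpty(), "BuildType '${bt.extId}' defines no build steps")
    }
}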

TeamCity to Access Private GitHub Repositories Securely


Introduction

One of the great features of TeamCity is its integration with GitHub. For the integration to work, authentication from TeamCity to GitHub has to be configured. We consider TeamCity secure enough; however, stolen authentication data is a threat we all live with, and its consequences should be clearly understood.

In this blog post we’ll go over the existing ways of configuring authentication in TeamCity VCS roots pointing to private GitHub repositories, discussing the advantages and disadvantages of each of them. We hope that TeamCity Server administrators and Project administrators will find this information useful when deciding on the approach to authentication from TeamCity to GitHub.

Username and Password Authentication

It is often the case that access to a repository from TeamCity is configured with a user’s GitHub username and password. All server administrators, and administrators of the projects where the VCS root is configured, have access to this information.

If this data is stolen, it gives the intruder full access to your GitHub account and the ability to maliciously modify your repositories, your GitHub profile, and all your settings.

We do not recommend this approach, as it is the least secure one.

Username and Generated OAuth Token

Not long ago TeamCity started to support OAuth for GitHub. In this case you use the username and a generated OAuth token.

If someone gains access to the token, all your repositories in all organizations where you have read rights will be at the intruder’s disposal, as TeamCity uses the ‘repo’ token scope.
However, this authentication option gives access to repositories only, not to your GitHub profile and settings, and the token is easy to revoke. In addition, repositories cannot be deleted, and although a force-push may still be performed, GitHub’s protected branches feature can help in this case. All in all, this approach can be considered an improvement over the previous one.

SSH Key

As an administrator, you can create an SSH key for your TeamCity server, with the public part of the SSH key uploaded to GitHub and the private part uploaded to the TeamCity server.

The advantages of this approach are the same as those of the previous one.
As to the risks, it can be highly dangerous if you reuse this key across different servers and applications. This can be mitigated by adding a dedicated SSH key to the GitHub profile and TeamCity, used exclusively for this integration.

‘Deploy Key’ GitHub Feature

For every GitHub repository TeamCity has access to, it is possible to generate an SSH key with the private part stored on the TeamCity server and the public part added to the deploy keys of your repository via the repository settings page. GitHub administrator rights for the repository are required. The key can have either read-only or read-write permissions for the repository.

This seems to be the most secure approach, because an individual key can be added to each repository, which makes access revocation extremely easy if a key is compromised.

Creating a key for every repository might be a nuisance for Windows users; this seems to be the main disadvantage (it does not affect Linux or macOS users, though).

Conclusion

Of the approaches outlined above, OAuth token authentication and deploy keys are considered secure enough by most people, with deploy keys being more secure and therefore recommended by us. In the end, the choice of approach is up to you.

Run TeamCity CI builds in Google Cloud


teamcity-google-cloud

We are happy to announce that TeamCity now provides tight integration with Google Cloud services. The Google Cloud Agents plugin uses Google Compute Engine to start cloud instances on demand, scaling the pool of cloud build agents, and also supports cost-efficient preemptible virtual machines. The Google Artifact Storage plugin provides the ability to keep external build artifacts in Google Cloud Storage.

Installation

To enable these TeamCity integrations with Google Cloud, download the Google Cloud Agents and Google Artifact Storage plugins from the plugin gallery and install them as additional TeamCity plugins. Remember to restart the TeamCity server.

Google Cloud Agents Configuration

Prepare Google Cloud Image

Now you need to create the cloud images to start new cloud build agents from. To do that, create a new cloud instance from one of the available public boot disks and uncheck “Delete boot disk when instance is deleted” in the “Management, disk, networking, SSH keys” section. Then, after the cloud instance starts, connect to it via SSH or RDP (depending on the selected image) and install a TeamCity build agent. Point the build agent to the TeamCity server so that it downloads the plugins, and set the agent to start automatically.

Create Google Cloud Instance

Then install the required build tools, remove temporary files, delete the cloud instance, and create a new custom image from the instance’s boot disk.

Create Custom Google Image

Create Service Account and Key

Now we need to create a new service account with the Compute Engine Instance Admin role and a new JSON private key for that account.

google-cloud-credentials

Create Cloud Agent Profile

To create a Google agent cloud profile, navigate to the project where you want to set up the profile and select the Cloud Profiles link. Then click the “Create new profile” button and select “Google Compute” as the cloud type. Specify the profile name and paste the JSON private key into the corresponding field.

To add a new image to the profile, press the “Add image” button, select the recently created cloud image, and fill in the other properties. Then save the image and the profile settings. That’s all.

teamcity-google-profile

Google Artifact Storage Configuration

Create Service Account and Key

To access Google Cloud Storage, the plugin requires a service account with the Compute Engine Instance Admin role and a new JSON private key for that account.

teamcity-google-profile

Configure Artifact Storage

Navigate to the Artifacts Storage tab in the project settings and click the Add new storage button. Select the Google Storage type, fill in the name, JSON private key, and bucket name, and press Save. After that, activate this artifact storage in the TeamCity project by clicking the Make Active link.

teamcity-google-profile

That’s it! New builds in this project will now store their artifacts in Google Cloud Storage.

Feedback

You are welcome to try these TeamCity plugins; your feedback will be really appreciated. Please share your thoughts in the comments to this blog post or in our issue tracker.

Happy building in the Google Cloud!

Manage TeamCity Users with Invitations Plugin


Managing the users of a TeamCity server is one of the tasks currently handled by the server administrator. Today, when a new user joins the company, they need to create an account on the TeamCity server (provided the default setting allowing users to register on first login is enabled). Next, the registered user needs to ask the TeamCity server administrator or a project administrator to grant them the required permissions.

Invitations Plugin Overview

The open-source Invitations plugin from JetBrains enables TeamCity administrators to invite users to create or join TeamCity projects. Typically this task will be handed over to project administrators: they create a project and can send out invitations to the project users with the permission scope already defined, thus freeing the system administrator from granular permission assignment and saving new users the time and effort currently required to get set up on the server.

It is possible to restrict the invited users’ access to a specific project (or several projects); in this case, the per-project authorization mode is required on the server. This way, any invitation limits the user’s access to the project the invitation was sent from.

The plugin is compatible with TeamCity 2017.1 and later.

Installing Plugin

System administrators can download the Invitations plugin .zip from here and install it on the TeamCity server as usual.

Using Plugin

Using the plugin is quite straightforward: after the server restart, the plugin adds the Invitations page to the settings of all projects on the server.

To invite users:

  1. Go to the Project Settings | Invitations page and select the invitation type:
    createInvite

    Invitation Types

    Two types of invitation are supported.
    The invitation to create a project will grant the invitee the Project Administrator role. Note that if per-project permissions are not enabled on the server, any project administrator will get access to TeamCity server administration.

    The invitation to join a project allows the project administrator to define a role for the invitee as well as the group they will be added to. The available roles depend on the roles configuration on the server.
    As for groups, the invited users can be added to any group that has ‘View project’ as the minimal permission in the current project.

  2. Next create your invitation:
    • modify the automatically filled in fields as required,
    • specify whether the invitation can be used multiple times (default) or once, after which it will be removed from the UI,
    createSubproject
    • save the invitation.
  3. The invitation has been created. Copy the invitation link and send it to the user you want to invite.
  4. The user follows the link and registers or logs in to TeamCity, after which they will be able to create or join a project on the server.
    Invite

That’s it. We hope you will find this plugin useful. Your feedback is greatly appreciated!

Happy building!
TeamCity Team
