Integrated management and security across your hybrid cloud

Do you have a truly end-to-end view of your hybrid cloud environment? Most environments today are complex, with multi-tier applications that may span multiple datacenters and cloud hosting environments. In fact, the reality is that for most companies, complexity is the number one challenge in a hybrid cloud environment, according to the 2017 State of Hybrid Cloud research study. Not coincidentally, respondents identified unified management across multiple operating systems and public clouds as a top priority.
To make sure your critical applications and systems perform at peak efficiency, you need a big-picture view that spans the different application components and infrastructure services, and includes the ability to act on insights and resolve issues quickly. The advantage of doing this deep level of analytics in the cloud is that you get virtually unlimited scale and flexibility for your log data, without having to stand up heavy infrastructure of your own. With management-as-a-service in Azure, you let us do the hard part of correlating, analyzing, and crowd-sourcing information. You can then use the insights you gain to start anticipating and resolving issues before problems result in business impact.
At Microsoft, our core management approach is to bring data from your applications, workloads, and infrastructure together in one place, then provide you the ability to drill down deep and do rich analytics. With Azure management and security services, you can pull data from multiple sources to find out if there is an infrastructure issue, if the network is slow, or if the latest deployment of your application is causing problems. Since we include built-in collection of log and performance data from servers all the way to application code, we can help you bring IT and Developers together to troubleshoot issues quickly.
One of the key technologies that can help you turn data into actionable insights about your hybrid environment is Service Map, part of Azure Insight & Analytics. Today we announced the general availability of Service Map, a tool that automatically discovers and builds a map of server and process dependencies for you. It pulls in data from other solutions in the service, such as Log Analytics, Change Tracking, Update Management, and Security, all in context. Rather than looking at individual types of data, you can now see all data related to the systems you care about most, as well as graphically visualize their dependencies.
Learn more about how you can use integrated management and security services to reduce complexity in your hybrid cloud environment.

Try today with a free Operations Management Suite account.

New Get Data Capabilities in the GA Release of SSDT Tabular 17.0 (April 2017)

With the General Availability (GA) release of SSDT 17.0, the modern Get Data experience in Analysis Services Tabular projects comes with several exciting improvements, including DirectQuery support (see the blog article “Introducing DirectQuery Support for Tabular 1400”), additional data sources (particularly file-based), and support for data access options that control how the mashup engine handles privacy levels, redirects, and null values. Moreover, the GA release coincides with the CTP 2.0 release of SQL Server 2017, so the modern Get Data experience benefits from significant performance improvements when importing data. Thanks to the tireless effort of the Mashup engine team, data import performance over structured data sources is now on par with legacy provider data sources. Internal testing shows that importing data from a SQL Server database through the Mashup engine is in fact faster than importing the same data by using SQL Server Native Client directly!
Last month, the blog article “What makes a Data Source a Data Source?” previewed context expressions for structured data sources—and the file-based data sources that SSDT Tabular 17.0 GA adds to the portfolio of available data sources make use of context expressions to define a generic file-based source as an Access Database, an Excel workbook, or as a CSV, XML, or JSON file. The following screenshot shows a structured data source with a context expression that SSDT Tabular created for importing an XML file.

Note that file-based data sources are still a work in progress. Specifically, the Navigator window that Power BI Desktop shows for importing multiple tables from a source is not yet enabled so you end up immediately in the Query Editor in SSDT. This is not ideal because it makes it hard to import multiple tables. A forthcoming SSDT release is going to address this issue. Also, when trying to import from an Access database, note that SSDT Tabular in Integrated Workspace mode would require both the 32-bit and 64-bit ACE provider, but both cannot be installed on the same computer. This issue requires you to use a remote workspace server running SQL Server 2017 CTP 2.0, so that you can install the 32-bit driver on the SSDT workstation and the 64-bit driver on the server running Analysis Services CTP 2.0.

Keep in mind that SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This schema is incompatible with CTPs of SQL vNext Analysis Services. You cannot open Tabular 1400 models with previous schemas and you cannot deploy Tabular 1400 models with a CTP 2.0 database schema to a server running a previous CTP version.

Another great data source that you can find for the first time in SSDT Tabular is Azure Blob Storage, which will be particularly interesting when Azure Analysis Services provides support for the 1400 compatibility level. When connecting to Azure Blob Storage, make sure you provide the account name or URL without any containers in the data source definition. If you appended a container name to the URL, SSDT Tabular would fail to generate the full set of data source settings. Instead, select the desired container in the Navigator window, as illustrated in the following screenshot.
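For illustration, a structured data source definition for Azure Blob Storage in the model might look roughly like the following sketch (the account name is a placeholder, and the exact property shape may differ between CTP schema versions, so verify against your own Model.bim):

```json
{
  "type": "structured",
  "name": "AzureBlobs/myaccount",
  "connectionDetails": {
    "protocol": "azure-blobs",
    "address": {
      "account": "myaccount",
      "domain": "blob.core.windows.net"
    }
  },
  "credential": {
    "AuthenticationKind": "Key",
    "kind": "AzureBlobs",
    "path": "myaccount"
  }
}
```

Note that there is no container anywhere in the address; the container is picked afterwards in the Navigator window.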

As mentioned above, SSDT Tabular 17.0 GA uses the Analysis Services CTP 2.0 database schema for Tabular 1400 models. This database schema is more complete than any previous schema version. Specifically, you can find additional Data Access Options in the Properties window when selecting the Model.bim file in Solution Explorer (see the following screenshot). These data access options correspond to those options in Power BI Desktop that are applicable to Tabular 1400 models hosted on an Analysis Services server, including:

Enable Fast Combine (default is false): When set to true, the mashup engine ignores data source privacy levels when combining data.
Enable Legacy Redirects (default is false): When set to true, the mashup engine follows HTTP redirects that are potentially insecure (for example, a redirect from an HTTPS to an HTTP URI).
Return Error Values as Null (default is false): When set to true, cell-level errors are returned as null. When false, an exception is raised if a cell contains an error.

With the Enable Fast Combine setting in particular, you can now begin to refer to multiple data sources in a single source query.
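In the Model.bim JSON, these options surface on the model object roughly as follows (a sketch based on the TMSL property naming; verify the exact casing against your schema version):

```json
{
  "model": {
    "dataAccessOptions": {
      "fastCombine": true,
      "legacyRedirects": true,
      "returnErrorValuesAsNull": true
    }
  }
}
```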
Yet another great feature that is now available to you in SSDT Tabular is the Add Column from Example capability introduced with the April 2017 Update of Power BI Desktop. For details, refer to the article “Add a column from an example in Power BI Desktop.” The steps are practically identical. Add Column from Example is a great illustration of how the close collaboration and teamwork between the AS engine, Mashup engine, Power BI Desktop, and SSDT Tabular teams is compounding the value delivered to our customers.
Looking ahead, apart from tying up loose ends, such as the Navigator dialog for file-based sources, there is still a sizeable list of data sources we are going to add in further SSDT releases. Named expressions discussed in this blog article a while ago also still need to find their way into SSDT Tabular, and there are other things such as support for the full set of impersonation options that Analysis Services provides for data sources that can use Windows authentication. Currently, only service account and explicit Windows credentials can be used. Forthcoming impersonation options include current user and unattended accounts.
In short, the work to enable the modern Get Data experience in SSDT Tabular is not yet finished. Even though SSDT Tabular 17.0 GA is fully supported in production environments, Tabular 1400 is still evolving. The database schema is considered complete with CTP 2.0, but minor changes might still be coming. So please go ahead and deploy SSDT Tabular 17.0 GA, use it to work with your Tabular 1200 models, and take Tabular 1400 for a thorough test drive. And as always, please send us your feedback and suggestions by emailing ProBIToolsFeedback or SSASPrev, or use any other available communication channel such as UserVoice or the MSDN forums. Influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers!

Introducing DirectQuery Support for Tabular 1400

With the production release of SSDT 17.0, Tabular projects now support DirectQuery mode at the 1400 compatibility level, so you can tap into large data sets that exceed the available memory on the server and meet data freshness requirements that would otherwise be difficult if not impossible to achieve in Import mode. As with Tabular 1200 models, DirectQuery 1400-supported data sources include SQL Server, Azure SQL Database, Azure SQL Data Warehouse, Oracle, and Teradata, as the following screenshot indicates, and you can only define a single data source per model. Available DAX functions are also limited, as documented in “DAX Formula Compatibility in DirectQuery Mode.” Another important restriction pertains to the M queries that you can create in DirectQuery mode.

Given that Analysis Services must transform all DAX and MDX client queries into source queries to send them to the source where the data resides, M transformations must be foldable. A foldable transformation is a transformation that the Mashup engine can translate (or fold) into the query dialect of the source, such as T-SQL for SQL Server or PL/SQL for Oracle. You can use the View Native Query option in the Query Builder dialog to verify that the transformation you create is foldable. If the option is available and can display a native query, the transformation meets the DirectQuery requirements (see the following screenshot).
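For example, a simple row filter over a SQL Server source folds into a WHERE clause in the generated T-SQL. In the following sketch, the server, database, and table names are purely illustrative:

```m
let
    Source = Sql.Database("localhost", "AdventureWorksDW"),
    Sales = Source{[Schema = "dbo", Item = "FactInternetSales"]}[Data],
    // Table.SelectRows over a simple predicate is foldable,
    // so this filter becomes a WHERE clause at the source
    Filtered = Table.SelectRows(Sales, each [SalesAmount] > 1000)
in
    Filtered
```

By contrast, a step that invokes logic the source cannot execute would break folding, and View Native Query would become unavailable.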

On the other hand, if the option is unavailable and a warning is displayed, you must remove the problematic step because it does not meet the DirectQuery requirements. If you attempt to create a table based on an unsupported M query, SSDT Tabular will display an error message asking you to redefine the query or switch the model into Import mode, as the following screenshot illustrates.

The DirectQuery experience in SSDT Tabular is similar to Power BI Desktop, but there are some noteworthy differences. For example, in Power BI Desktop, you can switch individual connections into DirectQuery mode whereas SSDT Tabular enables DirectQuery only on a per-model basis, as the following screenshot illustrates with the Power BI Desktop dialog in the background and SSDT Tabular Solution Explorer and Properties window in the front. Mixing Import and DirectQuery mode data sources is not supported in a Tabular model because, in DirectQuery mode, a model can only have a single data source. Also, Power BI Desktop supports Live mode against Analysis Services, which Tabular models do not support.
Another issue worth mentioning is that there currently is no data preview for tables defined in the model. The preview in Query Editor works just fine, but when you apply the changes by clicking Import, the resulting table in the model remains empty because models in DirectQuery mode do not contain any data; all queries are directed to the source. Usually, you can work around this issue by adding a sample partition, as the article “Add sample data to a DirectQuery model in Design Mode” describes, but sample partitions are not yet supported in 1400 mode. Support will be added in a future SSDT Tabular release.

Moreover, SSDT Tabular, running inside Visual Studio, requires 32-bit drivers, while the SSAS engine runs as a 64-bit process and requires the 64-bit versions. This is particularly an issue when connecting to Oracle. Make sure you install the drivers per the following requirements.

Data source | SSDT in Integrated Workspace mode (32-bit) | SSAS server (64-bit)
SQL Server, Azure SQL Database, Azure SQL Data Warehouse | Drivers preinstalled with the operating system | Drivers preinstalled with the operating system
Oracle | .NET provider for Oracle | OLEDB provider for Oracle (OraOLEDB.Oracle), .NET provider for Oracle (Oracle.DataAccess.Client)
Teradata | .NET provider for Teradata | .NET provider for Teradata (Teradata.Client.Provider)

And that’s it for a quick introduction of DirectQuery support for Tabular 1400. Please take it for a test drive and send us your feedback and suggestions by emailing ProBIToolsFeedback or SSASPrev, or use any other available communication channel such as UserVoice or the MSDN forums. You can influence the evolution of the Analysis Services connectivity stack to the benefit of all our customers.

Editing a .VMCX file

In Windows Server 2016 we moved from using .XML for our virtual machine configuration files to using a binary format – that we call .VMCX.
There are many benefits to this – but one of the downsides is that it is no longer possible to easily edit a virtual machine configuration file that is not registered with Hyper-V.  Fortunately – we provide all the APIs you need to do this without editing the file directly.

This code sample takes a virtual machine configuration file that is not registered with Hyper-V.  It then:

Loads the virtual machine into memory – without actually importing it into Hyper-V
Changes some settings on the virtual machine
Exports this changed virtual machine to a new .VMCX file

Using this method you can make any changes you need to a .VMCX file without actually having to import the virtual machine.  The key piece of information here is that when you perform a traditional import of a virtual machine, you use ImportSystemDefinition to create a planned virtual machine (an in-memory copy) which you then realize to complete the import operation.  But if you do not want to import the virtual machine – but just want to edit it – you can modify the planned virtual machine and pass it into ExportSystemDefinition to create a new configuration file.
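A minimal PowerShell sketch of the flow described above, using the Hyper-V WMI v2 namespace (the file paths and the particular setting being changed are illustrative, and job polling and error handling are omitted for brevity):

```powershell
# Get the virtual system management service (root\virtualization\v2).
$vmms = Get-WmiObject -Namespace "root\virtualization\v2" `
                      -Class "Msvm_VirtualSystemManagementService"

# Load the configuration file into memory as a planned (in-memory) VM,
# without importing it into Hyper-V.
$result = $vmms.ImportSystemDefinition("C:\ConfigFiles\MyVM.vmcx", $null, $true)
$plannedVM = [wmi]$result.ImportedSystem

# Change a setting on the planned VM - here, its display name.
$settings = $plannedVM.GetRelated("Msvm_VirtualSystemSettingData") |
            Select-Object -First 1
$settings.ElementName = "My Renamed VM"
$vmms.ModifySystemSettings($settings.GetText(1)) | Out-Null

# Write the modified planned VM out to a new .VMCX, then discard the
# planned VM so nothing is left registered.
$vmms.ExportSystemDefinition($plannedVM, "C:\ConfigFiles\Exported", $null) | Out-Null
$vmms.DestroySystem($plannedVM) | Out-Null
```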

Windows Server 2016 Essentials Dashboard

Windows Server 2016 Essentials is the lowest-cost edition of Windows Server, intended as a small company’s first server. Windows Server 2016 Essentials comes with a dashboard which simplifies the tasks that you perform to manage your network and server. Common tasks include setting up your server, managing user accounts, managing backups, integrating with cloud services, and more. In this episode we’ll provide an overview of the Windows Server 2016 Essentials edition and a demo of how to use it.
[3:24] DEMO: Windows Server 2016 Essentials Dashboard walkthrough

DockerCon 2017: Powering new Linux innovations with Hyper-V isolation and Windows Server

This post was authored by John Gossman, Azure Lead Architect and Linux Foundation Board Member.
With over 900,000 containerized applications in the Docker Hub, there has never been a better time to be a developer. However, a barrier remained: Linux images run on a Linux host and Windows images on a Windows host, requiring multiple infrastructures and more complex development tooling. Today at DockerCon 2017, Microsoft showcased how we will remove this barrier with Linux containers running natively on Windows Server through our Hyper-V isolation technology. This will enable developers building with Windows and IT administrators hosting Windows Server to run any container image, regardless of platform.
When we announced and launched Hyper-V Containers, it was because some customers desired additional, hardware-based isolation for multi-tenant workloads, and to support cases where customers may want a different kernel than what the container host is using, for example different versions. We are now extending this same Hyper-V isolation technology to deliver Linux containers on Windows Server. This will give the same isolation and management experience for Windows Server Containers and Linux containers on the same host, side by side.
Tens of thousands of developers depend on Docker Community Edition (CE) on their Windows 10 laptops each day as they build, ship and run Linux and Windows containers. Microsoft has a long history of working in the Docker community, collaborating to bring container technologies to Windows and Microsoft Azure. This project is being launched today, at DockerCon, so that we can continue that legacy of working with the community to deliver innovative solutions in open source.
More than three years ago, we helped contribute Hyper-V support to the Docker Machine and boot2docker projects, which served as the early foundation of Moby and LinuxKit. Over the last year, we’ve continued working hand-in-hand to bring Windows container support into Docker CE, first with Microsoft adding support for Windows Server Containers on Windows 10 using Hyper-V isolation and then Docker adding support to switch between Linux and Windows. We are now looking forward to continuing that collaboration in the open source LinuxKit and Docker projects to provide even better Windows and Linux container support. We are also committed to building support for this feature as part of the ongoing containerd project, in line with the goals of an industry-standard cross-platform container runtime.
“Beginning with the very first DockerCon in June 2014, Microsoft’s ongoing strong commitment to Docker and open source has been singular,” said Scott Johnston, COO, Docker, Inc. “Microsoft’s new Hyper-V Linux containers, announced today at DockerCon, and its collaboration with Docker’s LinuxKit and containerd together represent a unique, innovative solution for developers building heterogeneous, hybrid cloud applications.”
In the spirit of providing customers with a choice, we will also enable customers to choose the Linux distributions they want to use to host their Linux containers. Microsoft will be open sourcing the required integration code and we have been working with leading Linux vendors who will be providing container OS images. We are happy to share that Canonical, Intel, Red Hat and SUSE will also support this project.
“Canonical is proud of a longstanding relationship with Microsoft to bring Ubuntu and the best of the open source world to the Windows ecosystem. We have teamed together to deliver Ubuntu images to the Microsoft Azure cloud platform and Azure Container Service, and Ubuntu as the Bash experience in the Windows Subsystem for Linux on the Windows Desktop, and now in the form of a minimal, secure, Ubuntu container OS image.”
– Dustin Kirkland, Head of Product, Canonical
“We are excited to collaborate closely with Microsoft to optimize and include the Clear Linux OS for Intel Architecture as an option for customers to use within their new Linux containers running natively on Windows Server through Hyper-V isolation technology.”
– Arjan van de Ven, Sr. Principal Engineer, Intel Corporation
“Through both our upstream open source contributions and through Red Hat Enterprise Linux Atomic Host and Red Hat OpenShift, Red Hat is committed to bringing production-ready container solutions to enterprise customers. The cloud is hybrid and customers want to be able to adopt heterogeneous technologies. Through this aligned vision with Microsoft, we look forward to bringing Red Hat Enterprise Linux containers to Hyper-V users.”
– Jim Totton, Vice President and General Manager, Platforms Business Unit, Red Hat
“Microsoft is investing in Linux containers on Windows Server — and if security and containers are important to you — keep on reading. This collaboration is a natural step for SUSE, as we are investing in secure, rootless containers for our CaaS Platform solution. SUSE is excited to be a part of this announcement and will actively collaborate with Microsoft to enable our joint customers with SUSE-based Hyper-V isolated containers that run natively on Windows Server.”
– Dr. Gerald Pfeifer, VP of Products and Technology Programs, SUSE
We look forward to working with all of you on this project over the coming months.

Windows Server 2016 Adds Native Overlay Network Driver, enabling mixed Linux + Windows Docker Swarm Mode Clusters

Based on customer and partner feedback, we are happy to announce that the Windows networking team has released a native overlay network driver for Windows Server 2016. It enables admins to create a Docker Swarm cluster spanning multiple Windows Server and Linux container hosts, without worrying about configuring the underlying network fabric. Windows Server containers and those with Hyper-V isolation, powered by Docker, are available natively in Windows Server 2016 and enable developers and IT admins to work together in building and deploying both modern, cloud-native applications and lift-and-shift migrations of workloads from a virtual machine (VM) into a container. Previously, an admin was limited to scaling out these containers on a single Windows Docker host. With Docker Swarm and overlay networking, your containerized workloads can now communicate seamlessly across hosts and scale fluidly, on demand.
How did we do it? The Docker engines, running in Swarm mode, are able to scale-out services by launching multiple container instances across all nodes in a cluster. When one of the “master” Swarm mode nodes schedules a container instance to run on a particular host, the Docker engine on that host will call the Windows Host Networking Service (HNS) to create the container endpoint and attach it to the overlay networks referenced by that particular service. HNS will then program this policy into the Virtual Filtering Platform (VFP) Hyper-V switch extension where it is enforced by creating network overlays using VXLAN encapsulation.
The flexibility and agility enjoyed by applications already being managed by Docker Swarm is one thing, but what about the up-front work of getting those applications developed, tested, and deployed? Customers can re-use their Docker Compose file from their development environment to deploy and scale out a multi-service/tier application across the cluster using docker stack deploy command syntax. It’s easy to leverage the power of running both Linux and Windows services in a single application, by deploying individual services on the OS for which they are optimized. Simply use constraints and labels to specify the OS for a Docker Service, and Docker Swarm will take care of scheduling tasks for that service to be run only on the correct host OS. In addition, customers can use Docker Datacenter (via Docker Enterprise Edition Standard) to provide integrated container management and security from development to production.
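For example, a hypothetical two-service stack file can pin each service to the right operating system with placement constraints (the image names here are illustrative):

```yaml
version: "3.2"
services:
  web:
    image: mycompany/web:windows        # Windows Server container image
    deploy:
      placement:
        constraints:
          - node.platform.os == windows
  cache:
    image: redis:alpine                 # Linux container image
    deploy:
      placement:
        constraints:
          - node.platform.os == linux
```

Deploying it with docker stack deploy -c docker-compose.yml myapp lets Swarm schedule each service only onto hosts with the matching OS.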
Ready to get your hands on Docker Swarm and Docker Datacenter with Windows Server 2016? This feature has already been validated by beta customers who successfully deployed workloads using swarm mode and Docker Datacenter (via Docker Enterprise Edition Standard), and we are now excited to release it to all Windows Server customers through Windows Update KB4015217. This feature is also available in the Windows 10 Creators Update (with Docker Community Edition) so that developers can have a consistent experience developing apps on both Windows client and server.
Feature requests? Bugs? General feedback? We would love to hear from you! Please email us your feedback.

Hey Dude, Where’s My Winlogon.log?

Hi, this is Michael from the PMC PFE team. I recently helped a customer during the implementation of their Windows Server 2016 systems.
When checking the Event viewer, we spotted a well-known Event ID:
Log Name:      Application
Source:        SceCli
Date:          08.03.2017 17:49:21
Event ID:      1202
Task Category: None
Level:         Warning
Keywords:      Classic
User:          N/A
Security policies were propagated with warning. 0x534 : No mapping between account names and security IDs was done.
Advanced help for this problem is available on the Microsoft support site; query for “troubleshooting 1202 events”.

Error 0x534 occurs when a user account in one or more Group Policy objects (GPOs) could not be resolved to a SID.  This error is possibly caused by a mistyped or deleted user account referenced in either the User Rights or Restricted Groups branch of a GPO.  To resolve this event, contact an administrator in the domain to perform the following actions:

Identify accounts that could not be resolved to a SID:

From the command prompt, type: FIND /I "Cannot find" %SYSTEMROOT%\Security\Logs\winlogon.log
The string following “Cannot find” in the FIND output identifies the problem account names.
Example: Cannot find JohnDough.
In this case, the SID for username “JohnDough” could not be determined. This most likely occurs because the account was deleted, renamed, or is spelled differently (e.g. “JohnDoe”).

Use RSoP to identify the specific User Rights, Restricted Groups, and Source GPOs that contain the problem accounts:

Start -> Run -> RSoP.msc
Review the results for Computer Configuration\Windows Settings\Security Settings\Local Policies\User Rights Assignment and Computer Configuration\Windows Settings\Security Settings\Local Policies\Restricted Groups for any errors flagged with a red X.
For any User Right or Restricted Group marked with a red X, the corresponding GPO that contains the problem policy setting is listed under the column entitled “Source GPO”. Note the specific User Rights, Restricted Groups and containing Source GPOs that are generating errors.

Remove unresolved accounts from Group Policy

Start -> Run -> MMC.EXE
From the File menu select “Add/Remove Snap-in…”
From the “Add/Remove Snap-in” dialog box select “Add…”
In the “Add Standalone Snap-in” dialog box select “Group Policy” and click “Add”
In the “Select Group Policy Object” dialog box click the “Browse” button.
On the “Browse for a Group Policy Object” dialog box choose the “All” tab
For each source GPO identified in step 2, correct the specific User Rights or Restricted Groups that were flagged with a red X in step 2. These User Rights or Restricted Groups can be corrected by removing or correcting any references to the problem accounts that were identified in step 1.

So, okay: Event 1202, a SID-to-name mapping issue. Sure enough, there was some security principal, in either one of the settings or on the delegation tab of one of the policies, which couldn’t get resolved.
So let’s have a look at the Winlogon.log as called out in the event description. We went to %SYSTEMROOT%\Security\Logs and then: “Dude, where’s the Winlogon.log?!”
We quickly checked if the path may have changed in Server 2016 but couldn’t find the log in any other directory. Then we checked how this was enabled/disabled on earlier OS versions:
Value name: ExtensionDebugLevel
Data: 0x2 (REG_DWORD)
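For reference, this value conventionally lives on the security CSE’s GPExtensions key; in .reg form it would look roughly like this (the path uses the security CSE GUID mentioned later in this post; verify against your own build):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A}]
"ExtensionDebugLevel"=dword:00000002
```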
Okay, so it is not enabled by default on Windows 10 and Server 2016. We enabled the debug log, but even after a “gpupdate /force” and a reboot, NO winlogon.log file was present. What’s wrong? Well, actually nothing.
Here’s why…
We are talking about a CSE (client-side extension), which is represented by the GUID {827D319E-6EAC-11D2-A4EA-00C04F79F83A}. The change we made in the registry only takes effect the next time the CSE runs. But when there are NO changes to the GPO, why should the CSE re-run the respective policy? Exactly: there is no reason. This has always been the case; the code apparently hasn’t changed since 2008, but on earlier OS versions the log just happened to be enabled by default.
So, dude what do we have to do to get the Winlogon.log file back?
There are a few methods available and I’m listing them in the preferred order of applicability:

Set NoGPOListChanges = 0 for the CSE.

This triggers the CSE to re-evaluate and apply the policy even though there were no changes to the policy.


Make a change to a policy in the security section (that’s the piece the CSE is responsible for); this triggers a re-run, which then creates the Winlogon.log.
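For the first method, the list-changes value (named NoGPOListChanges in the registry) conventionally sits on the same security CSE key as ExtensionDebugLevel; a hypothetical .reg sketch (verify the path on your build, and revert it after troubleshooting):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\GPExtensions\{827D319E-6EAC-11D2-A4EA-00C04F79F83A}]
"NoGPOListChanges"=dword:00000000
```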

In any case, please make sure you undo the changes you’ve made for troubleshooting!
Winlogon.log is a debug log: enable it when needed to find and fix your issues, but disable it again afterwards. The same goes for NoGPOListChanges; please make sure you revert this change!
Hope you enjoyed the post and it saves you some time in finding the Winlogon.log file now.

Windows Server Essentials 2016 – Update on Remote Web Access

Remote Web Access, a feature in Windows Server Essentials 2016 (also used in the Windows Server Essentials role that is available in Windows Server 2016 Standard and Windows Server 2016 Datacenter), may cause users to experience trouble connecting remotely. The issue occurs after Office 365 integration with Azure AD is completed and a certain amount of time passes without a reboot, typically 36-48 hours.
The server will be responsive, but the https://servername/remote web site will indicate that it is not accessible and will redirect users to their Administrator with the following message:
“Cannot connect to Remote Web Access. Please contact the person who manages the server. “
There is a temporary workaround discussed on the Windows Server forum, and it is safe to use until the fix is available. The issue is caused by WCF connections not being cleaned up by the Essentials provider framework; they are no longer removed by the CLR in Windows Server 2016. To verify this, you can check the number of WCF connections by running the following PowerShell command in an elevated console:
netstat -a | select-string ':65532' | measure-object -line
There should be 100-300 connections typically.
The fix has been tested and checked in, and it will be available in the May update package for Windows Server 2016. When the KB article is published and the fix is available, I will post about it here.
Scott Johnson
Windows Server Essentials

Deploy Node.js applications to Azure App Service

This blog post shows how you can deploy a new Node.js application from Visual Studio Team Services or Microsoft Team Foundation Server to Azure App Service.

Download our Node.js Hello World sample app or Install Node.js Tools for Visual Studio and create a new Node.js application.
Upload your code to Team Services or your on-premises Team Foundation Server: either push your code to Git or check in your code to TFVC.


Open your team project in your web browser. (If you don’t see your team project listed on the home page, select Browse.)

On-premises TFS: http://{your_server}:8080/tfs/DefaultCollection/{your_team_project}
Visual Studio Team Services: https://{your_account}.visualstudio.com/{your_team_project}

Create a build definition (Build & Release tab > Builds)


Click Empty to start with an empty definition.
In the Repository tab of the build definition, make sure the selected repository is the one where you pushed (Git) or checked in (TFVC) your code.

Add the build steps
On the Tasks or Build tab, add these steps.

Package: npm install
Install your npm package dependencies.

Command: install
Set the working folder to the folder where your application code is committed in the repository.

For example, for the sample app this is nodejs-express-hello-world-app.

Build: Gulp
Pack your files into a .zip file.

Gulp File Path: gulpfile.js
Advanced, Arguments: --packageName=$(Build.BuildId).zip --packagePath=$(Build.ArtifactStagingDirectory)
Set the Gulp file path. For example, for the sample app this is nodejs-express-hello-world-app/gulpfile.js
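The gulp step above assumes a gulpfile that zips the application folder into the package name and path passed as arguments. A minimal sketch of such a gulpfile (the gulp-zip and minimist package names are assumptions for illustration, not necessarily what the sample app uses):

```javascript
// Minimal gulpfile sketch: zip the app into the package name/path
// passed on the command line by the build step.
const gulp = require('gulp');
const zip = require('gulp-zip');        // assumed packaging helper
const minimist = require('minimist');   // assumed argument parser

// Picks up --packageName and --packagePath from the task arguments.
const args = minimist(process.argv.slice(2));

gulp.task('default', () =>
  gulp.src(['**/*', '!node_modules/**'])
      .pipe(zip(args.packageName))
      .pipe(gulp.dest(args.packagePath))
);
```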

Build: Publish Build Artifacts
Publish the build outputs, such as the .zip file produced in this example.

Copy Root: $(Build.ArtifactStagingDirectory)
Contents: $(Build.BuildId).zip
Artifact name: drop
Artifact Type: Server

Enable continuous integration (CI)
On the Triggers tab, enable Continuous integration (CI). This tells the system to queue a build whenever someone on your team commits or checks in new code.
Save, queue, and test the build
Save and queue the build. Once the build is done, click the link to the completed build (for example, Build 1634), click Artifacts, and then click Explore to see the .zip file produced by the build. This is the web deploy package that your release definition will consume to deploy your app.

Open the Releases tab of the Build & Release hub, open the + drop-down in the list of release definitions, and choose Create release definition.
In the DEPLOYMENT TEMPLATES dialog, select the Azure App Service Deployment template and choose OK.

Select the build definition you created earlier as the source of artifact to be deployed.

Configure the Azure App Service Deployment task:

Deploy: Azure App Service Deploy

Azure Subscription: Select a connection from the list under Available Azure Service Connections. If no connections appear, choose Manage, select New Service Endpoint | Azure Resource Manager, and follow the prompts. Then return to your release definition, refresh the Azure Subscription list, and select the connection you just created.
Note: If your Azure subscription is defined in an Azure Government Cloud, ensure your deployment process meets the relevant compliance requirements. For more details, see Azure Government Cloud deployments.
App Service Name: the name of the App Service (the first part of the app’s default URL)
Deploy to Slot: make sure this is cleared (the default)
Virtual Application: leave blank
Package or Folder: $(System.DefaultWorkingDirectory)/**/*.zip (the default)

Take App Offline: If you run into locked .DLL problems when you test the release, as explained below, try selecting this check box.
Deployment script: The task gives you additional flexibility to run deployment script on the Azure App Service. For example, you can run a script to update dependencies (node packages) on the Azure App Service instead of packaging the dependencies in the build step.
Generate web.config: If you are deploying a Node.js application that was not created using Node.js Tools for Visual Studio, the task will help you generate the web.config required to run Node apps on Azure App Service.

This is required because Azure App Service uses iisnode to host Node.js applications in IIS on Windows.
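For reference, a generated web.config for iisnode typically looks roughly like the following sketch (the server.js entry point here is an assumption; the task adjusts it to your app’s actual startup file):

```xml
<configuration>
  <system.webServer>
    <!-- Let iisnode handle the Node.js entry point -->
    <handlers>
      <add name="iisnode" path="server.js" verb="*" modules="iisnode" />
    </handlers>
    <!-- Route all requests to the Node.js application -->
    <rewrite>
      <rules>
        <rule name="NodeApp">
          <match url="/*" />
          <action type="Rewrite" url="server.js" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```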

Type a name for the new release definition and, optionally, change the name of the environment from Default Environment to Dev. Also, set the deployment condition on the environment to “Automatically start after release creation”.
Save the new release definition. Create a new release and verify that the application has been deployed correctly.

Related Topics

Node.js Tools for Visual Studio helps you develop Node.js applications by providing Visual Studio project templates as well as IntelliSense, debugging, and profiling support.
Azure App Service uses iisnode to host Node.js applications in IIS on Windows.