Solving truncated Azure Container Instance logs

Whenever you run a piece of automation, logging is essential: it tells you whether the automation works as it should, or alerts you to problems you might otherwise not be aware of. This matters even more for jobs that run in the background, out of sight. In my case I have PowerShell scripts running in a scheduled Hangfire job that spins up containers on Azure Container Instances to query Azure and Azure DevOps. Without an application performance monitoring solution like Application Insights, the most basic source of information is the output stream. The problem is that logs in Azure Container Instances get truncated, an issue that has been open on GitHub since 2017.

Diagnose

Locally

Take a simple PowerShell script (main.ps1) that loops 10,000 times, writing output to stdout.

function Run
{
    [CmdletBinding()]
    param()

    for ($i = 0; $i -lt 10000; $i++) 
    {
        Write-Output "Keep on counting $i"
    }
}

Run

The script is run by the container on startup.
Dockerfile

FROM mcr.microsoft.com/azure-cli
# Download the powershell '.tar.gz' archive
RUN curl -L https://github.com/PowerShell/PowerShell/releases/download/v7.0.2/powershell-7.0.2-linux-alpine-x64.tar.gz -o /tmp/powershell.tar.gz
# Create the target folder where powershell will be placed
RUN mkdir -p /opt/microsoft/powershell/7
# Expand powershell to the target folder
RUN tar zxf /tmp/powershell.tar.gz -C /opt/microsoft/powershell/7
# Set execute permissions
RUN chmod +x /opt/microsoft/powershell/7/pwsh
# Create the symbolic link that points to pwsh
RUN ln -s /opt/microsoft/powershell/7/pwsh /usr/bin/pwsh
# Add powershell script
ADD ./main.ps1 .
# Start PowerShell
CMD pwsh main.ps1 -Verbose 

Running the container locally results in a continuous feed of output.
docker build -t logbuild .
docker run logbuild
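
Before the container instance can pull the image, it has to be pushed to a registry; with Azure Container Registry that looks roughly like this (a sketch, assuming a registry named mycontainers as used in the commands below).

az acr login --name mycontainers
docker tag logbuild mycontainers.azurecr.io/logbuild
docker push mycontainers.azurecr.io/logbuild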

Azure Container Instances

When running in an Azure Container Instance, the result is somewhat different: due to the bug, the log is truncated and a good 9,800 lines of output are missing.
az container create --resource-group my-containers --name logbuild --image mycontainers.azurecr.io/logbuild

Requesting the logs through the CLI with
az container logs --resource-group my-containers --name logbuild
or looking them up in the portal UI gives the same truncated result.
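
To put a number on the truncation, you can count the lines that come back (a quick check, assuming the container from the example above); it falls well short of the 10,000 lines the script writes.

az container logs --resource-group my-containers --name logbuild | wc -l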

Even attaching to the output stream does not work reliably: you get parts of the log, but the connection drops and other parts are missed.
az container attach --resource-group my-containers --name logbuild

Workaround

To work around this problem you can attach a file share to your container instance and copy the output stream to a file. Because you still want the output to show up in your console when you run it locally or without a file share, the easiest approach is to add a volume mapping to the Dockerfile and change the CMD to duplicate the stream to a file using tee.

VOLUME [ "/logme" ]
ADD ./main.ps1 .
CMD pwsh main.ps1 -Verbose | tee /logme/server.log

The problem with this is that it suppresses the exit code of the script: the exit code of a pipeline is that of the last executed command, which in our case is tee. To make sure we still get the script's exit code, we need to tell the shell to fail the pipeline if any command in it fails, which is what set -o pipefail does. Put on one line, it looks like this.

VOLUME [ "/logme" ]
ADD ./main.ps1 .
CMD set -o pipefail && pwsh main.ps1 -Verbose | tee /logme/server.log
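
A quick way to verify this locally is to force a failure, for example by temporarily adding an exit 1 at the end of main.ps1 (a hypothetical change just for testing), and checking that the non-zero code survives the pipe to tee.

docker build -t logbuild .
# prints 1 with pipefail set; without it, tee masks the failure and 0 is printed
docker run logbuild; echo "exit code: $?"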

Next, we need to start the container with a file share attached; in this article you can read more on how to enable and set that up. During the creation of the container instance you can then mount the file share to the volume created in your container.
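
If you do not have a file share yet, it can be created with the Azure CLI along these lines (a sketch; the names and the variables are placeholders matching the create command below).

# Example names; pick your own
ACI_PERS_RESOURCE_GROUP=my-containers
ACI_PERS_STORAGE_ACCOUNT_NAME=mylogstorage
ACI_PERS_SHARE_NAME=acilogshare

# Create the storage account and the file share
az storage account create \
    --resource-group $ACI_PERS_RESOURCE_GROUP \
    --name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --sku Standard_LRS

az storage share create \
    --name $ACI_PERS_SHARE_NAME \
    --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME

# Grab the storage account key, needed to mount the share
STORAGE_KEY=$(az storage account keys list \
    --resource-group $ACI_PERS_RESOURCE_GROUP \
    --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --query "[0].value" --output tsv)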

az container create \
    --resource-group my-containers \
    --name logbuild \
    --image mycontainers.azurecr.io/logbuild \
    --azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --azure-file-volume-account-key $STORAGE_KEY \
    --azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
    --azure-file-volume-mount-path /logme/

After your container has exited, you can query the file share and find the full log there.
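
The log file can be pulled down with the Azure CLI, for example (a sketch, assuming the variables from the file share setup above and the server.log name from the CMD).

az storage file download \
    --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --account-key $STORAGE_KEY \
    --share-name $ACI_PERS_SHARE_NAME \
    --path server.log \
    --dest ./server.log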

Summary

Instead of reading the log from the Azure Container Instance directly, redirect stdout to a file on an attached Azure file share and read the logs from there. Remember to clean up after you have retrieved the logs, and make sure you still get the exit code of your script so you can tell whether your container exited unexpectedly. The full scripts can be found on my GitHub.
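
To tell whether the container exited unexpectedly, you can query its exit code from the container group, and remove the log file once you have retrieved it (a sketch; the query path reflects the current ACI API shape).

az container show \
    --resource-group my-containers \
    --name logbuild \
    --query "containers[0].instanceView.currentState.exitCode" \
    --output tsv

az storage file delete \
    --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
    --account-key $STORAGE_KEY \
    --share-name $ACI_PERS_SHARE_NAME \
    --path server.log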

Please leave a comment if I missed something; maybe this blog can save you some time, because this can be a pain to figure out.

Erick
