Category Archives: Azure DevOps

failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized

Symptoms:

AKS pod failed to start with the following error:

failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized.

Analysis:

The container needs an image pull secret to authenticate against Azure Container Registry (ACR); without one, the pull falls back to an anonymous token request, which ACR rejects with 401 Unauthorized.

Solution:

When using a deployment YAML file, we need to reference the pull secret within that YAML file, as sketched below.
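For reference, here is a minimal sketch of that approach (the secret name and image are placeholders, and the secret must already exist in the namespace, e.g. created with kubectl create secret docker-registry):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-function-app                                   # placeholder
spec:
  selector:
    matchLabels:
      app: my-function-app
  template:
    metadata:
      labels:
        app: my-function-app
    spec:
      imagePullSecrets:
        - name: acr-pull-secret                           # placeholder secret name
      containers:
        - name: my-function-app
          image: myregistry.azurecr.io/my-function-app:latest   # placeholder image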

When using the AzureFunctionOnKubernetes task, we need to add the following parameter to the Arguments section:

--pull-secret xxxxx
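In pipeline YAML, the same flag goes through the task's arguments input; a sketch, with a placeholder secret name:

- task: AzureFunctionOnKubernetes@1
  inputs:
    # other inputs omitted
    arguments: '--pull-secret acr-pull-secret'   # placeholder secret name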

##[error]Error from server (BadRequest): error when creating "build/dev2/functions.yaml": ConfigMap in version "v1" cannot be handled as a ConfigMap

Symptoms:

##[error]Error from server (BadRequest): error when creating "build/dev2/functions.yaml": ConfigMap in version "v1" cannot be handled as a ConfigMap

Analysis:

When running the AzureFunctionOnKubernetes task from an ADO pipeline to deploy an Azure FunctionApp with --write-configs and --config-file, it generates the deployment plan with a ConfigMap instead of a Secret.

However, when secretName is specified in the task, it runs the deployment command with "--secret-name xxx" behind the scenes.

The error happens when these two settings conflict with each other (a Secret was expected but a ConfigMap is being passed in).

Solution:

Leave the Secret Name field blank when configuring the AzureFunctionOnKubernetes task, as in the sketch below.
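A sketch of the task block with the field omitted (connection and app names are placeholders; verify the input names against your task version):

- task: AzureFunctionOnKubernetes@1
  inputs:
    kubernetesServiceConnection: 'my-aks-connection'        # placeholder
    dockerRegistryServiceConnection: 'my-acr-connection'    # placeholder
    appName: 'my-function-app'                              # placeholder
    # secretName is deliberately omitted so the task does not append
    # --secret-name and conflict with the generated ConfigMap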

No pods are running after a successful Azure FunctionApp deployment to AKS by AzureFunctionOnKubernetes

Symptoms:

No pods are running after a successful Azure FunctionApp deployment by AzureFunctionOnKubernetes

Analysis:

AzureFunctionOnKubernetes uses the Azure Functions Core Tools to deploy the FunctionApp with KEDA, and the default min-replicas in the generated ScaledObject is 0, so the deployment is scaled down to zero pods when there is no load.

We can see the deployment being scaled down by running "kubectl describe deployment xxxxx".

Solution:

Explicitly set min-replicas to 1 (or more) in the AzureFunctionOnKubernetes task:

 --min-replicas 1
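For example, as the arguments input in pipeline YAML (--max-replicas is optional and shown only for illustration):

- task: AzureFunctionOnKubernetes@1
  inputs:
    # other inputs omitted
    arguments: '--min-replicas 1 --max-replicas 10'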

##[error]Error building project – AzureFunctionOnKubernetes task

Symptoms:

##[error]Error building project – AzureFunctionOnKubernetes task

Analysis:

First, we need to turn on debug logging to see more details. In Azure Pipelines, this is done by setting the system.debug pipeline variable to true.
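For example, at the top of the pipeline YAML file:

variables:
  system.debug: true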

It turned out the task performs a dry run unless that option is explicitly turned off, and that was causing the problem.

Solution:

Turn off the dry-run option by entering the following into the Arguments section of the task:

 --dry-run false
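In pipeline YAML, this goes into the same arguments input as any other flag:

- task: AzureFunctionOnKubernetes@1
  inputs:
    # other inputs omitted
    arguments: '--dry-run false'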

##[error]error: SchemaError(com.azure.clusterconfig.v1alpha1.FluxConfig): invalid object doesn't have additional properties

Symptoms:

##[error]error: SchemaError(com.azure.clusterconfig.v1alpha1.FluxConfig): invalid object doesn't have additional properties

Analysis:

A newer version of kubectl is needed on the ADO Ubuntu agent; the older client fails schema validation for the FluxConfig custom resource.

Solution:

Upgrade kubectl on the ADO agent by adding a KubectlInstaller task to the pipeline YAML file:

- task: KubectlInstaller@0
  inputs:
    kubectlVersion: 'latest'

error converting YAML to JSON: yaml: line 2: found character that cannot start any token

Symptoms:

error converting YAML to JSON: yaml: line 2: found character that cannot start any token

Analysis:

This is a general YAML-to-JSON conversion error. In this case it came up during the setup of the Azure FunctionApp pipeline, for two reasons:

  1. The older version of kubectl on the ADO Ubuntu image could not recognize the data block.
  2. A parameter was used in the task without a value being passed.

Solution:

  1. Upgrade kubectl on the ADO agent by adding a KubectlInstaller task to the pipeline YAML file:

- task: KubectlInstaller@0
  inputs:
    kubectlVersion: 'latest'

  2. Pass values for all parameters the task references, as illustrated below.
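As an illustration of the second cause, a sketch with hypothetical names: a template parameter referenced by the task must either receive a value from the caller or carry a default, otherwise the rendered manifest can contain an empty or unresolved placeholder that breaks YAML parsing.

parameters:
  - name: configFilePath                       # hypothetical parameter name
    type: string
    default: 'build/dev2/functions.yaml'

steps:
  - task: AzureFunctionOnKubernetes@1
    inputs:
      # other inputs omitted
      arguments: '--config-file ${{ parameters.configFilePath }}'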

Upgrade to ADO 2020 stuck at step 71

Symptom

Step 71 (Pre Schema partition upgrade to Dev17.M146) of the database upgrade ran forever during the ADO upgrade to 2020.

Analysis

The status of the process showed "suspended".

By dumping the statement text from sys.dm_exec_sql_text with the query below, I found it is related to the AnalyticsInternal.prc_iSetTransformHold stored procedure:

SELECT dm_ws.wait_duration_ms,
dm_ws.wait_type,
dm_es.status,
dm_t.TEXT,
dm_qp.query_plan,
dm_ws.session_ID,
dm_es.cpu_time,
dm_es.memory_usage,
dm_es.logical_reads,
dm_es.total_elapsed_time,
dm_es.program_name,
DB_NAME(dm_r.database_id) DatabaseName,
-- Optional columns
dm_ws.blocking_session_id,
dm_r.wait_resource,
dm_es.login_name,
dm_r.command,
dm_r.last_wait_type
FROM sys.dm_os_waiting_tasks dm_ws
INNER JOIN sys.dm_exec_requests dm_r ON dm_ws.session_id = dm_r.session_id
INNER JOIN sys.dm_exec_sessions dm_es ON dm_es.session_id = dm_r.session_id
CROSS APPLY sys.dm_exec_sql_text (dm_r.sql_handle) dm_t
CROSS APPLY sys.dm_exec_query_plan (dm_r.plan_handle) dm_qp
WHERE dm_es.is_user_process = 1
GO

Inside the stored procedure, a loop repeatedly checks the transformation status and sleeps between attempts, and that WAITFOR DELAY '00:00:05' is exactly what we saw when querying the suspended session ID.

When I then queried the AnalyticsInternal.tbl_Batch table, I found a record with OperationActive set to 1.

Since ADO is offline at this point of the upgrade, that status will never change, so the try-and-wait loop runs forever.

Solution

After changing OperationActive to 0, the upgrade continued and finished without any further blocking.
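For reference, a minimal SQL sketch of that fix, using the table and column named above (inspect the rows first; the WHERE clause simply targets the active flag and may need narrowing for your collection database):

-- Find the stuck batch record(s)
SELECT * FROM AnalyticsInternal.tbl_Batch WHERE OperationActive = 1;

-- Clear the flag so the wait loop inside prc_iSetTransformHold can exit
UPDATE AnalyticsInternal.tbl_Batch
SET OperationActive = 0
WHERE OperationActive = 1;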

ElasticSearch for Azure DevOps install errors

Symptoms

cp "!ES_CLASSPATH!" "org.elasticsearch.tools.launchers.JvmOptionsParser" "!ES_JVM_OPTIONS!" || echo jvm_options_parser_failed"`) was unexpected at this time

Analysis

N/A

Workaround

  • Modify bin\ElasticSearch-Service.bat
    • Find the original troubled line
      • for /F "usebackq delims=" %%a in (`"%JAVA% -cp "!ES_CLASSPATH!" "org.elasticsearch.tools.launchers.JvmOptionsParser" "!ES_JVM_OPTIONS!" || echo jvm_options_parser_failed"`) do set JVM_OPTIONS=%%a
    • Manually run the command to get the list of options from the config file
      • "C:\Program Files (x86)\Java\zulu8.38.0.13-ca-jre8.0.212-win_x64\bin\java.exe" -cp "C:\ElasticSearch\elasticsearchv6.2\lib\*" "org.elasticsearch.tools.launchers.JvmOptionsParser" "C:\ElasticSearch\elasticsearchv6.2\config\jvm.options"
    • Embed the list back into the for-loop
      • for /f "tokens=*" %%a in ("-Dfile.encoding=UTF-8 -Dio.netty.noKeySetOptimization=true -Dio.netty.noUnsafe=true -Dio.netty.recycler.maxCapacityPerThread=0 -Djava.awt.headless=true -Djava.io.tmpdir=tmp -Djna.nosys=true -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Xms31000m -Xmx31000m -Xss1m -XX:+AlwaysPreTouch -XX:+HeapDumpOnOutOfMemoryError -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:-OmitStackTraceInFastThrow") do set JVM_OPTIONS=%%a

@endlocal & set "MAYBE_JVM_OPTIONS_PARSER_FAILED=%JVM_OPTIONS%" & set ES_JAVA_OPTS=%JVM_OPTIONS:${ES_TMPDIR}=!ES_TMPDIR!% %ES_JAVA_OPTS%

  • Zip the modified files back into the original zip package
  • Run Configure-ElasticSearch.ps1

Use OData to pull data from ADO

Data Source

Epic

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0" & "/WorkItems?$filter=WorkItemType eq '" & "Epic" & "' and State ne 'Removed' and CreatedDate ge 2020-01-01T00:00:00-06:00", null, [Implementation="2.0"]),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"WorkItemId", "PlanningType", "ReportCategory", "RequestedStartDate", "RequestedDueDate", "Date_Done", "EstimatedHours", "WorkedHours", "LeadTimeDays", "CycleTimeDays", "ProjectSK", "WorkItemRevisionSK", "AreaSK", "IterationSK", "AssignedToUserSK", "CreatedByUserSK", "Revision", "Watermark", "Title", "ParentWorkItemId", "WorkItemType", "ChangedDate", "CreatedDate", "State", "Effort", "TagNames", "StateCategory", "BoardLocations", "Teams", "Parent", "Iteration", "AssignedTo", "Tags"})
in
    #"Removed Other Columns"

Feature

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0" & "/WorkItems?$filter=WorkItemType eq '" & "Feature" & "' and State ne 'Removed' and CreatedDate ge 2020-01-01T00:00:00-06:00", null, [Implementation="2.0"]),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"WorkItemId", "LeadTimeDays", "CycleTimeDays", "ProjectSK", "WorkItemRevisionSK", "AreaSK", "IterationSK", "AssignedToUserSK", "CreatedByUserSK", "Revision", "Watermark", "Title", "ParentWorkItemId", "WorkItemType", "ChangedDate", "CreatedDate", "State", "Effort", "TagNames", "StateCategory", "BoardLocations", "Teams", "Parent", "Iteration", "AssignedTo", "Tags"})
in
    #"Removed Other Columns"

PBI

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0" & "/WorkItems?$filter=WorkItemType eq '" & "Product Backlog Item" & "' and State ne 'Removed' and CreatedDate ge 2020-01-01T00:00:00-06:00", null, [Implementation="2.0"]),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"WorkItemId", "PlanningType", "ReportCategory", "RequestedStartDate", "RequestedDueDate", "Date_Done", "EstimatedHours", "WorkedHours", "LeadTimeDays", "CycleTimeDays", "ProjectSK", "WorkItemRevisionSK", "AreaSK", "IterationSK", "AssignedToUserSK", "CreatedByUserSK", "Revision", "Watermark", "Title", "ParentWorkItemId", "WorkItemType", "ChangedDate", "CreatedDate", "State", "Effort", "TagNames", "StateCategory", "BoardLocations", "Teams", "Parent", "Iteration", "AssignedTo", "Tags"})
in
    #"Removed Other Columns"

Area

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0"),
    Areas_table = Source{[Name="Areas",Signature="table"]}[Data],
    #"Removed Columns" = Table.RemoveColumns(Areas_table,{"ProjectSK", "AreaId", "Number", "AreaLevel7", "AreaLevel8", "AreaLevel9", "AreaLevel10", "AreaLevel11", "AreaLevel12", "AreaLevel13", "AreaLevel14", "Depth", "Project", "Teams"})
in
    #"Removed Columns"

User

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0" & "/Users"),
    #"Removed Other Columns" = Table.SelectColumns(Source,{"UserName", "UserSK"})
in
    #"Removed Other Columns"

Iteration

let
    Source = OData.Feed("http://<adoServer>/<adoCollection>/<adoProject>/_odata/v1.0"),
    Iterations_table = Source{[Name="Iterations",Signature="table"]}[Data],
    #"Removed Columns" = Table.RemoveColumns(Iterations_table,{"ProjectSK", "IterationId", "IterationName", "Number", "IterationLevel7", "IterationLevel8", "IterationLevel9", "IterationLevel10", "IterationLevel11", "IterationLevel12", "IterationLevel13", "IterationLevel14", "Depth", "Project", "Teams"})
in
    #"Removed Columns"

Custom Fields/Columns

PBI

AssignedTo

AssignedTo = LOOKUPVALUE(oData_Users[UserName], oData_Users[UserSK], oData_PBI[AssignedToUserSK])

CreatedMonth

CreatedMonth = FORMAT(MONTH(oData_PBI[CreatedDate]), "00") & "-" & FORMAT( oData_PBI[CreatedDate], "mmmm")

CreatedWeek

CreatedWeek = "WK" & FORMAT( WEEKNUM(oData_PBI[CreatedDate]), "00")

IterationPath

IterationPath = LOOKUPVALUE(oData_Iterations[IterationPath], oData_Iterations[IterationSK], oData_PBI[IterationSK])

MyRequestedDueDate

myRequestedDueDate =
VAR dueOrDone = IF ( ISBLANK ( oData_PBI[RequestedDueDate] ), oData_PBI[Date_Done], oData_PBI[RequestedDueDate] )
RETURN IF ( ISBLANK ( dueOrDone ), TODAY () + 7 - WEEKDAY ( TODAY (), 2 ), dueOrDone )

MyRequestedStartDate

myRequestedStartDate = IF (oData_PBI[RequestedStartDate] = BLANK(), oData_PBI[CreatedDate], oData_PBI[RequestedStartDate])

WorkitemCountRunningTotal

WorkitemCountRunningTotal = CALCULATE(DISTINCTCOUNT(oData_PBI[Title]), FILTER( ALL( oData_PBI ), oData_PBI[CreatedDate] <= EARLIER( oData_PBI[CreatedDate])))

Feature

PBICompletedCount

PBICompletedCount =
VAR num = CALCULATE ( COUNTROWS ( RELATEDTABLE ( oData_PBI ) ), FILTER ( RELATEDTABLE ( oData_PBI ), oData_PBI[StateCategory] = "Completed" ) )
RETURN IF ( ISBLANK ( num ), 0, num )

PBIPercentOfCompletion

PBIPercentOfCompletion =
VAR perc = [PBICompletedCount] / [PBITotalCount]
RETURN IF ( [PBITotalCount] = 0, 0, IFERROR ( perc, 0 ) )

PBITotalCount

PBITotalCount =
VAR num = CALCULATE ( COUNTROWS ( RELATEDTABLE ( oData_PBI ) ) )
RETURN IF ( ISBLANK ( num ), 0, num )