date: '2025-03-25'
sections:
  security_fixes:
    - |
      Packages have been updated to the latest security versions.
  bugs:
    - |
      The `ghe-upgrade` command returned a zero exit code despite encountering errors.
    - |
      When performing an upgrade with an upgrade package, the process did not terminate when an invalid target partition was provided with the `-t` flag.
    - |
      Users could not use the `/manage/v1/config/apply` API endpoint to trigger the first configuration run on an instance.
    - |
      For instances in a high availability configuration, Elasticsearch indices were deleted on failover and when `ghe-repl-teardown REPLICA_HOSTNAME` was run from the primary instance. All indices are recoverable except audit log indices, whose source of truth is Elasticsearch itself.
    - |
      Restoring from a backup did not always apply the latest data from GitHub Actions. All GitHub Actions data is now restored with a backup.
    - |
      In Azure environments, running `ghe-single-config-apply` or `ghe-repl-setup` resulted in "Permission denied" errors during the pre-flight check.
    - |
      On instances with a GitHub Advanced Security license, some secret scanning alerts were opened incorrectly despite the relevant folders or files being excluded from secret scanning.
  changes:
    - |
      Elasticsearch shards are excluded from the replica node when stopping replication via `ghe-repl-stop`. To prevent Elasticsearch from being stopped before all shards have been removed, Elasticsearch is polled until the shard count on the replica node is zero, instead of waiting for a maximum timeout of 30 seconds.
    - |
      The bundled `actions/setup-dotnet` action has been updated with the latest versions from https://github.com/actions/setup-dotnet.
  known_issues:
    - |
      During the validation phase of a configuration run, a `No such object` error may occur for the Notebook and Viewscreen services. This error can be ignored, as the services should still start correctly.
    - |
      If the root site administrator is locked out of the Management Console after failed login attempts, the account does not unlock automatically after the defined lockout time. Someone with administrative SSH access to the instance must unlock the account using the administrative shell. For more information, see [AUTOTITLE](/admin/configuration/administering-your-instance-from-the-management-console/troubleshooting-access-to-the-management-console#unlocking-the-root-site-administrator-account).
    - |
      On an instance with the HTTP `X-Forwarded-For` header configured for use behind a load balancer, all client IP addresses in the instance's audit log erroneously appear as `127.0.0.1`.
    - |
      {% data reusables.release-notes.large-adoc-files-issue %}
    - |
      Admin stats REST API endpoints may time out on appliances with many users or repositories. Retrying the request until data is returned is advised.
    - |
      When following the instructions for [Replacing the primary database node](/admin/monitoring-and-managing-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-the-primary-database-node-mysql-or-mysql-and-mssql), `ghe-cluster-config-apply` might fail with errors. If this occurs, re-running `ghe-cluster-config-apply` is expected to succeed.
    - |
      Running a config apply as part of the steps for [Replacing a node in an emergency](/admin/monitoring-managing-and-updating-your-instance/configuring-clustering/replacing-a-cluster-node#replacing-a-node-in-an-emergency) may fail with errors if the node being replaced is still reachable. If this occurs, shut down the node and repeat the steps.
    - |
      {% data reusables.release-notes.2024-06-possible-frontend-5-minute-outage-during-hotpatch-upgrade %}
    - |
      When restoring data originally backed up from an appliance running version 3.13 or later, the Elasticsearch indices need to be reindexed before some of the data will appear. Reindexing happens via a nightly scheduled job, and can also be forced by running `/usr/local/share/enterprise/ghe-es-search-repair`.
    - |
      An organization-level code scanning configuration page is displayed on instances that do not use GitHub Advanced Security or code scanning.
    - |
      In the header bar displayed to site administrators, some icons are not available.
    - |
      When enabling automatic update checks for the first time in the Management Console, the status is not dynamically reflected until the "Updates" page is reloaded.
    - |
      When restoring from a backup snapshot, a large number of `mapper_parsing_exception` errors may be displayed.
    - |
      After a restore, existing outside collaborators cannot be added to repositories in a new organization. This issue can be resolved by running `/usr/local/share/enterprise/ghe-es-search-repair` on the appliance.
    - |
      After a geo-replica is promoted to be a primary by running `ghe-repl-promote`, the Actions page of a repository does not show any suggested workflows.