Dataset schema (27 columns; dtype and observed range or class count per column):

| column | dtype | observed range / classes |
|---|---|---|
| repo | string | 21 classes |
| pull_number | float64 | 45 to 194k |
| instance_id | string | length 16 to 34 |
| issue_numbers | string | length 6 to 27 |
| base_commit | string | length 40 |
| patch | string | length 263 to 270k |
| test_patch | string | length 312 to 408k |
| problem_statement | string | length 38 to 47.6k |
| hints_text | string (nullable) | length 1 to 257k |
| created_at | string (date) | 2016-01-11 17:37:29 to 2024-10-18 14:52:41 |
| language | string | 4 classes |
| Dockerfile | string | 279 classes |
| P2P | string | length 2 to 10.2M |
| F2P | string | length 11 to 38.9k |
| F2F | string | 86 classes |
| test_command | string | length 27 to 11.4k |
| task_category | string | 5 classes |
| is_no_nodes | bool | 2 classes |
| is_func_only | bool | 2 classes |
| is_class_only | bool | 2 classes |
| is_mixed | bool | 2 classes |
| num_func_changes | int64 | 0 to 238 |
| num_class_changes | int64 | 0 to 70 |
| num_nodes | int64 | 0 to 264 |
| is_single_func | bool | 2 classes |
| is_single_class | bool | 2 classes |
| modified_nodes | string | length 2 to 42.2k |
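
The column layout above is the standard Hugging Face `datasets` schema, so the records can be loaded and sliced programmatically. A minimal sketch follows; the Hub ID "org/swe-task-instances" and the "train" split are placeholders, since this dump does not name the dataset. The sample records after the sketch preserve the dump's `...` truncations in long fields.

```python
# Hedged sketch: load the dataset described by the schema above and slice it
# by its categorical columns. The Hub ID and split name are assumptions.
from datasets import load_dataset

ds = load_dataset("org/swe-task-instances", split="train")  # hypothetical ID

# Narrow to single-function TypeScript bug fixes using the boolean and
# categorical columns from the schema.
subset = ds.filter(
    lambda r: r["language"] == "TypeScript"
    and r["task_category"] == "Bug Fix"
    and r["is_single_func"]
)
print(len(subset), "instances, e.g.", subset[0]["instance_id"])
```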

---

**repo:** coder/code-server · **pull_number:** 4,597 · **instance_id:** coder__code-server-4597 · **issue_numbers:** ['4176'] · **base_commit:** 9e583fa562322bfba95ec06c0537d112f51d61eb · **created_at:** 2021-12-09 20:43:13+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/ci/helm-chart/Chart.yaml b/ci/helm-chart/Chart.yaml
--- a/ci/helm-chart/Chart.yaml
+++ b/ci/helm-chart/Chart.yaml
@@ -20,4 +20,4 @@ version: 1.0.5
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Ver...

**test_patch:**

diff --git a/test/unit/node/test-plugin/package.json b/test/unit/node/test-plugin/package.json
--- a/test/unit/node/test-plugin/package.json
+++ b/test/unit/node/test-plugin/package.json
@@ -3,7 +3,7 @@
  "name": "test-plugin",
  "version": "1.0.0",
  "engines": {
-    "code-server": "^3.7.0"
+    "code-server": "^4...

**problem_statement:**

release: 4.0.0
<!-- Maintainer: fill out the checklist -->
## Checklist
- [x] Assign to next release manager
- [x] Close previous release milestone
- [x] Create next release milestone
- [x] Associate issue with next release milestone

**hints_text:**

Any progress? There were some problems with the previous release. I want to experience 3.12.1
@pavlelee Very close! You'll see some remaining TODOs from [this PR](https://github.com/cdr/code-server/pull/4414). We need to create issues and add those to [this milestone](https://github.com/cdr/code-server/milestone/32). F...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:14
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/common/emitter.test.ts->should run the correct callbacks', '/testbed/test/unit/node/proxy_agent.test.ts->should return false when NO_PROXY is set to https://example.com', '/testbed/test/unit/node/util.test.ts->should return true with a hashedPassword for a SHA256 password', '/testbed/test/unit/node...

**F2P:** ['/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/test-app (websocket)', '/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/error', '/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/test-app', '/testbed/test/unit/node/plugin.test.ts->plugin /api/applications']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: true · is_func_only: false · is_class_only: false · is_mixed: false · num_func_changes: 0 · num_class_changes: 0 · num_nodes: 0 · is_single_func: false · is_single_class: false · **modified_nodes:** []
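
In the record above, `issue_numbers`, `P2P`, `F2P`, `F2F`, and `modified_nodes` are stored as stringified Python lists; the test columns use `<test file>-><test name>` entries. A minimal parsing sketch, assuming full untruncated field values (the `parse_tests` helper is hypothetical):

```python
import ast

def parse_tests(field):
    """Split a stringified test list (the P2P/F2P/F2F format above) into
    (test_file, test_name) pairs. A null field parses to an empty list."""
    if field in (None, "null"):
        return []
    return [tuple(entry.split("->", 1)) for entry in ast.literal_eval(field)]

pairs = parse_tests(
    "['/testbed/test/unit/node/plugin.test.ts->plugin /api/applications']"
)
print(pairs)  # [('/testbed/test/unit/node/plugin.test.ts', 'plugin /api/applications')]
```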

---

**repo:** coder/code-server · **pull_number:** 4,678 · **instance_id:** coder__code-server-4678 · **issue_numbers:** ['4675'] · **base_commit:** 3d999986b28fc01148650fc1122d321e16950ea2 · **created_at:** 2022-01-04 17:27:59+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -22,6 +22,14 @@ VS Code v99.99.999
## [Unreleased](https://github.com/cdr/code-server/releases)
+VS Code v0.00.0
+
+### Changed
+
+- Add here
+
+## [4.0.1](https://github.com/cdr/code-server/releases/tag/v4.0.1) - 2022-01-04
+
VS Co...

**test_patch:**

diff --git a/test/e2e/extensions.test.ts b/test/e2e/extensions.test.ts
--- a/test/e2e/extensions.test.ts
+++ b/test/e2e/extensions.test.ts
@@ -7,6 +7,6 @@ describe("Extensions", true, () => {
    await codeServerPage.executeCommandViaMenus("code-server: Get proxy URI")
-    await codeServerPage.page.waitForSelecto...

**problem_statement:**

release: 4.0.1
<!-- Maintainer: fill out the checklist -->
## Checklist
- [x] Assign to next release manager
- [x] Close previous release milestone
- [x] Create next release milestone
- [x] Associate issue with next release milestone

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:14
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ["/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/update.test.ts->should keep existing information', '/testbed/test/unit/node/routes/health.test.ts->/healthz (websocket)', '/testbed/test/unit/node/util.test.ts->should return true if is match', '/testbed/t...

**F2P:** ['/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/test-app (websocket)', '/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/error', '/testbed/test/unit/node/plugin.test.ts->plugin /test-plugin/test-app', '/testbed/test/unit/node/plugin.test.ts->plugin /api/applications']

**F2F:** ['/testbed/test/unit/node/routes/vscode.test.ts->vscode should not redirect when last opened is ignored', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should have a default workspace', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should redirect to last query folder/workspace', '/testbed/test/unit/n...

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: true · is_func_only: false · is_class_only: false · is_mixed: false · num_func_changes: 0 · num_class_changes: 0 · num_nodes: 0 · is_single_func: false · is_single_class: false · **modified_nodes:** []

---

**repo:** coder/code-server · **pull_number:** 4,680 · **instance_id:** coder__code-server-4680 · **issue_numbers:** ['4600'] · **base_commit:** 7695de2831b774a63ca3d8947bb8b3154799b81d · **created_at:** 2022-01-04 18:02:33+00:00 · **language:** TypeScript · **task_category:** Bug Fix

**patch:**

diff --git a/CHANGELOG.md b/CHANGELOG.md
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -32,12 +32,11 @@ implementation (#4414).
- Web socket compression has been made the default (when supported). This means
  the `--enable` flag will no longer take `permessage-deflate` as an option.
- Extra extension directories have be...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -63,6 +63,8 @@ describe("parser", () => {
    "--verbose",
    "2",
+    ["--locale", "ja"],
+
    ["--log", "error"],
    "--help",
@@ -103,6 +...

**problem_statement:**

Builtin extensions always require reload
@bpmct found an issue with the extensions panel while testing the [4.0.0 release](https://github.com/cdr/code-server/pull/4597#issuecomment-990381354).
## Steps to Reproduce
1. run code-server with 0 extensions installed
```shell
# create an empty directory
# that way w...
```

**hints_text:**

We don't think this is an issue but will retest after 4.0.0 is out.
### Actual (https://vscode-r.jupyter.b-data.ch, v4.0.0, empty `~/.local/share/code-server/extensions`)
Shows popular extensions. Says 'Reload Required' for _builtin_ extension; pre-installed extensions* not affected.
*Pre-installed using `code-se...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:14
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ["/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/update.test.ts->should keep existing information', '/testbed/test/unit/node/routes/health.test.ts->/healthz (websocket)', '/testbed/test/unit/node/util.test.ts->should return true if is match', '/testbed/t...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should parse all available options']

**F2F:** ['/testbed/test/unit/node/routes/vscode.test.ts->vscode should not redirect when last opened is ignored', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should have a default workspace', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should redirect to last query folder/workspace', '/testbed/test/unit/n...

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: true · is_func_only: false · is_class_only: false · is_mixed: false · num_func_changes: 0 · num_class_changes: 0 · num_nodes: 0 · is_single_func: false · is_single_class: false · **modified_nodes:** []

---

**repo:** coder/code-server · **pull_number:** 4,923 · **instance_id:** coder__code-server-4923 · **issue_numbers:** ['1466'] · **base_commit:** 78658f1cf48a5e019a82cde937cfa8feed8b986b · **created_at:** 2022-02-28 14:07:07+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/src/node/app.ts b/src/node/app.ts
--- a/src/node/app.ts
+++ b/src/node/app.ts
@@ -11,7 +11,7 @@ import { disposer } from "./http"
import { isNodeJSErrnoException } from "./util"
import { handleUpgrade } from "./wsRouter"
-type ListenOptions = Pick<DefaultedArgs, "socket" | "port" | "host">
+type Listen...

**test_patch:**

diff --git a/test/unit/node/app.test.ts b/test/unit/node/app.test.ts
--- a/test/unit/node/app.test.ts
+++ b/test/unit/node/app.test.ts
@@ -107,6 +107,18 @@ describe("createApp", () => {
    app.dispose()
  })
+  it("should change the file mode of a socket", async () => {
+    const defaultArgs = await setDefaults({...

**problem_statement:**

Add option to set unix socket permissions
Hello,
when using the --socket option, I can tell code-server which socket to use, but not the permissions. At the moment the default permissions are 0755, which means that only the user is able to write to the socket while it's world readable...
When running together wit...

**hints_text:**

I'd agree with this. Setting users/groups seems a bit odd to me though. Is there an example of software you know that has this syntax?
Usually a program/system has a configuration file where these settings are defined in. As most of the socket related stuff is handled by systemd on a newer Linux system, the settings lo...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:14
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ["/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/update.test.ts->should keep existing information', '/testbed/test/unit/node/routes/health.test.ts->/healthz (websocket)', '/testbed/test/unit/node/util.test.ts->should return true if is match', '/testbed/t...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should parse all available options', '/testbed/test/unit/node/cli.test.ts->parser should override with --link']

**F2F:** ['/testbed/test/unit/node/routes/vscode.test.ts->vscode should do nothing when nothing is passed in', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should load all route variations', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should not redirect when last opened is ignored', '/testbed/test/unit/nod...

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/node/cli.ts->program->function_declaration:setDefaults"]

---

**repo:** coder/code-server · **pull_number:** 4,970 · **instance_id:** coder__code-server-4970 · **issue_numbers:** ['4915'] · **base_commit:** 77296c7187998408a7cfc793974494262aa4a634 · **created_at:** 2022-03-09 22:27:12+00:00 · **language:** TypeScript · **task_category:** Testing

**patch:**

diff --git a/src/node/cli.ts b/src/node/cli.ts
--- a/src/node/cli.ts
+++ b/src/node/cli.ts
@@ -120,11 +120,11 @@ type OptionType<T> = T extends boolean
  ? "string[]"
  : "unknown"
-type Options<T> = {
+export type Options<T> = {
  [P in keyof T]: Option<OptionType<T[P]>>
}
-const options: Options<Required<User...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -13,6 +13,11 @@ import {
  shouldOpenInExistingInstance,
  splitOnFirstEquals,
  toVsCodeArgs,
+  optionDescriptions,
+  options,
+  Options,
+  AuthType,
+  OptionalString,
} fr...

**problem_statement:**

[Testing]: write tests for optionDescriptions
We're missing coverage for L240-260 in `src/node/cli.ts`:
https://github.com/coder/code-server/blob/main/src/node/cli.ts#L239-L266
Fix this by writing a couple tests for: `optionDescriptions`

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:14
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ["/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/update.test.ts->should keep existing information', '/testbed/test/unit/node/routes/health.test.ts->/healthz (websocket)', '/testbed/test/unit/node/util.test.ts->should return true if is match', '/testbed/t...

**F2P:** ['/testbed/test/unit/node/update.test.ts->update should not reject if unable to fetch']

**F2F:** ['/testbed/test/unit/node/routes/vscode.test.ts->vscode should redirect to the passed in folder using human-readable query', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should not redirect when last opened is ignored', '/testbed/test/unit/node/routes/vscode.test.ts->vscode should redirect to the passed in wo...

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: true · is_func_only: false · is_class_only: false · is_mixed: false · num_func_changes: 0 · num_class_changes: 0 · num_nodes: 0 · is_single_func: false · is_single_class: false · **modified_nodes:** []

---

**repo:** coder/code-server · **pull_number:** 5,633 · **instance_id:** coder__code-server-5633 · **issue_numbers:** ['5632'] · **base_commit:** 71a127a62befeff1d55efe70be8f182e01cb29b6 · **created_at:** 2022-10-09 14:39:46+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/src/browser/pages/login.html b/src/browser/pages/login.html
--- a/src/browser/pages/login.html
+++ b/src/browser/pages/login.html
@@ -10,7 +10,7 @@
    http-equiv="Content-Security-Policy"
    content="style-src 'self'; script-src 'self' 'unsafe-inline'; manifest-src 'self'; img-src 'self' data:; fon...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -67,6 +67,8 @@ describe("parser", () => {
    "1",
    "--verbose",
+    ["--app-name", "custom instance name"],
+    ["--welcome-text", "welcome to code"...

**problem_statement:**

[Feat]: allow setting the app name and a welcome text on login page
## What is your suggestion?
allowing to change the text and app / instance name on the login page
## Why do you want this feature?
telling apart multiple instances
## Are there any workarounds to get this functionality today?
you can fork ...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', "/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/util.test.ts->should return f...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should parse all available options', '/testbed/test/unit/node/routes/login.test.ts->login /login should return correct welcome text when none is set but app-name is', '/testbed/test/unit/node/routes/login.test.ts->login /login should return correct welcome text', '/testbed/...

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: true · is_func_only: false · is_class_only: false · is_mixed: false · num_func_changes: 0 · num_class_changes: 0 · num_nodes: 0 · is_single_func: false · is_single_class: false · **modified_nodes:** []

---

**repo:** coder/code-server · **pull_number:** 5,707 · **instance_id:** coder__code-server-5707 · **issue_numbers:** ['5661'] · **base_commit:** ca182b9fb51e2b1683d6e154ba5086fc7e8c3238 · **created_at:** 2022-10-25 21:58:46+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/docs/FAQ.md b/docs/FAQ.md
--- a/docs/FAQ.md
+++ b/docs/FAQ.md
@@ -32,6 +32,7 @@
- [Does code-server have any security login validation?](#does-code-server-have-any-security-login-validation)
- [Are there community projects involving code-server?](#are-there-community-projects-involving-code-server)
- [H...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -43,6 +43,7 @@ describe("parser", () => {
    delete process.env.LOG_LEVEL
    delete process.env.PASSWORD
    delete process.env.CS_DISABLE_FILE_DOWNLOADS
+    delete process.env...

**problem_statement:**

[Feat]: Promote coder/coder in Get Started screen
Our [new project](https://github.com/coder/coder) is a natural extension of code-server and relevant to those attempting to set up code-server for their teams.
Let's add a loud callout to the Welcome Screen that says something to the effect of "Setting up code-server...

**hints_text:**

To get to this, use Command Palette > Help: Get Started
Here's what it looks like:
<img width="1712" alt="image" src="https://user-images.githubusercontent.com/3806031/197252800-ab8981cd-c54e-43bf-b9ba-e9425a871283.png">

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', "/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/util.test.ts->should return f...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should parse all available options', '/testbed/test/unit/node/cli.test.ts->parser should use env var CS_DISABLE_GETTING_STARTED_OVERRIDE', '/testbed/test/unit/node/cli.test.ts->parser should use env var CS_DISABLE_GETTING_STARTED_OVERRIDE set to true']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/node/cli.ts->program->function_declaration:setDefaults"]

---

**repo:** coder/code-server · **pull_number:** 6,115 · **instance_id:** coder__code-server-6115 · **issue_numbers:** ['5311'] · **base_commit:** a44bd71043d5550f751ff6d06d6ea16ac2742118 · **created_at:** 2023-03-28 20:03:27+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/src/node/cli.ts b/src/node/cli.ts
--- a/src/node/cli.ts
+++ b/src/node/cli.ts
@@ -571,6 +571,9 @@ export async function setDefaults(cliArgs: UserProvidedArgs, configArgs?: Config
  // Filter duplicate proxy domains and remove any leading `*.`.
  const proxyDomains = new Set((args["proxy-domain"] || []).m...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -43,6 +43,7 @@ describe("parser", () => {
    delete process.env.PASSWORD
    delete process.env.CS_DISABLE_FILE_DOWNLOADS
    delete process.env.CS_DISABLE_GETTING_STARTED_OVERRI...

**problem_statement:**

[Feat]: make VSCODE_PROXY_URI use the subdomain proxy when it is enabled
## What is your suggestion?
When `VSCODE_PROXY_URI` is enabled, use the subdomain proxy.
## Why do you want this feature?
Popular extensions like Tabnine can't use relative paths and need to be able to talk to code-server on specific por...

**hints_text:**

We might also want a way to override this for cases like Coder where we already provide a subdomain proxy outside of code-server. For this we can probably just check if that variable is already set and if so avoid overriding.
To implement we need to check the `proxy-domain` flag and use that in the environment vari...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', "/testbed/test/unit/node/util.test.ts->should return ARGON2 for password with 'argon2'", '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/util.test.ts->should return f...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should set proxy uri to first domain', '/testbed/test/unit/node/cli.test.ts->parser should set proxy uri']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/node/cli.ts->program->function_declaration:setDefaults"]

---

**repo:** coder/code-server · **pull_number:** 6,225 · **instance_id:** coder__code-server-6225 · **issue_numbers:** ['6195'] · **base_commit:** 74af05dfbe0d5085ad2d1b71685cac4638372657 · **created_at:** 2023-05-20 11:02:02+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/patches/proxy-uri.diff b/patches/proxy-uri.diff
--- a/patches/proxy-uri.diff
+++ b/patches/proxy-uri.diff
@@ -113,7 +113,7 @@ Index: code-server/lib/vscode/src/vs/code/browser/workbench/workbench.ts
interface ICredential {
  service: string;
-@@ -511,6 +512,38 @@ function doCreateUri(path: string, qu...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -413,7 +413,7 @@ describe("parser", () => {
    const defaultArgs = await setDefaults(args)
    expect(defaultArgs).toEqual({
      ...defaults,
-      "proxy-domain": ["coder.com...

**problem_statement:**

Support proxying ports without separate sub-domains
### Is there an existing issue for this?
- [X] I have searched the existing issues
### OS/Web Information
- Web Browser: EDGE
- Local OS: Windows
- Remote OS: Linux
- Remote Architecture: x64
- `code-server --version`: 4.12.0
### Steps to Reproduce
config e...

**hints_text:**

Ah yeah the subdomain proxy requires that the port be the first and only part of the sub-domain, so something like `{{port}}.code.domain.tld` (with `proxy-domain` set to `code.domain.tld`) or `{{port}}.domain.tld` (with `proxy-domain` set to `domain.tld`) instead would work.
> Ah yeah the subdomain proxy requires that ...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/http.test.ts->http://localhost:8080 -> [forwarded: for=127.0.0.1, host=localhost:8081, proto=http]', '/testbed/test/un...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->parser should set proxy uri to first domain', '/testbed/test/unit/node/cli.test.ts->parser should set proxy uri', '/testbed/test/unit/node/cli.test.ts->parser should filter proxy domains']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 2 · num_class_changes: 0 · num_nodes: 2 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/node/cli.ts->program->function_declaration:setDefaults", "src/node/http.ts->program->function_declaration:getHost"]

---

**repo:** coder/code-server · **pull_number:** 6,278 · **instance_id:** coder__code-server-6278 · **issue_numbers:** ['6275'] · **base_commit:** 5d3c9edce436d11d51aa1e586c11eaa49d626dc2 · **created_at:** 2023-06-20 20:46:46+00:00 · **language:** TypeScript · **task_category:** Bug Fix

**patch:**

diff --git a/src/node/main.ts b/src/node/main.ts
--- a/src/node/main.ts
+++ b/src/node/main.ts
@@ -126,7 +126,9 @@ export const runCodeServer = async (
  logger.info(`Using config file ${humanPath(os.homedir(), args.config)}`)
  logger.info(`${protocol.toUpperCase()} server listening on ${serverAddress.toString()}`...

**test_patch:**

diff --git a/test/unit/node/vscodeSocket.test.ts b/test/unit/node/vscodeSocket.test.ts
--- a/test/unit/node/vscodeSocket.test.ts
+++ b/test/unit/node/vscodeSocket.test.ts
@@ -1,5 +1,50 @@
-import { EditorSessionManager } from "../../../src/node/vscodeSocket"
-import { clean, tmpdir, listenOn } from "../../utils/helpers...

**problem_statement:**

[Bug]: Can't start 2 instances of code-server `4.14.0` for separate users
### Is there an existing issue for this?
- [X] I have searched the existing issues
### OS/Web Information
- Web Browser: Chrome
- Local OS: Ubuntu
- Remote OS: Windows
- Remote Architecture: amd64
- `code-server --version`: 4.14.0
...

**hints_text:**

Having the same issue, this totally bricked our shared development environment.
**EDIT** For anyone else who ends up here, a downgrade worked for my environment.
Before we were writing to a file instead of a socket and I think
we must have been ignoring write errors but with the new system we
do not. We may want ...

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/http.test.ts->http://localhost:8080 -> [forwarded: for=127.0.0.1, host=localhost:8081, proto=http]', '/testbed/test/un...

**F2P:** ['/testbed/test/unit/node/vscodeSocket.test.ts->DEFAULT_SOCKET_PATH should be a unique path per user', '/testbed/test/unit/node/vscodeSocket.test.ts->makeEditorSessionManagerServer warns if socket cannot be created']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/node/vscodeSocket.ts->program->function_declaration:makeEditorSessionManagerServer"]

---

**repo:** coder/code-server · **pull_number:** 6,423 · **instance_id:** coder__code-server-6423 · **issue_numbers:** ['6422'] · **base_commit:** 913fc3086678a9f265bdcb8ebbc68c1c199c33a7 · **created_at:** 2023-09-08 02:49:51+00:00 · **language:** TypeScript · **task_category:** Feature

**patch:**

diff --git a/src/node/cli.ts b/src/node/cli.ts
--- a/src/node/cli.ts
+++ b/src/node/cli.ts
@@ -732,6 +732,9 @@ export function bindAddrFromArgs(addr: Addr, args: UserProvidedArgs): Addr {
  if (args["bind-addr"]) {
    addr = parseBindAddr(args["bind-addr"])
  }
+  if (process.env.CODE_SERVER_HOST) {
+    addr.host ...

**test_patch:**

diff --git a/test/unit/node/cli.test.ts b/test/unit/node/cli.test.ts
--- a/test/unit/node/cli.test.ts
+++ b/test/unit/node/cli.test.ts
@@ -789,6 +789,50 @@ describe("bindAddrFromArgs", () => {
    expect(actual).toStrictEqual(expected)
  })
+  it("should use process.env.CODE_SERVER_HOST if set", () => {
+    const ...

**problem_statement:**

[Feat]: Set the host address with environment variable
## What is your suggestion?
It would be nice if we could set the host address with an environment variable, just as like as the port.
## Why do you want this feature?
There is a [docker-based project](https://github.com/linuxserver/docker-code-server), and I c...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/node:16
RUN apt-get update && apt-get install -y git build-essential g++ libx11-dev libkrb5-dev gnupg unzip curl wget software-properties-common && curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | bash ...

**P2P:** ['/testbed/test/unit/node/heart.test.ts->should log a warning when isActive rejects', '/testbed/test/unit/node/routes/login.test.ts->should return correct app-name when unset', '/testbed/test/unit/node/http.test.ts->http://localhost:8080 -> [forwarded: for=127.0.0.1, host=localhost:8081, proto=http]', '/testbed/test/un...

**F2P:** ['/testbed/test/unit/node/cli.test.ts->bindAddrFromArgs should use process.env.CODE_SERVER_HOST if set']

**F2F:** ['/testbed/test/unit/node/testbed.test.ts->createApp should unlink a socket before listening on the socket']

**test_command:** `yarn test:unit --json --silent`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/node/cli.ts->program->function_declaration:bindAddrFromArgs"]

---

**repo:** Significant-Gravitas/AutoGPT · **pull_number:** 4,652 · **instance_id:** Significant-Gravitas__AutoGPT-4652 · **issue_numbers:** ['3681'] · **base_commit:** 9150f32f8b8602395534795ddd2d930a1684e419 · **created_at:** 2023-06-11 09:10:14+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/autogpt/memory/message_history.py b/autogpt/memory/message_history.py
--- a/autogpt/memory/message_history.py
+++ b/autogpt/memory/message_history.py
@@ -14,7 +14,8 @@
  is_string_valid_json,
)
from autogpt.llm.base import ChatSequence, Message, MessageRole, MessageType
-from autogpt.llm.utils import ...

**test_patch:**

diff --git a/tests/unit/test_message_history.py b/tests/unit/test_message_history.py
new file mode 100644
--- /dev/null
+++ b/tests/unit/test_message_history.py
@@ -0,0 +1,145 @@
+import math
+import time
+from unittest.mock import MagicMock
+
+import pytest
+
+from autogpt.agent import Agent
+from autogpt.config impor...

**problem_statement:**

COMMAND = list_files - openai.error.InvalidRequestError: This model's maximum context length is 4097 tokens
### ⚠️ Search for existing issues first ⚠️
- [X] I have searched the existing issues, and there is no existing issue for my problem
### Which Operating System are you using?
Docker
### Which version of Auto-...

**hints_text:**

I also encountered the same problem and couldn't continue the project
It's coming from updating memory summary. That appears to be a global behaviour. Your are constrained by 4096 tokens context window given the model you are using - likely gpt 3.5 - if you used gpt-4, you would not error out here. I can think of addin...

**Dockerfile:**

# Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
# Set the working directory in the container
WORKDIR /testbed
# Install git and other dependencies
RUN apt-get update && apt-get install -y git
# Copy the current directory contents into the container at /testbed
C...

**P2P:** []

**F2P:** ['tests/unit/test_message_history.py:None:test_message_history_batch_summary']

**F2F:** null

**test_command:** `python -m pytest /testbed/tests/unit/test_message_history.py -v`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 2 · num_class_changes: 1 · num_nodes: 3 · is_single_func: false · is_single_class: false · **modified_nodes:** ["autogpt/memory/message_history.py->module->class_definition:MessageHistory->function_definition:update_running_summary", "autogpt/memory/message_history.py->module->class_definition:MessageHistory->function_definition:summarize_batch", "autogpt/memory/message_history.py->module->class_definition:MessageHistory"]

---

**repo:** huggingface/transformers · **pull_number:** 3,147 · **instance_id:** huggingface__transformers-3147 · **issue_numbers:** ['3093'] · **base_commit:** 1741d740f2c557c817dbed4ddf89bcb14f211e7d · **created_at:** 2020-03-05 21:15:10+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/configuration_utils.py b/src/transformers/configuration_utils.py
--- a/src/transformers/configuration_utils.py
+++ b/src/transformers/configuration_utils.py
@@ -98,6 +98,18 @@ def __init__(self, **kwargs):
  logger.error("Can't set {} with value {} for {}".format(key, value,...

**test_patch:**

diff --git a/tests/test_configuration_common.py b/tests/test_configuration_common.py
--- a/tests/test_configuration_common.py
+++ b/tests/test_configuration_common.py
@@ -57,8 +57,18 @@ def create_and_test_config_from_and_save_pretrained(self):
  self.parent.assertEqual(config_second.to_dict(), config_first.to...

**problem_statement:**

wrong 'label2id' and 'id2label' in config when loading from pretrained
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Bert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modi...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y git build-essential && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Instal...

**P2P:** ['tests/test_configuration_auto.py:AutoConfigTest:test_config_model_type_from_local_file', 'tests/test_configuration_auto.py:AutoConfigTest:test_pattern_matching_fallback', 'tests/test_configuration_auto.py:AutoConfigTest:test_config_model_type_from_model_identifier', 'tests/test_configuration_auto.py:AutoConfigTest:te...

**F2P:** ['tests/test_modeling_tf_roberta.py:TFRobertaModelTest:test_config', 'tests/test_modeling_bart.py:BARTModelTest:test_config', 'tests/test_modeling_ctrl.py:CTRLModelTest:test_config', 'tests/test_modeling_gpt2.py:GPT2ModelTest:test_config', 'tests/test_modeling_tf_gpt2.py:TFGPT2ModelTest:test_config', 'tests/test_modeli...

**F2F:** null

**test_command:** `sh -c "PYTHONPATH=/testbed pytest -v tests/ -k 'test_config' --junitxml=test-results.xml"`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 1 · num_class_changes: 1 · num_nodes: 2 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/configuration_utils.py->module->class_definition:PretrainedConfig->function_definition:num_labels", "src/transformers/configuration_utils.py->module->class_definition:PretrainedConfig"]

---

**repo:** huggingface/transformers · **pull_number:** 3,198 · **instance_id:** huggingface__transformers-3198 · **issue_numbers:** ['2508'] · **base_commit:** 292186a3e7e1a819aa591901591673639c752157 · **created_at:** 2020-03-09 22:43:53+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/tokenization_xlm_roberta.py b/src/transformers/tokenization_xlm_roberta.py
--- a/src/transformers/tokenization_xlm_roberta.py
+++ b/src/transformers/tokenization_xlm_roberta.py
@@ -104,6 +104,7 @@ class XLMRobertaTokenizer(PreTrainedTokenizer):
  vocab_files_names = VOCAB_FILES_NAMES
  ...

**test_patch:**

diff --git a/tests/test_tokenization_xlm_roberta.py b/tests/test_tokenization_xlm_roberta.py
--- a/tests/test_tokenization_xlm_roberta.py
+++ b/tests/test_tokenization_xlm_roberta.py
@@ -14,14 +14,113 @@
# limitations under the License.
+import os
import unittest
-from transformers.tokenization_xlm_roberta impo...

**problem_statement:**

XLMRobertaTokenizer is a wrong tokenizer for XLMRoberta
## 🐛 Bug
<!-- Important information -->
Model I am using (Bert, XLNet....): XLMRoberta
Language I am using the model on (English, Chinese....): multi-language, but mostly english
The problem arise when:
try to tokenise a sentence that contains the ...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_encode_plus_with_padding', 'tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_encode_input_type', 'tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_batch_encode_plus_padding', 'tests/test_tokenization_xlm...

**F2P:** ['tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_add_special_tokens', 'tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_full_tokenizer', 'tests/test_tokenization_xlm_roberta.py:XLMRobertaTokenizationTest:test_added_tokens_do_lower_case', 'tests/test_tokenization_xlm_roberta...

**F2F:** null

**test_command:** `python -m pytest -v /testbed/tests/test_tokenization_xlm_roberta.py`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 2 · num_class_changes: 2 · num_nodes: 4 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/tokenization_xlm_roberta.py->module->class_definition:XLMRobertaTokenizer->function_definition:vocab_size", "src/transformers/tokenization_xlm_roberta.py->module->class_definition:XLMRobertaTokenizer->function_definition:__init__", "src/transformers/tokenization_xlm_roberta.py->module->class_definiti...

---

**repo:** huggingface/transformers · **pull_number:** 3,716 · **instance_id:** huggingface__transformers-3716 · **issue_numbers:** ['3711'] · **base_commit:** f8208fa456039b46873a2e497b6318d30a4fc84e · **created_at:** 2020-04-09 10:16:32+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/modeling_transfo_xl.py b/src/transformers/modeling_transfo_xl.py
--- a/src/transformers/modeling_transfo_xl.py
+++ b/src/transformers/modeling_transfo_xl.py
@@ -859,7 +859,7 @@ def forward(self, input_ids=None, mems=None, head_mask=None, inputs_embeds=None,
  Return:
    :obj:`tu...

**test_patch:**

diff --git a/tests/test_modeling_transfo_xl.py b/tests/test_modeling_transfo_xl.py
--- a/tests/test_modeling_transfo_xl.py
+++ b/tests/test_modeling_transfo_xl.py
@@ -164,7 +164,7 @@ def create_transfo_xl_lm_head(self, config, input_ids_1, input_ids_2, lm_labels)
  return outputs
def check_transfo...

**problem_statement:**

TransfoXLLMHead doesn't shift labels internally when called for loss
# 🐛 Bug
When called with labels to get the language-modeling loss, `TransfoXLLMHead.forward` computes the NLLLoss of the outputs directly against the labels, rather than against the shifted labels like the documentation indicates (and like the oth...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_head_pruning_save_load_from_pretrained', 'tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_initialization', 'tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_head_pruning_save_load_from_config_init', 'tests/test_modeling_transfo_xl.py:Transfo...

**F2P:** ['tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_transfo_xl_lm_head']

**F2F:** null

**test_command:** `python -m pytest -v /testbed/tests/test_modeling_transfo_xl.py`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 2 · num_class_changes: 0 · num_nodes: 2 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/modeling_transfo_xl_utilities.py->module->class_definition:ProjectedAdaptiveLogSoftmax->function_definition:forward", "src/transformers/modeling_transfo_xl.py->module->class_definition:TransfoXLLMHeadModel->function_definition:forward"]

---

**repo:** huggingface/transformers · **pull_number:** 4,759 · **instance_id:** huggingface__transformers-4759 · **issue_numbers:** ['3554'] · **base_commit:** 5bf9afbf351f9419505eb1c9e0c5ab78883c3caf · **created_at:** 2020-06-04 10:49:49+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/modeling_transfo_xl.py b/src/transformers/modeling_transfo_xl.py
--- a/src/transformers/modeling_transfo_xl.py
+++ b/src/transformers/modeling_transfo_xl.py
@@ -20,6 +20,7 @@
import logging
+from typing import Optional
import torch
import torch.nn as nn
@@ -507,6 +508,85 @@ def _i...

**test_patch:**

diff --git a/tests/test_modeling_transfo_xl.py b/tests/test_modeling_transfo_xl.py
--- a/tests/test_modeling_transfo_xl.py
+++ b/tests/test_modeling_transfo_xl.py
@@ -12,8 +12,7 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissi...

**problem_statement:**

resize_token_embeddings error for Transformer-XL
# 🐛 Bug
## Information
Model I am using : Transformer-XL
Language I am using the model on : English
The problem arises when using:
* [ ] my own modified scripts: a fine-tuning script for TransfoXLLMHeadModel
## To reproduce
The following code aims to ...

**hints_text:**

Hi @vsieplus ,
This is a known bug and sadly we don't have a solution for this now. TransfoXLLMHead uses adaptive weight embeddings which makes it not very easy to implement this function. Should be implemented in the long run though - I will note it down. @thomwolf @LysandreJik

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_head_pruning_save_load_from_pretrained', 'tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_initialization', 'tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_head_pruning_save_load_from_config_init', 'tests/test_modeling_transfo_xl.py:Transfo...

**F2P:** ['tests/test_modeling_transfo_xl.py:TransfoXLModelTest:test_resize_tokens_embeddings']

**F2F:** null

**test_command:** `python -m pytest -v /testbed/tests/test_modeling_transfo_xl.py`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 6 · num_class_changes: 2 · num_nodes: 8 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/modeling_transfo_xl.py->module->class_definition:TransfoXLLMHeadModel->function_definition:_resize_cutoffs", "src/transformers/modeling_transfo_xl.py->module->class_definition:TransfoXLPreTrainedModel", "src/transformers/modeling_transfo_xl.py->module->class_definition:TransfoXLPreTrainedModel->funct...

---

**repo:** huggingface/transformers · **pull_number:** 5,060 · **instance_id:** huggingface__transformers-5060 · **issue_numbers:** ['5049'] · **base_commit:** d5477baf7d87b9bdad386f2f317732b85277b06b · **created_at:** 2020-06-16 13:28:18+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/data/data_collator.py b/src/transformers/data/data_collator.py
--- a/src/transformers/data/data_collator.py
+++ b/src/transformers/data/data_collator.py
@@ -33,31 +33,34 @@ def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Ten
  # have the same attributes.
  ...

**test_patch:**

diff --git a/tests/test_trainer.py b/tests/test_trainer.py
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -24,6 +24,27 @@
@require_torch
class DataCollatorIntegrationTest(unittest.TestCase):
+    def test_default_with_dict(self):
+        features = [{"labels": i, "inputs": [0, 1, 2, 3, 4, 5]} for i in ...

**problem_statement:**

DataCollator problem
# ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can...

**hints_text:**

i have the same. It is new bug. i run this week ago and worked
try this:
```python
class T2TDataCollator:
    def __call__(self, batch):
```
@abrozso Hi and thanks for the hint, however, it doesn't seem to fix the problem.
I got the following error when the fine-tuning starts:
06/16/2020 09:03:23 - INFO - tra...

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_trainer.py:TrainerIntegrationTest:test_trainer_eval_mrpc', 'tests/test_trainer.py:TrainerIntegrationTest:test_trainer_eval_lm', 'tests/test_trainer.py:DataCollatorIntegrationTest:test_lm_tokenizer_with_padding', 'tests/test_trainer.py:DataCollatorIntegrationTest:test_default_classification', 'tests/test_tr...

**F2P:** ['tests/test_trainer.py:DataCollatorIntegrationTest:test_default_with_dict']

**F2F:** null

**test_command:** `python -m pytest -v /testbed/tests/test_trainer.py`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 1 · num_class_changes: 1 · num_nodes: 2 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/data/data_collator.py->module->function_definition:default_data_collator", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:__init__"]

---

**repo:** huggingface/transformers · **pull_number:** 5,122 · **instance_id:** huggingface__transformers-5122 · **issue_numbers:** ['5114', '5114'] · **base_commit:** ca2d0f98c4a89d50b79ddb06b59b6bffc31ff137 · **created_at:** 2020-06-18 20:18:36+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/data/data_collator.py b/src/transformers/data/data_collator.py
--- a/src/transformers/data/data_collator.py
+++ b/src/transformers/data/data_collator.py
@@ -42,10 +42,10 @@ def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Ten
  # Special handling for labels.
  ...

**test_patch:**

diff --git a/tests/test_trainer.py b/tests/test_trainer.py
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -25,7 +25,7 @@
@require_torch
class DataCollatorIntegrationTest(unittest.TestCase):
    def test_default_with_dict(self):
-        features = [{"labels": i, "inputs": [0, 1, 2, 3, 4, 5]} for i in ran...

**problem_statement:**

data_collator.py does not allow NoneType labels for test set predictions on Glue
# 🐛 Bug
## Information
Model I am using (Bert, XLNet ...): Distilbert
Language I am using the model on (English, Chinese ...): English
The problem arises when using:
* [x] the official example scripts: (give details below)

**hints_text:** (empty in this dump)

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_trainer.py:TrainerIntegrationTest:test_trainer_eval_mrpc', 'tests/test_trainer.py:TrainerIntegrationTest:test_trainer_eval_lm', 'tests/test_trainer.py:DataCollatorIntegrationTest:test_lm_tokenizer_with_padding', 'tests/test_trainer.py:DataCollatorIntegrationTest:test_default_classification', 'tests/test_tr...

**F2P:** ['tests/test_trainer.py:DataCollatorIntegrationTest:test_default_with_no_labels']

**F2F:** null

**test_command:** `pytest -v /testbed/tests/test_trainer.py`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/transformers/data/data_collator.py->module->function_definition:default_data_collator"]

---

**repo:** huggingface/transformers · **pull_number:** 5,749 · **instance_id:** huggingface__transformers-5749 · **issue_numbers:** ['7665'] · **base_commit:** 5668fdb09e1bcd888930c1ff242bf200649da39c · **created_at:** 2020-07-14 14:22:48+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/tokenization_bert.py b/src/transformers/tokenization_bert.py
--- a/src/transformers/tokenization_bert.py
+++ b/src/transformers/tokenization_bert.py
@@ -398,6 +398,7 @@ def tokenize(self, text, never_split=None):
  """
  # union() returns a new set by concatenating the two se...

**test_patch:**

diff --git a/tests/test_tokenization_bert.py b/tests/test_tokenization_bert.py
--- a/tests/test_tokenization_bert.py
+++ b/tests/test_tokenization_bert.py
@@ -222,6 +222,17 @@ def test_is_punctuation(self):
  self.assertFalse(_is_punctuation("A"))
  self.assertFalse(_is_punctuation(" "))
+  def test_c...

**problem_statement:**

tokenizer_bert.py not call _clean_text?
for transformers/src/transformers/tokenization_bert.py, there is a function called _clean_text.
But seems this function is not be called at all?
In google bert(https://github.com/google-research/bert/blob/master/tokenization.py) there exists a same function and that function ...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_tokenization_bert.py:BertTokenizationTest:test_rust_and_python_full_tokenizers', 'tests/test_tokenization_bert.py:BertTokenizationTest:test_batch_encode_plus_batch_sequence_length', 'tests/test_tokenization_bert.py:BertTokenizationTest:test_is_punctuation', 'tests/test_tokenization_bert.py:BertTokenization...

**F2P:** ['tests/test_tokenization_bert.py:BertTokenizationTest:test_clean_text']

**F2F:** null

**test_command:** `pytest -v /testbed/tests/test_tokenization_bert.py`

**node stats:** is_no_nodes: false · is_func_only: true · is_class_only: false · is_mixed: false · num_func_changes: 1 · num_class_changes: 0 · num_nodes: 1 · is_single_func: true · is_single_class: false · **modified_nodes:** ["src/transformers/tokenization_bert.py->module->class_definition:BasicTokenizer->function_definition:tokenize"]

---

**repo:** huggingface/transformers · **pull_number:** 6,098 · **instance_id:** huggingface__transformers-6098 · **issue_numbers:** ['6096'] · **base_commit:** dafa296c952c08fca3686f1cf8f3a8f8eb116744 · **created_at:** 2020-07-28 16:35:53+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/tokenization_bart.py b/src/transformers/tokenization_bart.py
--- a/src/transformers/tokenization_bart.py
+++ b/src/transformers/tokenization_bart.py
@@ -122,6 +122,7 @@ def __init__(self, *args, **kwargs):
  }
  self.id_to_lang_code = {v: k for k, v in self.lang_code_to_id.it...

**test_patch:**

diff --git a/tests/test_modeling_mbart.py b/tests/test_modeling_mbart.py
--- a/tests/test_modeling_mbart.py
+++ b/tests/test_modeling_mbart.py
@@ -123,6 +123,7 @@ def test_mbart_fast_forward(self):
  self.assertEqual(logits.shape, expected_shape)
+@require_torch
class MBartCC25IntegrationTest(AbstractMBartI...

**problem_statement:**

mBART: incorrect <mask> token id
# 🐛 Bug
## Information
Model I am using: mBART
## To reproduce
```
from transformers import MBartTokenizer
tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-cc25')
print(tokenizer.convert_tokens_to_ids(['<mask>', 'ar_AR']))
```
The output for the above c...

**hints_text:** null

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_tokenization_mbart.py:MBartTokenizationTest:test_padding_to_multiple_of', 'tests/test_tokenization_mbart.py:MBartEnroIntegrationTest:test_enro_tokenizer_batch_encode_plus', 'tests/test_tokenization_mbart.py:MBartTokenizationTest:test_maximum_encoding_length_single_input', 'tests/test_tokenization_mbart.py:...

**F2P:** ['tests/test_tokenization_mbart.py:MBartEnroIntegrationTest:test_mask_token']

**F2F:** null

**test_command:** `pytest -v /testbed/tests/test_modeling_mbart.py /testbed/tests/test_tokenization_mbart.py`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: true · is_mixed: false · num_func_changes: 0 · num_class_changes: 1 · num_nodes: 1 · is_single_func: false · is_single_class: true · **modified_nodes:** ["src/transformers/tokenization_bart.py->module->class_definition:MBartTokenizer->function_definition:__init__"]

---

**repo:** huggingface/transformers · **pull_number:** 6,322 · **instance_id:** huggingface__transformers-6322 · **issue_numbers:** ['5136'] · **base_commit:** 930153e7d2d658267b7630a047a4bfc85b86042d · **created_at:** 2020-08-07 09:26:47+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/src/transformers/tokenization_transfo_xl.py b/src/transformers/tokenization_transfo_xl.py
--- a/src/transformers/tokenization_transfo_xl.py
+++ b/src/transformers/tokenization_transfo_xl.py
@@ -22,11 +22,13 @@
import os
import pickle
import re
+import warnings
from collections import Counter, OrderedDi...

**test_patch:**

diff --git a/tests/test_tokenization_fast.py b/tests/test_tokenization_fast.py
--- a/tests/test_tokenization_fast.py
+++ b/tests/test_tokenization_fast.py
@@ -12,14 +12,12 @@
  OpenAIGPTTokenizer,
  PreTrainedTokenizer,
  RobertaTokenizer,
-  TransfoXLTokenizer,
  is_torch_available,
)
from transformers...

**problem_statement:**

Transformer-XL tokenizer cannot properly tokenize brackets
# 🐛 Bug
## Information
The `TransfoXLTokenizer` is not able to tokenize words with surrounding brackets correctly. I compared it with the `BertTokenizer` from `bert-base-uncased` which gives the expected result. Example text is: `"Hello (bracket)"`
Mode...

**hints_text:**

**UPDATE**
I've done some further research and discovered that the tokenization of strings containing either
1. any opening bracket, e.g. `( [ {`
2. words with dashes, e.g. `10-year-old`
3. other symbols with no space afterwards, e.g. (`km/h` or `$3`)
4. numbers, either floating point, e.g. `3.23`, or large comm...

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Copy the repository contents
C...

**P2P:** ['tests/test_tokenization_transfo_xl.py:TransfoXLTokenizationTest:test_get_vocab', 'tests/test_tokenization_transfo_xl.py:TransfoXLTokenizationTest:test_special_tokens_mask', 'tests/test_tokenization_fast.py:RobertaFastTokenizerTest:test_all_tokenizers', 'tests/test_tokenization_transfo_xl.py:TransfoXLTokenizationTest:...

**F2P:** ['tests/test_tokenization_transfo_xl.py:TransfoXLTokenizationTest:test_full_tokenizer_moses_numbers']

**F2F:** null

**test_command:** `pytest -v /testbed/tests/test_tokenization_fast.py /testbed/tests/test_tokenization_transfo_xl.py`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 8 · num_class_changes: 4 · num_nodes: 12 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/tokenization_transfo_xl.py->module->class_definition:TransfoXLTokenizerFast", "src/transformers/tokenization_transfo_xl.py->module->class_definition:TransfoXLTokenizer->function_definition:convert_tokens_to_string", "src/transformers/tokenization_transfo_xl.py->module->class_definition:TransfoXLToken...

---

**repo:** huggingface/transformers · **pull_number:** 6,735 · **instance_id:** huggingface__transformers-6735 · **issue_numbers:** ['6319'] · **base_commit:** a32d85f0d405be53117b96075eef2875d2185892 · **created_at:** 2020-08-25 22:34:28+00:00 · **language:** Python · **task_category:** Bug Fix

**patch:**

diff --git a/docs/source/model_doc/encoderdecoder.rst b/docs/source/model_doc/encoderdecoder.rst
--- a/docs/source/model_doc/encoderdecoder.rst
+++ b/docs/source/model_doc/encoderdecoder.rst
@@ -1,12 +1,13 @@
Encoder Decoder Models
------------------------
-This class can wrap an encoder model, such as ``BertModel`...

**test_patch:**

diff --git a/tests/test_modeling_encoder_decoder.py b/tests/test_modeling_encoder_decoder.py
--- a/tests/test_modeling_encoder_decoder.py
+++ b/tests/test_modeling_encoder_decoder.py
@@ -33,6 +33,7 @@
from transformers import (
  BertLMHeadModel,
  BertModel,
+  BertTokenizer,
  EncoderD...

**problem_statement:**

num_beams error in GPT2DoubleHead model
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 2.9.1
- Platform: Linux
- Python version: 3.6
- PyTorch version...

**hints_text:**

encountered the same issue
I think @patrickvonplaten might have some ideas.

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Install Python depen...

**P2P:** ['tests/test_modeling_openai.py:OpenAIGPTModelTest:test_lm_head_model_random_beam_search_generate', 'tests/test_modeling_encoder_decoder.py:BertEncoderDecoderModelTest:test_save_and_load_from_encoder_decoder_pretrained', 'tests/test_modeling_openai.py:OpenAIGPTModelTest:test_torchscript', 'tests/test_modeling_openai.py...

**F2P:** ['tests/test_modeling_gpt2.py:GPT2ModelTest:test_gpt2_double_lm_head_model', 'tests/test_modeling_gpt2.py:GPT2ModelTest:test_lm_head_model_random_beam_search_generate', 'tests/test_modeling_t5.py:T5ModelTest:test_model', 'tests/test_modeling_openai.py:OpenAIGPTModelTest:test_openai_gpt_double_lm_head_model']

**F2F:** null

**test_command:** `pytest -v --json-report --json-report-file=test_results.json /testbed/tests/test_modeling_encoder_decoder.py /testbed/tests/test_modeling_gpt2.py /testbed/tests/test_modeling_openai.py /testbed/tests/test_modeling_t5.py /testbed/tests/test_modeling_tf_gpt2.py /testbed/tests/test_modeling_tf_openai.py /testbed/tests/tes...`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 28 · num_class_changes: 16 · num_nodes: 44 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/modeling_openai.py->module->class_definition:OpenAIGPTDoubleHeadsModelOutput", "src/transformers/modeling_transfo_xl.py->module->class_definition:TransfoXLLMHeadModelOutput->function_definition:logits", "src/transformers/modeling_outputs.py->module->class_definition:Seq2SeqLMOutput", "src/transformer...

---

**repo:** huggingface/transformers · **pull_number:** 6,744 · **instance_id:** huggingface__transformers-6744 · **issue_numbers:** ['4411'] · **base_commit:** 42fddacd1cac3cc57c3326aa51a409f5090b1261 · **created_at:** 2020-08-26 12:14:44+00:00 · **language:** Python · **task_category:** Feature

**patch:**

diff --git a/docs/source/main_classes/pipelines.rst b/docs/source/main_classes/pipelines.rst
--- a/docs/source/main_classes/pipelines.rst
+++ b/docs/source/main_classes/pipelines.rst
@@ -21,6 +21,7 @@ There are two categories of pipeline abstractions to be aware about:
- :class:`~transformers.TokenClassificationPi...

**test_patch:**

diff --git a/tests/test_pipelines.py b/tests/test_pipelines.py
--- a/tests/test_pipelines.py
+++ b/tests/test_pipelines.py
@@ -28,6 +28,9 @@
]
TF_TRANSLATION_FINETUNED_MODELS = [("patrickvonplaten/t5-tiny-random", "translation_en_to_fr")]
+TEXT2TEXT_FINETUNED_MODELS = ["patrickvonplaten/t5-tiny-random"]
+TF_TEXT2TE...

**problem_statement:**

Pipeline for Conditional Generation (T5 type models)
As text-to-text models (like T5) increase the accessibility of multi-task learning, it also makes sense to have a flexible "Conditional Generation" pipeline.
For example, I should be able to use this pipeline for a multitude of tasks depending on how I format the ...

**hints_text:**

Yes having a "Conditional Generation" pipeline makes sense given that variety of tasks can be solved using it. We can use T5, BART for these tasks as well as the new Encoder-Decoder. I would like to call it `TextToTextPipeline` though, since we can solve non-generative tasks also as demonstrated in the T5 paper. I thin...

**Dockerfile:**

FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
# Install Python depen...

**P2P:** ['tests/test_pipelines.py:MonoColumnInputTestCase:test_torch_summarization', 'tests/test_pipelines.py:ZeroShotClassificationPipelineTests:test_torch_zero_shot_classification', 'tests/test_pipelines.py:MonoColumnInputTestCase:test_torch_fill_mask_with_targets', 'tests/test_pipelines.py:MonoColumnInputTestCase:test_torch...

**F2P:** ['tests/test_pipelines.py:MonoColumnInputTestCase:test_torch_text2text']

**F2F:** null

**test_command:** `pytest -v /testbed/tests/test_pipelines.py --junitxml=test-results.xml`

**node stats:** is_no_nodes: false · is_func_only: false · is_class_only: false · is_mixed: true · num_func_changes: 1 · num_class_changes: 2 · num_nodes: 3 · is_single_func: false · is_single_class: false · **modified_nodes:** ["src/transformers/pipelines.py->module->class_definition:Text2TextGenerationPipeline", "src/transformers/pipelines.py->module->class_definition:Text2TextGenerationPipeline->function_definition:__call__", "src/transformers/pipelines.py->module->class_definition:Text2TextGenerationPipeline->function_definition:__init__"...
huggingface/transformers | 7,075 | huggingface__transformers-7075 | ['7072'] | 28cf873036d078b47fb9dd38ac3421a7c874da44 | diff --git a/examples/benchmarking/run_benchmark.py b/examples/benchmarking/run_benchmark.py
--- a/examples/benchmarking/run_benchmark.py
+++ b/examples/benchmarking/run_benchmark.py
@@ -20,7 +20,25 @@
def main():
parser = HfArgumentParser(PyTorchBenchmarkArguments)
- benchmark_args = parser.parse_args_into_... | diff --git a/tests/test_benchmark.py b/tests/test_benchmark.py
--- a/tests/test_benchmark.py
+++ b/tests/test_benchmark.py
@@ -24,10 +24,10 @@ def test_inference_no_configs(self):
benchmark_args = PyTorchBenchmarkArguments(
models=[MODEL_ID],
training=False,
- no_inference=... | Clean up `benchmark_args_utils.py` "no_..." arguments
# 🚀 Feature request
Currently we have a mixture of negatively and positively formulated arguments, *e.g.* `no_cuda` and `training` here: https://github.com/huggingface/transformers/blob/0054a48cdd64e7309184a64b399ab2c58d75d4e5/src/transformers/benchmark/benchmark_ar... | null | 2020-09-11 16:15:48+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | [] | ['tests/test_benchmark.py:BenchmarkTest:test_inference_encoder_decoder_with_configs', 'tests/test_benchmark.py:BenchmarkTest:test_save_csv_files', 'tests/test_benchmark.py:BenchmarkTest:test_inference_no_configs', 'tests/test_benchmark.py:BenchmarkTest:test_train_with_configs', 'tests/test_benchmark.py:BenchmarkTest:te... | null | pytest -v /testbed/tests/test_benchmark.py /testbed/tests/test_benchmark_tf.py | Refactoring | false | false | false | true | 40 | 14 | 54 | false | false | ["src/transformers/benchmark/benchmark_args.py->module->class_definition:PyTorchBenchmarkArguments", "src/transformers/benchmark/benchmark_utils.py->module->class_definition:Benchmark->function_definition:print_results", "src/transformers/benchmark/benchmark_utils.py->module->function_definition:measure_peak_memory_cpu... |
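The test diff above flips the negative `no_inference=...` flag to a positive one; a hedged sketch of the cleaned-up call, assuming a tiny public checkpoint and the post-refactor argument names:

```python
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["sshleifer/tiny-gpt2"],  # tiny checkpoint, purely illustrative
    training=False,
    inference=True,                  # positively formulated, replacing `no_inference`
    sequence_lengths=[8],
    batch_sizes=[1],
    multi_process=False,
)
results = PyTorchBenchmark(args).run()
```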
huggingface/transformers | 7,078 | huggingface__transformers-7078 | ['7077'] | 4cbd50e611e5bace6ba81d7bb7e730852bb09142 | diff --git a/src/transformers/tokenization_t5.py b/src/transformers/tokenization_t5.py
--- a/src/transformers/tokenization_t5.py
+++ b/src/transformers/tokenization_t5.py
@@ -96,8 +96,6 @@ class T5Tokenizer(PreTrainedTokenizer):
max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES
model_input_names ... | diff --git a/tests/test_tokenization_t5.py b/tests/test_tokenization_t5.py
--- a/tests/test_tokenization_t5.py
+++ b/tests/test_tokenization_t5.py
@@ -139,9 +139,6 @@ def test_prepare_seq2seq_batch(self):
self.assertEqual((2, 9), batch.input_ids.shape)
self.assertEqual((2, 9), batch.attention_mask.sha... | T5Tokenizer shouldn't add pad token as prefix to labels
## Information
The `prepare_seq2seq_batch` method in `T5Tokenizer` now prefixes the `pad` token to the `labels` ([here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_t5.py#L362)).
But in finetune.py [here](https://github.com/huggi... | null | 2020-09-11 18:00:15+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_number_of_added_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_call', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_batch_encode_plus_overflowing_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_pretrained_model_lists', 'te... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_eos_in_input'] | null | pytest -v /testbed/tests/test_tokenization_t5.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/tokenization_t5.py->module->class_definition:T5Tokenizer->function_definition:prepare_seq2seq_batch", "src/transformers/tokenization_t5.py->module->class_definition:T5Tokenizer->function_definition:build_inputs_with_special_tokens", "src/transformers/tokenization_t5.py->module->class_definition:T5Tok... |
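A sketch of the fixed behaviour, assuming the `prepare_seq2seq_batch` API of that era (the issue above confirms it returns a `labels` key):

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
batch = tokenizer.prepare_seq2seq_batch(
    src_texts=["translate English to German: hello"],
    tgt_texts=["hallo"],
    return_tensors="pt",
)
# the model shifts labels right itself to build decoder_input_ids, so the
# labels must start with the first target token rather than <pad> (id 0)
assert batch["labels"][0, 0].item() != tokenizer.pad_token_id
```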
huggingface/transformers | 7,272 | huggingface__transformers-7272 | ['6256'] | 2c8ecdf8a87019c438262d8c692e1bdffe05149f | diff --git a/src/transformers/configuration_longformer.py b/src/transformers/configuration_longformer.py
--- a/src/transformers/configuration_longformer.py
+++ b/src/transformers/configuration_longformer.py
@@ -67,6 +67,5 @@ class LongformerConfig(RobertaConfig):
model_type = "longformer"
def __init__(self,... | diff --git a/tests/test_modeling_auto.py b/tests/test_modeling_auto.py
--- a/tests/test_modeling_auto.py
+++ b/tests/test_modeling_auto.py
@@ -183,14 +183,14 @@ def test_token_classification_model_from_pretrained(self):
def test_from_pretrained_identifier(self):
model = AutoModelWithLMHead.from_pretrained... | LongformerForSequenceClassification has unused layers, making it unable to fine-tune with Distributed Data Parallel (required for gradient checkpointing)
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in tha... | Hey @Weilin37, sorry to answer so late - this looks like a difficult bug. Let's start with this:
Can you check if your code works on this branch: `try_if_works_for_longformer_mult_gpu`. The changes I made to the branch can be seen here: https://github.com/huggingface/transformers/pull/6607. Since the pooler is not ne... | 2020-09-20 18:33:14+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_modeling_auto.py:AutoModelTest:test_parents_and_children_in_mappings'] | ['tests/test_modeling_auto.py:AutoModelTest:test_from_pretrained_identifier', 'tests/test_modeling_auto.py:AutoModelTest:test_from_identifier_from_model_type'] | null | pytest -v /testbed/tests/test_modeling_auto.py | Bug Fix | false | false | false | true | 8 | 70 | 78 | false | false | ["src/transformers/modeling_mobilebert.py->module->class_definition:MobileBertModel->function_definition:__init__", "src/transformers/modeling_mobilebert.py->module->class_definition:MobileBertModel", "src/transformers/modeling_tf_bert.py->module->class_definition:TFBertForMaskedLM", "src/transformers/modeling_bert.py-... |
huggingface/transformers | 7,374 | huggingface__transformers-7374 | ['7371', '7371'] | eadd870b2f503047dd81b8dcd9d115dc1b4a9196 | diff --git a/src/transformers/modeling_funnel.py b/src/transformers/modeling_funnel.py
--- a/src/transformers/modeling_funnel.py
+++ b/src/transformers/modeling_funnel.py
@@ -367,7 +367,6 @@ def pool_tensor(self, tensor, mode="mean", stride=2):
# Stride is applied on the second-to-last dimension.
stri... | diff --git a/tests/test_modeling_funnel.py b/tests/test_modeling_funnel.py
--- a/tests/test_modeling_funnel.py
+++ b/tests/test_modeling_funnel.py
@@ -428,16 +428,16 @@ def test_inference_tiny_model(self):
model = FunnelModel.from_pretrained("sgugger/funnel-random-tiny")
output = model(input_ids, toke... | FunnelTransformerForSequenceClassification crashes when fine tuning with mixed precision flag
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.2.0
- Pla... | 2020-09-24 19:37:35+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_modeling_funnel.py:FunnelModelTest:test_determinism', 'tests/test_modeling_funnel.py:FunnelBaseModelTest:test_hidden_states_output', 'tests/test_modeling_funnel.py:FunnelBaseModelTest:test_lm_head_model_random_no_beam_search_generate', 'tests/test_modeling_funnel.py:FunnelModelTest:test_torchscript', 'test... | ['tests/test_modeling_funnel.py:FunnelModelIntegrationTest:test_inference_tiny_model'] | null | pytest -v -s --disable-warnings /testbed/tests/test_modeling_funnel.py | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/modeling_tf_funnel.py->module->class_definition:TFFunnelRelMultiheadAttention->function_definition:call", "src/transformers/modeling_funnel.py->module->class_definition:FunnelRelMultiheadAttention->function_definition:forward", "src/transformers/modeling_funnel.py->module->class_definition:FunnelAtte... | |
huggingface/transformers | 7,562 | huggingface__transformers-7562 | ['7514'] | 52f44dd6d23f5c1b3d550685c50281fa6ca12ff3 | diff --git a/docs/source/model_doc/longformer.rst b/docs/source/model_doc/longformer.rst
--- a/docs/source/model_doc/longformer.rst
+++ b/docs/source/model_doc/longformer.rst
@@ -90,6 +90,32 @@ LongformerTokenizerFast
.. autoclass:: transformers.LongformerTokenizerFast
:members:
+Longformer specific outputs
+~... | diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -220,12 +220,13 @@ def test_attention_outputs(self):
for model_class in self.all_model_classes:
inputs_dict["output_attentions"] = True
... | [Longformer] Output both local attentions and global attentions when `output_attentions=True` -> Good Second Issue
# 🚀 Feature request
**Good Second Issue** - A more advanced issue for contributors who want to dive more into Longformer's attention mechanism.
Longformer currently only outputs global attentions, w... | I am working on a pull request to address this. I don't see any major challenge so far, but this made me realize how different `attentions` are in BERT-like models and in Longformer. Why not replace `attentions` in Longformer with `local_attentions`?
This means that the interface of Longformers would become ... | 2020-10-04 01:44:37+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_modeling_longformer.py:LongformerModelTest:test_for_multiple_choice', 'tests/test_modeling_longformer.py:LongformerModelIntegrationTest:test_mask_invalid_locations', 'tests/test_modeling_longformer.py:LongformerModelTest:test_head_pruning', 'tests/test_modeling_longformer.py:LongformerModelTest:test_initia... | ['tests/test_modeling_longformer.py:LongformerModelIntegrationTest:test_layer_attn_probs', 'tests/test_modeling_longformer.py:LongformerModelIntegrationTest:test_layer_global_attn', 'tests/test_modeling_longformer.py:LongformerModelIntegrationTest:test_layer_local_attn'] | null | pytest -v -s --disable-warnings /testbed/tests/test_modeling_common.py /testbed/tests/test_modeling_longformer.py /testbed/tests/test_modeling_tf_common.py /testbed/tests/test_modeling_tf_longformer.py | Feature | false | false | false | true | 16 | 11 | 27 | false | false | ["src/transformers/modeling_tf_longformer.py->module->class_definition:TFLongformerSelfAttention", "src/transformers/modeling_tf_longformer.py->module->class_definition:TFLongformerSelfAttention->function_definition:_compute_global_attn_output_from_hidden", "src/transformers/modeling_longformer.py->module->class_defini... |
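A sketch of the output this feature request asks for, assuming the public Longformer checkpoint and the `global_attentions` field added by the patch above:

```python
import torch
from transformers import LongformerModel, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")

inputs = tokenizer("Hello world", return_tensors="pt")
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1  # give the <s> token global attention

outputs = model(**inputs, global_attention_mask=global_attention_mask,
                output_attentions=True)
print(outputs.attentions[0].shape)         # per-layer local attention probabilities
print(outputs.global_attentions[0].shape)  # per-layer global attention probabilities
```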
huggingface/transformers | 7,858 | huggingface__transformers-7858 | ['5990'] | dc552b9b7025ea9c38717f30ad3d69c2a972049d | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -16,7 +16,9 @@
The Trainer class, to easily train a 🤗 Transformers from scratch or finetune it on a new task.
"""
+import collections
import inspect
+import math
import os... | diff --git a/tests/test_trainer.py b/tests/test_trainer.py
old mode 100755
new mode 100644
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -31,11 +31,14 @@
from torch.utils.data import IterableDataset
from transformers import (
+ AutoModelForMaskedLM,
AutoModelForSequenceClassific... | Trainer: exception raised when calling len() on IterableDataset
# 🐛 Bug
## Information
While pre-training a Longformer model from scratch, the text is delivered through an `IterableDataset` object. The code which is called by `Trainer.train()` still calls `len()` on this object, which raises an exception.
#5829 a... | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
| 2020-10-16 20:25:19+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_trainer.py:TrainerIntegrationTest:test_load_best_model_at_end', 'tests/test_trainer.py:TrainerIntegrationTest:test_trainer_with_datasets', 'tests/test_trainer.py:TrainerIntegrationTest:test_num_train_epochs_in_training', 'tests/test_trainer.py:TrainerIntegrationTest:test_number_of_steps_in_training', 'test... | ['tests/test_trainer.py:TrainerIntegrationTest:test_trainer_iterable_dataset'] | null | pytest -v /testbed/tests/test_trainer.py | Bug Fix | false | false | false | true | 10 | 1 | 11 | false | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:__init__", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:num_examples", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:train", "src/transformers/trainer.py->mod... |
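For reference, a minimal sketch of the dataset shape that triggers this: a streaming dataset that deliberately defines no `__len__`, so the `Trainer` has to be driven by `max_steps` instead of epochs:

```python
from torch.utils.data import IterableDataset

class StreamingTextDataset(IterableDataset):
    """No __len__ on purpose: Trainer must not call len() on this."""

    def __iter__(self):
        for i in range(1000):
            yield {"input_ids": [i % 100], "labels": [i % 100]}

# Trainer(..., train_dataset=StreamingTextDataset()) then needs
# TrainingArguments(max_steps=...), since the epoch length is unknowable.
```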
huggingface/transformers | 7,991 | huggingface__transformers-7991 | ['7929'] | 0397619ac65f0756a0c6bf4eee959eae2f106bc3 | diff --git a/src/transformers/tokenization_pegasus.py b/src/transformers/tokenization_pegasus.py
--- a/src/transformers/tokenization_pegasus.py
+++ b/src/transformers/tokenization_pegasus.py
@@ -47,8 +47,8 @@ class PegasusTokenizer(ReformerTokenizer):
pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP
ma... | diff --git a/tests/test_tokenization_reformer.py b/tests/test_tokenization_reformer.py
--- a/tests/test_tokenization_reformer.py
+++ b/tests/test_tokenization_reformer.py
@@ -63,6 +63,50 @@ def test_rust_and_python_full_tokenizers(self):
rust_ids = rust_tokenizer.encode(sequence)
self.assertListEqual(... | Reformer model does not work with padded sequences
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.4.0
- Platform: Linux
- Python version: 3.8.5
- Py... | null | 2020-10-22 20:59:50+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_tokenization_reformer.py:ReformerTokenizationTest:test_is_fast', 'tests/test_tokenization_reformer.py:ReformerTokenizationTest:test_maximum_encoding_length_pair_input', 'tests/test_tokenization_reformer.py:ReformerTokenizationTest:test_tokenizers_common_properties', 'tests/test_tokenization_reformer.py:Ref... | ['tests/test_tokenization_reformer.py:ReformerTokenizationTest:test_padding'] | null | pytest -v /testbed/tests/test_tokenization_reformer.py | Bug Fix | false | false | true | false | 0 | 3 | 3 | false | false | ["src/transformers/tokenization_reformer_fast.py->module->class_definition:ReformerTokenizerFast->function_definition:__init__", "src/transformers/tokenization_reformer.py->module->class_definition:ReformerTokenizer->function_definition:__init__", "src/transformers/tokenization_pegasus.py->module->class_definition:Pega... |
huggingface/transformers | 8,049 | huggingface__transformers-8049 | ['8029'] | 8bbe8247f13057b7df1b2c9abbfacb05b30020bf | diff --git a/src/transformers/tokenization_blenderbot.py b/src/transformers/tokenization_blenderbot.py
--- a/src/transformers/tokenization_blenderbot.py
+++ b/src/transformers/tokenization_blenderbot.py
@@ -166,6 +166,9 @@ def bpe(self, token: str) -> str:
tokens = token.split(" ")
words = []
... | diff --git a/tests/test_tokenization_blenderbot.py b/tests/test_tokenization_blenderbot.py
--- a/tests/test_tokenization_blenderbot.py
+++ b/tests/test_tokenization_blenderbot.py
@@ -75,6 +75,15 @@ def test_special_tokens_small_tok(self):
assert src_text != decoded # I wish it did!
assert decoded == ... | BlenderbotSmallTokenizer throws tuple index out of range error for stopword
Using transformers==3.4.0
Script used:
```
from transformers import BlenderbotSmallTokenizer, BlenderbotForConditionalGeneration
mname = 'facebook/blenderbot-90M'
tokenizer = BlenderbotSmallTokenizer.from_pretrained(mname)
sentence ... | null | 2020-10-26 13:21:17+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_tokenization_blenderbot.py:BlenderbotSmallTokenizerTest:test_embeded_special_tokens', 'tests/test_tokenization_blenderbot.py:BlenderbotSmallTokenizerTest:test_rust_tokenizer_signature', 'tests/test_tokenization_blenderbot.py:BlenderbotSmallTokenizerTest:test_full_blenderbot_small_tokenizer', 'tests/test_to... | ['tests/test_tokenization_blenderbot.py:BlenderbotSmallTokenizerTest:test_empty_word_small_tok'] | null | pytest -v /testbed/tests/test_tokenization_blenderbot.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/tokenization_blenderbot.py->module->class_definition:BlenderbotSmallTokenizer->function_definition:bpe"] |
huggingface/transformers | 8,435 | huggingface__transformers-8435 | ['5142'] | 4185b115d4b3fd408265ffd91581698325652c47 | diff --git a/src/transformers/tokenization_t5.py b/src/transformers/tokenization_t5.py
--- a/src/transformers/tokenization_t5.py
+++ b/src/transformers/tokenization_t5.py
@@ -249,8 +249,17 @@ def _convert_id_to_token(self, index):
def convert_tokens_to_string(self, tokens):
""" Converts a sequence of to... | diff --git a/tests/test_tokenization_t5.py b/tests/test_tokenization_t5.py
--- a/tests/test_tokenization_t5.py
+++ b/tests/test_tokenization_t5.py
@@ -222,3 +222,18 @@ def test_eos_in_input(self):
self.assertEqual(expected_src_tokens, src_ids)
self.assertEqual(expected_tgt_tokens, tgt_ids)
+
+ de... | T5 special tokens not mapped to unique indices in vocabulary
The docs recommend adding the special eos_token `</s>` to the end of each string when encoding/decoding with `T5Tokenizer`. However, this token (and the other special tokens, e.g. `unk_token` and `pad_token`) isn't assigned a unique id in the lookup vocabulary (they are... | Hey @sarahwie,
Thanks for your issue. I can reproduce the problem and see the reason for it. Currently, we rely on Google's sentencepiece tokenizer: https://github.com/google/sentencepiece for encoding and decoding in T5. What happens is that the `tokenizer.decode(tokens)` depends on the function
`sp_model.deco... | 2020-11-10 11:10:09+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_build_inputs_with_special_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_number_of_added_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_call', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_batch_encode_plus_overflowing_to... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_fast_and_slow_same_result'] | null | pytest -v /testbed/tests/test_tokenization_t5.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/tokenization_t5.py->module->class_definition:T5Tokenizer->function_definition:convert_tokens_to_string"] |
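A round-trip sketch of the fixed decoding, assuming the public `t5-small` vocabulary:

```python
from transformers import T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
ids = tokenizer("Hello", add_special_tokens=False).input_ids + [tokenizer.eos_token_id]
print(tokenizer.decode(ids))                            # 'Hello</s>' once </s> round-trips
print(tokenizer.decode(ids, skip_special_tokens=True))  # 'Hello'
```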
huggingface/transformers | 8,437 | huggingface__transformers-8437 | ['7840'] | b93569457fd758a60f15d94ac7b3ba3a245096c0 | diff --git a/src/transformers/tokenization_t5.py b/src/transformers/tokenization_t5.py
--- a/src/transformers/tokenization_t5.py
+++ b/src/transformers/tokenization_t5.py
@@ -187,6 +187,28 @@ def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]:
else:
return token_ids + [self.eos_t... | diff --git a/tests/test_tokenization_t5.py b/tests/test_tokenization_t5.py
--- a/tests/test_tokenization_t5.py
+++ b/tests/test_tokenization_t5.py
@@ -223,6 +223,20 @@ def test_eos_in_input(self):
self.assertEqual(expected_src_tokens, src_ids)
self.assertEqual(expected_tgt_tokens, tgt_ids)
+ def ... | Token Type IDs returned from the tokenizer for T5 don't work with special tokens
With `transformers-3.3.1`:
```
import transformers
t = transformers.AutoTokenizer.from_pretrained('t5-small')
t.encode_plus(["a"], ["b"], add_special_tokens=True, return_token_type_ids=True)
```
This results in
```
{'input_ids'... | null | 2020-11-10 11:58:31+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_build_inputs_with_special_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_number_of_added_tokens', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_call', 'tests/test_tokenization_t5.py:T5TokenizationTest:test_batch_encode_plus_overflowing_to... | ['tests/test_tokenization_t5.py:T5TokenizationTest:test_token_type_ids'] | null | pytest -v /testbed/tests/test_tokenization_t5.py | Bug Fix | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/tokenization_t5.py->module->class_definition:T5Tokenizer->function_definition:create_token_type_ids_from_sequences", "src/transformers/tokenization_t5_fast.py->module->class_definition:T5TokenizerFast", "src/transformers/tokenization_t5_fast.py->module->class_definition:T5TokenizerFast->function_defi... |
huggingface/transformers | 8,554 | huggingface__transformers-8554 | ['8553'] | 0603564e9323bd424217581e5297da6cd202817b | diff --git a/src/transformers/models/prophetnet/modeling_prophetnet.py b/src/transformers/models/prophetnet/modeling_prophetnet.py
--- a/src/transformers/models/prophetnet/modeling_prophetnet.py
+++ b/src/transformers/models/prophetnet/modeling_prophetnet.py
@@ -1793,8 +1793,8 @@ def forward(
encoder_a... | diff --git a/tests/test_modeling_prophetnet.py b/tests/test_modeling_prophetnet.py
--- a/tests/test_modeling_prophetnet.py
+++ b/tests/test_modeling_prophetnet.py
@@ -417,7 +417,7 @@ def check_fast_integration(
decoder_attention_mask=decoder_attention_mask,
labels=lm_labels,
... | `disable_ngram_loss` doesn't work correctly in ProphetNetForConditionalGeneration
When using ProphetNet with `disable_ngram_loss=True` I get a loss that is greater than with `disable_ngram_loss=False`. It seems to me that the problem is setting `fill_(self.padding_idx)` in `_compute_loss` instead of -... | null | 2020-11-15 21:55:06+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_modeling_prophetnet.py:ProphetNetStandaloneDecoderModelTest:test_save_load', 'tests/test_modeling_prophetnet.py:ProphetNetStandaloneEncoderModelTest:test_head_pruning', 'tests/test_modeling_prophetnet.py:ProphetNetStandaloneDecoderModelTest:test_training', 'tests/test_modeling_prophetnet.py:ProphetNetModel... | ['tests/test_modeling_prophetnet.py:ProphetNetModelTest:test_fast_integration'] | null | pytest -v /testbed/tests/test_modeling_prophetnet.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/prophetnet/modeling_prophetnet.py->module->class_definition:ProphetNetForConditionalGeneration->function_definition:_compute_loss", "src/transformers/models/prophetnet/modeling_prophetnet.py->module->class_definition:ProphetNetForCausalLM->function_definition:_compute_loss"] |
huggingface/transformers | 8,624 | huggingface__transformers-8624 | ['5605'] | cdfa56afe02c3ed5d2b86498515cfddf82d56f2c | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -676,11 +676,12 @@ def train(self, model_path: Optional[str] = None, trial: Union["optuna.Trial", D
self.state = TrainerState.load_from_json(os.path.join(model_path,... | diff --git a/tests/test_trainer.py b/tests/test_trainer.py
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -465,6 +465,14 @@ def test_save_checkpoints(self):
trainer.train()
self.check_saved_checkpoints(tmpdir, 5, int(self.n_epochs * 64 / self.batch_size), False)
+ def test_gra... | There may be a bug here when we load a staged checkpoint
https://github.com/huggingface/transformers/blob/40d98ebf50c4662bcd6dce6395bbed0b2142ea52/src/transformers/trainer.py#L458
I met this bug when I used the setting below:
global_steps = 2748
len(train_dataloader) = 27484
gradient_accumulation_steps = 4
In the or... | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'm also puzzled by this. The calculations here seem incorrect.
To me these calculations are not incorrect if we take `step` as optimization step... | 2020-11-18 16:42:19+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_trainer.py:TrainerIntegrationTest:test_load_best_model_at_end', 'tests/test_trainer.py:TrainerIntegrationTest:test_trainer_with_datasets', 'tests/test_trainer.py:TrainerIntegrationTest:test_train_and_eval_dataloaders', 'tests/test_trainer.py:TrainerIntegrationTest:test_num_train_epochs_in_training', 'tests... | ['tests/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_gradient_accumulation'] | null | pytest -v /testbed/tests/test_trainer.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:train"] |
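The corrected bookkeeping can be sketched with the numbers from the report above; the key point is that `steps_trained_in_current_epoch` must be counted in dataloader batches, not optimization steps:

```python
global_step = 2748                  # optimization steps restored from the checkpoint
len_train_dataloader = 27484        # batches per epoch
gradient_accumulation_steps = 4
num_update_steps_per_epoch = len_train_dataloader // gradient_accumulation_steps  # 6871

epochs_trained = global_step // num_update_steps_per_epoch                 # 0 full epochs
steps_trained_in_current_epoch = global_step % num_update_steps_per_epoch  # 2748 update steps
# convert update steps back into dataloader batches that must be skipped:
steps_trained_in_current_epoch *= gradient_accumulation_steps              # 10992 batches
```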
huggingface/transformers | 8,747 | huggingface__transformers-8747 | ['8601'] | 7f2c00913a32fcc4d09db89c51bb86d6fe1a59e8 | diff --git a/src/transformers/models/bart/modeling_bart.py b/src/transformers/models/bart/modeling_bart.py
--- a/src/transformers/models/bart/modeling_bart.py
+++ b/src/transformers/models/bart/modeling_bart.py
@@ -358,11 +358,13 @@ def forward(
# B x T x C -> T x B x C
x = x.transpose(0, 1)
- ... | diff --git a/tests/test_modeling_common.py b/tests/test_modeling_common.py
--- a/tests/test_modeling_common.py
+++ b/tests/test_modeling_common.py
@@ -689,6 +689,56 @@ def check_hidden_states_output(inputs_dict, config, model_class):
check_hidden_states_output(inputs_dict, config, model_class)
+ def... | Accessing gradients of Bart hidden states
The forums suggested that this be filed as a bug report:
https://discuss.huggingface.co/t/finding-gradients-in-zero-shot-learning/2033/5
The solution to the problem was solved on SO:
https://stackoverflow.com/questions/64823332/gradients-returning-none-in-huggingface-m... | @joeddav - feel free to ping me again if you're too busy. Leaving it up to you for now :-)
Hey thanks for opening the detailed issue. As I mentioned this is a Bart issue, nothing specific to zero shot, so I've renamed it to get the right eyes on it.
The problem here is that the hidden states are transposed _after_ ... | 2020-11-24 00:01:55+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8.16-slim-buster
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
&& rm -rf /var/lib/apt/lists/*
# Install Python depen... | ['tests/test_modeling_prophetnet.py:ProphetNetStandaloneDecoderModelTest:test_save_load', 'tests/test_modeling_reformer.py:ReformerLocalAttnModelTest:test_reformer_no_chunking', 'tests/test_modeling_prophetnet.py:ProphetNetStandaloneEncoderModelTest:test_head_pruning', 'tests/test_modeling_prophetnet.py:ProphetNetStand... | ['tests/test_modeling_prophetnet.py:ProphetNetModelTest:test_retain_grad_hidden_states_attentions', 'tests/test_modeling_prophetnet.py:ProphetNetStandaloneEncoderModelTest:test_retain_grad_hidden_states_attentions'] | null | pytest -v /testbed/tests/test_modeling_common.py /testbed/tests/test_modeling_longformer.py /testbed/tests/test_modeling_lxmert.py /testbed/tests/test_modeling_prophetnet.py /testbed/tests/test_modeling_reformer.py /testbed/tests/test_modeling_transfo_xl.py /testbed/tests/test_modeling_xlnet.py --capture=no | Bug Fix | false | true | false | false | 13 | 0 | 13 | false | false | ["src/transformers/models/openai/modeling_openai.py->module->class_definition:OpenAIGPTModel->function_definition:forward", "src/transformers/models/prophetnet/modeling_prophetnet.py->module->class_definition:ProphetNetEncoder->function_definition:forward", "src/transformers/models/bart/modeling_bart.py->module->class_... |
huggingface/transformers | 12,981 | huggingface__transformers-12981 | ['12970'] | 75b8990d9068a2c6ef448c190f2595c17fbcb993 | diff --git a/src/transformers/trainer.py b/src/transformers/trainer.py
--- a/src/transformers/trainer.py
+++ b/src/transformers/trainer.py
@@ -1005,6 +1005,7 @@ def train(
kwargs:
Additional keyword arguments used to hide deprecated arguments
"""
+ resume_from_checkpoint = ... | diff --git a/tests/test_trainer.py b/tests/test_trainer.py
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -827,6 +827,20 @@ def test_resume_training_with_randomness(self):
self.assertAlmostEqual(a, a1, delta=1e-8)
self.assertAlmostEqual(b, b1, delta=1e-8)
+ # regression for this issue... | `Trainer.train(resume_from_checkpoint=False)` is causing an exception
Since `resume_from_checkpoint` can be a `str` or a `bool`, it should be possible to pass `False` to it.
But when `resume_from_checkpoint` is `False` it causes an exception here:
https://github.com/huggingface/transformers/blob/3d4b3bc3fd77e0e48e23644... | That seems like the right fix indeed. Please go ahead with a PR, thanks! :-) | 2021-08-02 16:23:41+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_trainer.py:TrainerIntegrationTest:test_number_of_steps_in_training', 'tests/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_gradient_accumulation', 'tests/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_randomness', 'tests/test_trainer.py:TrainerIntegrationTest:test_traini... | ['tests/test_trainer.py:TrainerIntegrationTest:test_training_with_resume_from_checkpoint_flase'] | null | python -m pytest /testbed/tests/test_trainer.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:train"] |
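The patch line above is truncated, so without restating it verbatim, the underlying idea is a normalization applied before any checkpoint handling; a standalone sketch:

```python
def normalize_resume_arg(resume_from_checkpoint):
    # False must behave like "don't resume" (i.e. like None); only a path
    # string (or True, meaning "latest checkpoint") should trigger loading
    if resume_from_checkpoint is False:
        return None
    return resume_from_checkpoint

assert normalize_resume_arg(False) is None
assert normalize_resume_arg("out/checkpoint-500") == "out/checkpoint-500"
```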
huggingface/transformers | 13,436 | huggingface__transformers-13436 | ['13430'] | 2dd975b235118a578d34f7293e193d79a6437102 | diff --git a/src/transformers/models/clip/configuration_clip.py b/src/transformers/models/clip/configuration_clip.py
--- a/src/transformers/models/clip/configuration_clip.py
+++ b/src/transformers/models/clip/configuration_clip.py
@@ -230,6 +230,8 @@ class CLIPConfig(PretrainedConfig):
Dictionary of config... | diff --git a/tests/test_modeling_clip.py b/tests/test_modeling_clip.py
--- a/tests/test_modeling_clip.py
+++ b/tests/test_modeling_clip.py
@@ -20,6 +20,8 @@
import tempfile
import unittest
+import numpy as np
+
import requests
from transformers import CLIPConfig, CLIPTextConfig, CLIPVisionConfig
from transformer... | Difference between `logit_scale` initialisation in Transformers CLIP and the original OpenAI implementation.
I tried another training script based on OpenAI's CLIP implementation and found a difference in `logit_scale` between them. Does it correspond to the temperature parameter? Could it be the reason the loss is rising?
huggingface transformers' ... | null | 2021-09-06 05:51:46+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_modeling_clip.py:CLIPModelTest:test_model_outputs_equivalence', 'tests/test_modeling_clip.py:CLIPVisionModelTest:test_hidden_states_output', 'tests/test_modeling_clip.py:CLIPVisionModelTest:test_torch_fx', 'tests/test_modeling_clip.py:CLIPTextModelTest:test_correct_missing_keys', 'tests/test_modeling_clip.... | ['tests/test_modeling_clip.py:CLIPModelTest:test_initialization'] | null | python -m pytest /testbed/tests/test_modeling_clip.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 3 | 4 | false | false | ["src/transformers/models/clip/modeling_flax_clip.py->module->class_definition:FlaxCLIPModule->function_definition:setup", "src/transformers/models/clip/configuration_clip.py->module->class_definition:CLIPConfig->function_definition:__init__", "src/transformers/models/clip/modeling_clip.py->module->class_definition:CLI... |
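A sketch of the configurable initialisation this instance adds, assuming the `logit_scale_init_value` field visible in the config diff; `exp(logit_scale)` is the temperature multiplier applied to the cosine similarities:

```python
import math
from transformers import CLIPConfig, CLIPModel

config = CLIPConfig(logit_scale_init_value=2.6592)  # ln(1 / 0.07), OpenAI's init
model = CLIPModel(config)

print(float(model.logit_scale))            # ~2.6592
print(math.exp(float(model.logit_scale)))  # ~14.29, scales image-text logits
```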
huggingface/transformers | 13,491 | huggingface__transformers-13491 | ['11096'] | 1c191efc3abc391072ff0094a8108459bc08e3fa | diff --git a/src/transformers/models/gpt_neo/modeling_gpt_neo.py b/src/transformers/models/gpt_neo/modeling_gpt_neo.py
--- a/src/transformers/models/gpt_neo/modeling_gpt_neo.py
+++ b/src/transformers/models/gpt_neo/modeling_gpt_neo.py
@@ -134,114 +134,39 @@ def load_tf_weights_in_gpt_neo(model, config, gpt_neo_checkpoi... | diff --git a/tests/test_modeling_gpt_neo.py b/tests/test_modeling_gpt_neo.py
--- a/tests/test_modeling_gpt_neo.py
+++ b/tests/test_modeling_gpt_neo.py
@@ -36,7 +36,6 @@
GPTNeoForSequenceClassification,
GPTNeoModel,
)
- from transformers.models.gpt_neo.modeling_gpt_neo import GPTNeoAttentionMix... | GPTNeo: RuntimeError: shape mismatch when using past_key_values to go forward more than one token
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.0.dev0
-... | Hi @sboparen
Right now the caching is implemented such that when `past_key_values` are passed, the current token length must be 1.
This is due to the local attention layer which uses dynamic block length. This is a known limitation and I'm working on it at the moment.
This issue has been automatically marked as stale b... | 2021-09-09 07:31:52+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_model_common_attributes', 'tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_feed_forward_chunking', 'tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_retain_grad_hidden_states_attentions', 'tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_resize_embeddings_... | ['tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_greedy_generate_dict_outputs_use_cache', 'tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_beam_search_generate_dict_outputs_use_cache', 'tests/test_modeling_gpt_neo.py:GPTNeoModelTest:test_sample_generate_dict_output', 'tests/test_modeling_gpt_neo.py:GPTNeoModel... | null | python -m pytest /testbed/tests/test_modeling_gpt_neo.py --json-report --json-report-file=test_output.json -v | Bug Fix | false | false | false | true | 14 | 7 | 21 | false | false | ["src/transformers/models/gpt_neo/modeling_gpt_neo.py->module->class_definition:GPTNeoSelfAttention->function_definition:forward", "src/transformers/models/gpt_neo/modeling_gpt_neo.py->module->class_definition:GPTNeoAttentionMixin->function_definition:_split_heads", "src/transformers/models/gpt_neo/modeling_gpt_neo.py-... |
huggingface/transformers | 13,495 | huggingface__transformers-13495 | ['13148'] | de635af3f1ef740aa32f53a91473269c6435e19e | diff --git a/src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py
--- a/src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py
+++ b/src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py
@@ -650,7 +650,7 @@ def _batch_prepare_for_mo... | diff --git a/tests/test_tokenization_layoutlmv2.py b/tests/test_tokenization_layoutlmv2.py
--- a/tests/test_tokenization_layoutlmv2.py
+++ b/tests/test_tokenization_layoutlmv2.py
@@ -15,6 +15,7 @@
import inspect
import os
+import re
import shutil
import tempfile
import unittest
@@ -1777,13 +1778,515 @@ def test_... | Slow tokenizers return overflowing tokens in reversed order
When implementing the slow tokenizer for LayoutLMv2, I spotted some weird behaviour for slow tokenizers when specifying `return_overflowing_tokens = True`. Namely, in that case, overflowing tokens are returned in reversed order, and no padding is performed, un... | @NielsRogge I would like to contribute to this. Can I work on this issue?
Sure! The goal would be to make the slow tokenizers equivalent to the fast tokenizers. So that means:
- [ ] make sure overflowing tokens are returned in the correct order
- [ ] add special tokens to the overflowing tokens
- [ ] add a `o... | 2021-09-09 12:43:38+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_tokenization_layoutlmv2.py:LayoutLMv2TokenizationTest:test_sequence_ids', 'tests/test_tokenization_layoutlmv2.py:LayoutLMv2TokenizationTest:test_added_tokens_do_lower_case', 'tests/test_tokenization_layoutlmv2.py:LayoutLMv2TokenizationTest:test_add_special_tokens', 'tests/test_tokenization_layoutlmv2.py:La... | ['tests/test_tokenization_layoutlmv2.py:LayoutLMv2TokenizationTest:test_maximum_encoding_length_pair_input', 'tests/test_tokenization_layoutlmv2.py:LayoutLMv2TokenizationTest:test_maximum_encoding_length_single_input'] | null | python -m pytest /testbed/tests/test_tokenization_layoutlmv2.py --json-report --json-report-file=test_output.json -v | Bug Fix | false | true | false | false | 4 | 0 | 4 | false | false | ["src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py->module->class_definition:LayoutLMv2Tokenizer->function_definition:_batch_prepare_for_model", "src/transformers/models/layoutlmv2/tokenization_layoutlmv2.py->module->class_definition:LayoutLMv2Tokenizer->function_definition:prepare_for_model", "src/transfo... |
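For comparison, the fast-tokenizer behaviour the slow path is being aligned with can be sketched with any fast tokenizer (here `bert-base-uncased`, purely illustrative):

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
enc = tokenizer(
    "one two three four five six seven eight nine ten",
    max_length=8, truncation=True, stride=2,
    return_overflowing_tokens=True, padding="max_length",
)
# each overflow window comes back in order, with special tokens and padding,
# plus a mapping from each window to its originating sample
print(enc["overflow_to_sample_mapping"])
for ids in enc["input_ids"]:
    print(tokenizer.decode(ids))
```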
huggingface/transformers | 13,573 | huggingface__transformers-13573 | ['13463'] | 41c186d2a4c0b9ae24a388e341710b33b2c2cc4f | diff --git a/docs/source/model_doc/gpt2.rst b/docs/source/model_doc/gpt2.rst
--- a/docs/source/model_doc/gpt2.rst
+++ b/docs/source/model_doc/gpt2.rst
@@ -41,6 +41,8 @@ Tips:
pre-computed values in the context of text generation. For PyTorch, see `past_key_values` argument of the
:meth:`~transformers.GPT2Model.fo... | diff --git a/tests/test_modeling_gpt2.py b/tests/test_modeling_gpt2.py
--- a/tests/test_modeling_gpt2.py
+++ b/tests/test_modeling_gpt2.py
@@ -15,6 +15,7 @@
import datetime
+import math
import unittest
from transformers import GPT2Config, is_torch_available
@@ -96,7 +97,9 @@ def __init__(
def get_large_mo... | Upcasting of attention computation for reliable pretraining of GPT-2 models
# 🚀 Feature request
In a recent [talk](https://youtu.be/AYPOzc50PHw?t=3662) about pretraining language models as part of the [Mistral](https://github.com/stanford-crfm/mistral/) project @siddk mentioned that in order to achieve stable pretr... | Also related are https://github.com/huggingface/huggingface_hub/issues/300 and https://github.com/stanford-crfm/mistral/issues/86
Hey folks, sorry I'm late to the party. Replying here to just to centralize things.
The upcasting + scaled-dot product attn reordering + scaling implemented in Mistral is a pretty straig... | 2021-09-15 04:32:03+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_modeling_gpt2.py:GPT2ModelTest:test_load_with_mismatched_shapes', 'tests/test_modeling_gpt2.py:GPT2ModelTest:test_gpt2_double_lm_head_model', 'tests/test_modeling_gpt2.py:GPT2ModelTest:test_group_beam_search_generate_dict_output', 'tests/test_modeling_gpt2.py:GPT2ModelTest:test_sample_generate', 'tests/tes... | ['tests/test_modeling_gpt2.py:GPT2ModelTest:test_gpt2_weight_initialization'] | null | python -m pytest /testbed/tests/test_modeling_gpt2.py -v --junitxml=test-results.xml | Feature | false | false | false | true | 4 | 7 | 11 | false | false | ["src/transformers/models/gpt2/modeling_gpt2.py->module->class_definition:GPT2Attention->function_definition:_upcast_and_reordered_attn", "src/transformers/models/gpt2/modeling_gpt2.py->module->class_definition:GPT2Model->function_definition:__init__", "src/transformers/models/gpt2/modeling_gpt2.py->module->class_defin... |
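A minimal sketch of the upcast plus reordered scaled dot-product trick discussed above (not the exact GPT-2 code): fold the 1/sqrt(d) scale into the query before the matmul and compute scores and softmax in float32:

```python
import torch

def upcast_scaled_attention(query, key, value, scale):
    q32, k32, v32 = query.float(), key.float(), value.float()
    scores = torch.matmul(q32 * scale, k32.transpose(-1, -2))  # fp32 scores
    probs = scores.softmax(dim=-1)
    # kept in fp32 end to end here for portability; a production kernel would
    # typically cast back to half precision before the value matmul
    return torch.matmul(probs, v32).to(query.dtype)

q = torch.randn(1, 4, 8, 16, dtype=torch.float16)
k = torch.randn(1, 4, 8, 16, dtype=torch.float16)
v = torch.randn(1, 4, 8, 16, dtype=torch.float16)
out = upcast_scaled_attention(q, k, v, scale=16 ** -0.5)
```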
huggingface/transformers | 13,693 | huggingface__transformers-13693 | ['13689'] | 8e908c8c74f556a82534f4cf1e7a1b4f7b55d24c | diff --git a/src/transformers/feature_extraction_sequence_utils.py b/src/transformers/feature_extraction_sequence_utils.py
--- a/src/transformers/feature_extraction_sequence_utils.py
+++ b/src/transformers/feature_extraction_sequence_utils.py
@@ -187,23 +187,6 @@ def pad(
padding_strategy = self._get_padding_s... | diff --git a/tests/test_feature_extraction_speech_to_text.py b/tests/test_feature_extraction_speech_to_text.py
--- a/tests/test_feature_extraction_speech_to_text.py
+++ b/tests/test_feature_extraction_speech_to_text.py
@@ -235,3 +235,16 @@ def test_cepstral_mean_and_variance_normalization_trunc_longest(self):
... | New Wav2Vec2 padding has slightly backward breaking changes
The PR https://github.com/huggingface/transformers/pull/13650 introduced some quite tricky backward-breaking changes that we should try to fix.
The problem is the following: A user might directly use `feature_extractor.pad(...)` instead of `feature_extra... | @anton-l - could you maybe look into it? :-) It's quite a tricky backwards compatible bug and we should have had tests to catch this problem. Would be great if you could try to open a PR to fix it :-) | 2021-09-22 08:05:39+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
libsndfile1 \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY ... | ['tests.test_feature_extraction_speech_to_text.Speech2TextFeatureExtractionTest:test_feat_extract_common_properties', 'tests.test_feature_extraction_speech_to_text.Speech2TextFeatureExtractionTest:test_feat_extract_to_json_file', 'tests.test_feature_extraction_wav2vec2.Wav2Vec2FeatureExtractionTest:test_batch_feature',... | ['tests.test_feature_extraction_wav2vec2.Wav2Vec2FeatureExtractionTest:test_double_precision_pad:'] | null | python -m unittest /testbed/tests/test_feature_extraction_speech_to_text.py /testbed/tests/test_feature_extraction_wav2vec2.py -v | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/feature_extraction_sequence_utils.py->module->class_definition:SequenceFeatureExtractor->function_definition:pad"] |
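A sketch of the direct `.pad()` call path this instance protects, assuming double-precision input arrays (the added `test_double_precision_pad` covers this case):

```python
import numpy as np
from transformers import Wav2Vec2FeatureExtractor

fe = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0)
speech = [np.random.rand(800).astype(np.float64),
          np.random.rand(1000).astype(np.float64)]

# calling pad() directly (not just fe(...)) must keep working, and the
# padded features should come back as float32 even for float64 input
batch = fe.pad({"input_values": speech}, padding=True, return_tensors="np")
print(batch["input_values"].shape, batch["input_values"].dtype)
```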
huggingface/transformers | 13,865 | huggingface__transformers-13865 | ['13847'] | 3a8de58c5192b620228128430ea52e6eda81c40a | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -17,6 +17,7 @@
import re
import sys
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeError
+from copy import copy
from enum... | diff --git a/tests/test_hf_argparser.py b/tests/test_hf_argparser.py
--- a/tests/test_hf_argparser.py
+++ b/tests/test_hf_argparser.py
@@ -126,8 +126,10 @@ def test_with_default_bool(self):
expected = argparse.ArgumentParser()
expected.add_argument("--foo", type=string_to_bool, default=False, const=... | Default arguments of clm example are confusing
I was having a look at the `run_clm.py` script to see which new arguments are available for pushing to the Hub.
```sh
python transformers\examples\pytorch\language-modeling\run_clm.py -h
```
I see the following options (note the True defaults for all):
```
--no_keep... | Unfortunately, since the two arguments are accepted, there is no way for us to automate a better documentation of them from the `HfArgumentParser` (if you have ideas, by all means!) so you should rely on the documentation of [`TrainingArguments`](https://huggingface.co/transformers/main_classes/trainer.html#trainingarg... | 2021-10-04 15:07:51+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_hf_argparser.py:HfArgumentParserTest:test_with_list', 'tests/test_hf_argparser.py:HfArgumentParserTest:test_with_required', 'tests/test_hf_argparser.py:HfArgumentParserTest:test_integration_training_args', 'tests/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/test_hf_argparser.py:HfArgumentP... | ['tests/test_hf_argparser.py:HfArgumentParserTest:test_with_default_bool'] | null | python -m pytest /testbed/tests/test_hf_argparser.py -v --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_add_dataclass_arguments"] |
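A sketch of the fixed boolean handling, assuming the post-fix parser behaviour:

```python
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class ExampleArguments:
    foo: bool = field(default=False, metadata={"help": "an example flag"})

parser = HfArgumentParser(ExampleArguments)
(args,) = parser.parse_args_into_dataclasses(args=["--foo"])
print(args.foo)  # True; and --help now reports `default: False` correctly
```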
huggingface/transformers | 13,919 | huggingface__transformers-13919 | ['13880'] | 279ce5b705a0b8689f2a8e5d5258dbb5421c9e6c | diff --git a/src/transformers/generation_stopping_criteria.py b/src/transformers/generation_stopping_criteria.py
--- a/src/transformers/generation_stopping_criteria.py
+++ b/src/transformers/generation_stopping_criteria.py
@@ -71,6 +71,12 @@ class MaxNewTokensCriteria(StoppingCriteria):
"""
def __init__(sel... | diff --git a/tests/test_generation_utils.py b/tests/test_generation_utils.py
--- a/tests/test_generation_utils.py
+++ b/tests/test_generation_utils.py
@@ -24,7 +24,13 @@
if is_torch_available():
import torch
- from transformers import BartForConditionalGeneration, BartTokenizer, top_k_top_p_filtering
+ fr... | GPT-J float16 model output stopping after first word
## Environment info
- `transformers` version: 4.11.2
- Platform: Linux-5.4.0-1045-aws-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): ... | Hi! This is because the `max_length` argument specifies the total length, including the prompt tokens; here the prompt is 209 tokens long, which is more than `max_length`, hence only one token is generated.
If you instead want to specify how many new tokens to generate, use the `max_new_tokens` ar... | 2021-10-07 10:27:12+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_generation_utils.py:GenerationIntegrationTests:test_max_length_backward_compat_greedy', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_max_length_backward_compat_group_beam_search', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_max_length_backward_compat_sample', 'tests/t... | ['tests/test_generation_utils.py:GenerationIntegrationTests:test_max_new_tokens_decoder_only', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_max_new_tokens_encoder_decoder'] | null | python -m pytest /testbed/tests/test_generation_utils.py --json-report --json-report-file=report.json -v | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/generation_utils.py->module->class_definition:GenerationMixin->function_definition:_get_stopping_criteria", "src/transformers/generation_stopping_criteria.py->module->class_definition:MaxNewTokensCriteria->function_definition:__init__", "src/transformers/generation_utils.py->module->class_definition:... |
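The distinction the hint draws can be sketched as follows, using a tiny public checkpoint as a stand-in:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("sshleifer/tiny-gpt2")  # tiny stand-in
model = GPT2LMHeadModel.from_pretrained("sshleifer/tiny-gpt2")

inputs = tokenizer("a long prompt " * 10, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[-1]
# max_new_tokens budgets only the continuation, so a long prompt can no
# longer silently exhaust the whole max_length budget
output_ids = model.generate(**inputs, max_new_tokens=20)
print(output_ids.shape[-1] - prompt_len)  # at most 20 new tokens
```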
huggingface/transformers | 13,988 | huggingface__transformers-13988 | ['13779'] | 408b2d2bd08f667cf4154730cc323c4e49657eed | diff --git a/src/transformers/models/byt5/tokenization_byt5.py b/src/transformers/models/byt5/tokenization_byt5.py
--- a/src/transformers/models/byt5/tokenization_byt5.py
+++ b/src/transformers/models/byt5/tokenization_byt5.py
@@ -237,7 +237,7 @@ def convert_tokens_to_string(self, tokens):
else:
... | diff --git a/tests/test_tokenization_byt5.py b/tests/test_tokenization_byt5.py
--- a/tests/test_tokenization_byt5.py
+++ b/tests/test_tokenization_byt5.py
@@ -290,6 +290,22 @@ def test_special_tokens_initialization_with_non_empty_additional_special_tokens(
),
)
+ def test_deco... | ByT5: problem with tokenizer.decode()
## Environment info
- transformers version: 4.11.0
- Platform: Google Colab
- Python version: 3.7.12
- Using GPU in script?: NO
- Using distributed or parallel set-up in script?: NO
### Who can help
ByT5: @patrickvonplaten
Documentation: @sgugger
## Information
... | Hey :)
for faster debugging this can be break-downed to:
```python
from transformers import T5ForConditionalGeneration, ByT5Tokenizer
model_checkpoint = 'google/byt5-small'
tokenizer = ByT5Tokenizer.from_pretrained(model_checkpoint)
print(tokenizer.decode([258], skip_special_tokens=True, clean_up_tokeniza... | 2021-10-13 18:02:20+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_tokenization_byt5.py:ByT5TokenizationTest:test_tokenizer_mismatch_warning', 'tests/test_tokenization_byt5.py:ByT5TokenizationTest:test_tokenizer_slow_store_full_signature', 'tests/test_tokenization_byt5.py:ByT5TokenizationTest:test_build_inputs_with_special_tokens', 'tests/test_tokenization_byt5.py:ByT5Tok... | ['tests/test_tokenization_byt5.py:ByT5TokenizationTest:test_decode_single_bytes'] | null | python -m pytest /testbed/tests/test_tokenization_byt5.py -v --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/byt5/tokenization_byt5.py->module->class_definition:ByT5Tokenizer->function_definition:convert_tokens_to_string"] |
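A byte-level round trip that exercises the fixed `convert_tokens_to_string` path, assuming the public `google/byt5-small` checkpoint:

```python
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")
ids = tokenizer("hello", add_special_tokens=False).input_ids
print(ids)                                              # one id per UTF-8 byte
print(tokenizer.decode(ids, skip_special_tokens=True))  # 'hello'
# decoding an id for a lone special token no longer crashes:
print(tokenizer.decode([tokenizer.eos_token_id], skip_special_tokens=True))  # ''
```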
huggingface/transformers | 13,989 | huggingface__transformers-13989 | ['13522'] | 408b2d2bd08f667cf4154730cc323c4e49657eed | diff --git a/docs/source/model_doc/auto.rst b/docs/source/model_doc/auto.rst
--- a/docs/source/model_doc/auto.rst
+++ b/docs/source/model_doc/auto.rst
@@ -27,7 +27,32 @@ Instantiating one of :class:`~transformers.AutoConfig`, :class:`~transformers.Au
will create a model that is an instance of :class:`~transformers.B... | diff --git a/tests/test_configuration_auto.py b/tests/test_configuration_auto.py
--- a/tests/test_configuration_auto.py
+++ b/tests/test_configuration_auto.py
@@ -14,6 +14,7 @@
# limitations under the License.
import os
+import tempfile
import unittest
from transformers.models.auto.configuration_auto import CON... | The new impl for CONFIG_MAPPING prevents users from adding any custom models
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.10+
- Platform: Ubuntu 18.04
-... | Adding a config/model/tokenizer to those constants wasn't really supported before (but I agree it may have worked in some situations). A mechanism to add a custom model/config/tokenizer is on the roadmap!
Slightly different but which may be of interest, we are also starting to implement support for custom modeling (so... | 2021-10-13 18:33:16+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
&& rm -rf /var/lib/apt/lists/*
# Copy the repository contents
COPY . .
# Install Py... | ['tests/test_modeling_auto.py:AutoModelTest:test_from_pretrained_identifier', 'tests/test_modeling_auto.py:AutoModelTest:test_parents_and_children_in_mappings', 'tests/test_configuration_auto.py:AutoConfigTest:test_config_model_type_from_model_identifier', 'tests/test_modeling_auto.py:AutoModelTest:test_from_pretrained... | ['tests/test_tokenization_auto.py:AutoTokenizerTest:test_new_tokenizer_fast_registration', 'tests/test_configuration_auto.py:AutoConfigTest:test_new_config_registration', 'tests/test_modeling_tf_auto.py:TFAutoModelTest:test_new_model_registration', 'tests/test_tokenization_auto.py:AutoTokenizerTest:test_new_tokenizer_r... | null | python -m pytest -v /testbed/tests/test_configuration_auto.py /testbed/tests/test_modeling_auto.py /testbed/tests/test_modeling_tf_auto.py /testbed/tests/test_tokenization_auto.py --junitxml=test-results.xml | Feature | false | false | false | true | 18 | 7 | 25 | false | false | ["src/transformers/models/auto/auto_factory.py->module->class_definition:_LazyAutoMapping->function_definition:items", "src/transformers/models/auto/tokenization_auto.py->module->class_definition:AutoTokenizer->function_definition:register", "src/transformers/models/auto/auto_factory.py->module->class_definition:_LazyA... |
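A sketch of the registration hooks this instance introduces, with names taken from its test diff (`AutoConfig.register`); the `CustomConfig` class is just an illustration:

```python
from transformers import AutoConfig, PretrainedConfig

class CustomConfig(PretrainedConfig):
    model_type = "custom"

AutoConfig.register("custom", CustomConfig)
config = AutoConfig.for_model("custom")
print(type(config).__name__)  # CustomConfig
# AutoModel.register(CustomConfig, CustomModel) and AutoTokenizer.register(...)
# follow the same pattern for models and tokenizers.
```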
huggingface/transformers | 14,355 | huggingface__transformers-14355 | ['14332'] | 700a748fe6f0ed62185710f20e1c78e083edc14b | diff --git a/docs/source/model_doc/segformer.rst b/docs/source/model_doc/segformer.rst
--- a/docs/source/model_doc/segformer.rst
+++ b/docs/source/model_doc/segformer.rst
@@ -38,6 +38,58 @@ Cityscapes validation set and shows excellent zero-shot robustness on Cityscapes
This model was contributed by `nielsr <https://h... | diff --git a/tests/test_feature_extraction_beit.py b/tests/test_feature_extraction_beit.py
--- a/tests/test_feature_extraction_beit.py
+++ b/tests/test_feature_extraction_beit.py
@@ -17,6 +17,7 @@
import unittest
import numpy as np
+from datasets import load_dataset
from transformers.file_utils import is_torch_a... | `SegformerFeatureExtractor` trying to access non-existent `.ndim` attribute
## Environment info
- `transformers` version: 4.12.3
- Platform: AWS Sagemaker with Amazon Linux 2 base
- Python version: 3.8.12
### Who can help
@NielsRogge or @sgugger
## Information
Model I am using (Bert, XLNet ...): Segforme... | I did some more debugging on this and it looks like the problem is with the application of `self.pad()` to the `segmentation_maps`.
The `segmentation_maps` are `PIL.Image` objects when they are passed to `self.pad()`. This is not a problem for the `images` when they are passed to `self.pad()` because `images` have a... | 2021-11-10 12:20:52+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
build-essential \
python3-dev \
&& rm -rf /var/lib/apt/lists/*
# Copy reposit... | ['tests.test_feature_extraction_segformer.SegformerFeatureExtractionTest:test_init_without_params', 'tests.test_feature_extraction_beit.BeitFeatureExtractionTest:test_init_without_params', 'tests.test_feature_extraction_beit.BeitFeatureExtractionTest:test_feat_extract_to_json_string', 'tests.test_feature_extraction_bei... | ['tests.test_feature_extraction_segformer.SegformerFeatureExtractionTest:test_call_pil:', 'tests.test_feature_extraction_segformer.SegformerFeatureExtractionTest:test_call_numpy:', 'tests.test_feature_extraction_segformer.SegformerFeatureExtractionTest:test_call_pytorch:', 'tests.test_feature_extraction_segformer.Segfo... | null | python -m unittest /testbed/tests/test_feature_extraction_beit.py /testbed/tests/test_feature_extraction_segformer.py -v | Bug Fix | false | false | false | true | 16 | 8 | 24 | false | false | ["src/transformers/models/segformer/feature_extraction_segformer.py->module->function_definition:is_seq_of", "src/transformers/models/segformer/feature_extraction_segformer.py->module->function_definition:_scale_size", "src/transformers/models/beit/feature_extraction_beit.py->module->class_definition:BeitFeatureExtract... |
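A sketch of the call path the `.ndim` bug above exercises: PIL segmentation maps passed next to images. Dummy inputs; the keyword names are an assumption about the current API:

```python
import numpy as np
from PIL import Image
from transformers import SegformerFeatureExtractor

feature_extractor = SegformerFeatureExtractor()
image = Image.new("RGB", (512, 512))
seg_map = Image.fromarray(np.zeros((512, 512), dtype=np.uint8))  # a PIL image, as in the report

encoding = feature_extractor(images=image, segmentation_maps=seg_map, return_tensors="pt")
print(encoding["pixel_values"].shape, encoding["labels"].shape)
```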
huggingface/transformers | 14,779 | huggingface__transformers-14779 | ['12118'] | 7ae6f070044b0171a71f3269613bf02fd9fca6f2 | diff --git a/src/transformers/generation_utils.py b/src/transformers/generation_utils.py
--- a/src/transformers/generation_utils.py
+++ b/src/transformers/generation_utils.py
@@ -43,6 +43,7 @@
from .generation_stopping_criteria import (
MaxLengthCriteria,
MaxTimeCriteria,
+ StoppingCriteria,
Stopping... | diff --git a/tests/test_generation_utils.py b/tests/test_generation_utils.py
--- a/tests/test_generation_utils.py
+++ b/tests/test_generation_utils.py
@@ -52,7 +52,7 @@
TopKLogitsWarper,
TopPLogitsWarper,
)
- from transformers.generation_stopping_criteria import MaxLengthCriteria, StoppingCrit... | Passing a custom stopping_criteria list to model.generate() yields a multiple value error for that keyword arg
## Environment info
<!-- You can run the command `transformers-... | Hey @bitbanger,
Could you provide a reproducible code snippet that we could just copy paste into a python shell to reproduce the error? :-) Thanks!
Hi there! Thanks for your response! Sure, here you go. I've confirmed that this code yields the error when run in the environment described in my report:
```
import ... | 2021-12-15 11:28:36+00:00 | Python | FROM public.ecr.aws/docker/library/python:3.8-slim as builder
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
WORKDIR /testbed
# Install build dependencies
RUN apt-get update && apt-get install -y git build-essential python3-dev && rm -rf /var/lib/apt/lists/*
# Copy all repository files
... | ['tests/test_generation_utils.py:GenerationIntegrationTests:test_encoder_decoder_generate_with_inputs_embeds', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_max_length_backward_compat_group_beam_search', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_generate_too_many_encoder_kwargs',... | ['tests/test_generation_utils.py:GenerationIntegrationTests:test_custom_stopping_criteria', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_custom_logits_processor', 'tests/test_generation_utils.py:GenerationIntegrationTests:test_custom_stopping_criteria_overload_error'] | null | python -m pytest -v --tb=short /testbed/tests/test_generation_utils.py | Bug Fix | false | false | false | true | 5 | 1 | 6 | false | false | ["src/transformers/generation_utils.py->module->class_definition:GenerationMixin", "src/transformers/generation_utils.py->module->class_definition:GenerationMixin->function_definition:_get_stopping_criteria", "src/transformers/generation_utils.py->module->class_definition:GenerationMixin->function_definition:generate",... |
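A hedged sketch of the call shape this fix unblocks: a custom criterion passed via `stopping_criteria` without the old duplicate-keyword error (gpt2 purely for illustration):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList

class StopAfterNTokens(StoppingCriteria):
    def __init__(self, n: int):
        self.n = n

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        return input_ids.shape[-1] >= self.n  # stop once the sequence is long enough

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hello", return_tensors="pt")
out = model.generate(inputs.input_ids, stopping_criteria=StoppingCriteriaList([StopAfterNTokens(10)]))
```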
huggingface/transformers | 15,158 | huggingface__transformers-15158 | ['15156'] | c4f7eb124b218741d66dd1d86b5d744024a78f6f | diff --git a/src/transformers/models/bert/tokenization_bert_fast.py b/src/transformers/models/bert/tokenization_bert_fast.py
--- a/src/transformers/models/bert/tokenization_bert_fast.py
+++ b/src/transformers/models/bert/tokenization_bert_fast.py
@@ -188,15 +188,17 @@ def __init__(
**kwargs,
)
-... | diff --git a/tests/test_tokenization_bert.py b/tests/test_tokenization_bert.py
--- a/tests/test_tokenization_bert.py
+++ b/tests/test_tokenization_bert.py
@@ -299,3 +299,40 @@ def test_offsets_with_special_characters(self):
[e[1] for e in expected_results], tokenizer_r.convert_ids_to_tokens(tokens[... | the `tokenize_chinese_chars` argument is not always taken into account with the fast version of the bert tokenizer
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` versi... | null | 2022-01-14 12:19:38+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_tokenization_bert.py:BertTokenizationTest:test_rust_and_python_full_tokenizers', 'tests/test_tokenization_bert.py:BertTokenizationTest:test_batch_encode_plus_batch_sequence_length', 'tests/test_tokenization_bert.py:BertTokenizationTest:test_saving_tokenizer_trainer', 'tests/test_tokenization_bert.py:BertTo... | ['tests/test_tokenization_bert.py:BertTokenizationTest:test_change_tokenize_chinese_chars'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/test_tokenization_bert.py | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["src/transformers/models/bert/tokenization_bert_fast.py->module->class_definition:BertTokenizerFast->function_definition:__init__"] |
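A minimal illustration of the flag the row above repairs; post-fix the fast tokenizer honours it like the slow one does (checkpoint chosen for illustration):

```python
from transformers import BertTokenizer, BertTokenizerFast

text = "你好 hello"
slow = BertTokenizer.from_pretrained("bert-base-multilingual-cased", tokenize_chinese_chars=False)
fast = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased", tokenize_chinese_chars=False)
print(slow.tokenize(text))
print(fast.tokenize(text))  # should now match instead of silently splitting CJK characters
```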
huggingface/transformers | 15,473 | huggingface__transformers-15473 | ['15466'] | b9418a1d97d33dac0e7ec1df7fc1178f361104c5 | diff --git a/examples/pytorch/language-modeling/run_clm.py b/examples/pytorch/language-modeling/run_clm.py
--- a/examples/pytorch/language-modeling/run_clm.py
+++ b/examples/pytorch/language-modeling/run_clm.py
@@ -30,7 +30,7 @@
from typing import Optional
import datasets
-from datasets import load_dataset
+from da... | diff --git a/tests/test_trainer.py b/tests/test_trainer.py
--- a/tests/test_trainer.py
+++ b/tests/test_trainer.py
@@ -288,6 +288,7 @@ def get_regression_trainer(a=0, b=0, double_output=False, train_len=64, eval_len
data_collator = kwargs.pop("data_collator", None)
optimizers = kwargs.pop("optimizers"... | Preprocess/transform logits before caching them for computing metrics.
# 🚀 Feature request
I think it'd be nice to have a simple way to preprocess the logits before caching them for computing metrics.
## Motivation
When the `Trainer`'s `compute_metrics` is set, during evaluation the logits are accumulated (som... | I think it would be a valuable addition; you describe the problematic situation very well, for instance when someone wants to compute perplexity with a language model that has a very large vocab size.
The `TrainingArguments` can't have a new argument of type callable, but I think we could have a new argument in th... | 2022-02-02 07:06:19+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_trainer.py:TrainerOptimizerChoiceTest:test_fused_adam_no_apex', 'tests/test_trainer.py:TrainerIntegrationTest:test_trainer_works_with_dict', 'tests/test_trainer.py:TrainerOptimizerChoiceTest:test_fused_adam', 'tests/test_trainer.py:TrainerIntegrationTest:test_no_wd_param_group', 'tests/test_trainer.py:Trai... | ['tests/test_trainer.py:TrainerIntegrationTest:test_number_of_steps_in_training', 'tests/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_gradient_accumulation', 'tests/test_trainer.py:TrainerIntegrationPrerunTest:test_training_loss', 'tests/test_trainer.py:TrainerIntegrationTest:test_training_arguments... | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/test_trainer.py | Feature | false | false | false | true | 7 | 2 | 9 | false | false | ["src/transformers/trainer.py->module->class_definition:Trainer->function_definition:__init__", "examples/pytorch/language-modeling/run_mlm.py->module->function_definition:main->function_definition:preprocess_logits_for_metrics", "examples/pytorch/language-modeling/run_clm.py->module->function_definition:main->function... |
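A sketch of the hook this row adds to `Trainer`; `model` and `eval_dataset` are assumed to be defined elsewhere:

```python
from transformers import Trainer, TrainingArguments

def preprocess_logits_for_metrics(logits, labels):
    # keep only predicted ids so the eval loop never caches vocab-sized logits
    return logits.argmax(dim=-1)

def compute_metrics(eval_pred):
    preds, labels = eval_pred  # preds are the argmax ids returned above
    return {"accuracy": float((preds == labels).mean())}

trainer = Trainer(
    model=model,                      # assumption: built elsewhere
    args=TrainingArguments(output_dir="out"),
    eval_dataset=eval_dataset,        # assumption: built elsewhere
    compute_metrics=compute_metrics,
    preprocess_logits_for_metrics=preprocess_logits_for_metrics,
)
```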
huggingface/transformers | 15,795 | huggingface__transformers-15795 | ['15739'] | 8481ecefbd7e701bc061b321cb1695d16eac95a9 | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -14,13 +14,13 @@
import dataclasses
import json
-import re
import sys
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeEr... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -88,8 +88,17 @@ def __post_init__(self):
self.required_enum = BasicEnum(self.required_enum)
+@dataclass
+class StringLiteralAnnotationExample:
+ ... | Add compatibility for Postponed Evaluation of Annotations (PEP 563)
Hello,
The code says that it will add compatibility for Postponed Evaluation of Annotations ([PEP 563](https://www.python.org/dev/peps/pep-0563/)) when Python 3.9 is released (which already happened on 2020.10.5). Is there any plan to complete this?... | Hey! We don't have the bandwidth to do it right now, but we'd welcome contributions! Let me tag this as a good first issue, and let me know if you're interested in taking a stab at it!
I'm glad to help with that, though it may take some time. I've never contributed here before; I'll try to follow the CONTRIBUTING.md, post pro... | 2022-02-23 18:01:27+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_optional', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_list', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_default_bool', 'tests/utils/test_hf_ar... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation'] | null | pytest -v --tb=short /testbed/tests/utils/test_hf_argparser.py --junitxml=test-results.xml | Feature | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_parse_dataclass_field", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_add_dataclass_argu... |
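A sketch of what the row above enables: under PEP 563 every annotation becomes a string literal, and the parser now resolves them (field names invented):

```python
from __future__ import annotations  # PEP 563: stringified annotations

from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class TrainArgs:
    epochs: int = 3
    name: str = field(default="run", metadata={"help": "experiment name"})

(args,) = HfArgumentParser(TrainArgs).parse_args_into_dataclasses(args=["--epochs", "5"])
print(args.epochs)  # 5; the "int" string annotation was resolved correctly
```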
huggingface/transformers | 15,831 | huggingface__transformers-15831 | ['15109'] | ad0d7d17451fea6457c9ee81898f7f64ad7ef848 | diff --git a/src/transformers/models/marian/configuration_marian.py b/src/transformers/models/marian/configuration_marian.py
--- a/src/transformers/models/marian/configuration_marian.py
+++ b/src/transformers/models/marian/configuration_marian.py
@@ -112,6 +112,7 @@ class MarianConfig(PretrainedConfig):
def __init... | diff --git a/tests/marian/test_modeling_marian.py b/tests/marian/test_modeling_marian.py
--- a/tests/marian/test_modeling_marian.py
+++ b/tests/marian/test_modeling_marian.py
@@ -268,6 +268,58 @@ def test_generate_fp16(self):
model.generate(input_ids, attention_mask=attention_mask)
model.generate(num_... | Why is the Marian-to-Torch converter hardcoded for a tied vocab?
I see the following condition:
https://github.com/huggingface/transformers/blob/16f0b7d72c6d4e122957392c342b074aa2c5c519/src/transformers/models/marian/convert_marian_to_pytorch.py#L462
While training my Marian model, I do not want to tie my source and tar... | I understand that this was created only to add support for [baseline models released from Tatoeba Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models).
But it would be great if we could generalize it. Thanks!
cc @patil-suraj
Hi @sshleifer
Just saw your comment on this thread: https://github... | 2022-02-25 13:27:44+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/marian/test_tokenization_marian.py:MarianTokenizationTest:test_padding_with_attention_mask', 'tests/marian/test_modeling_marian.py:MarianStandaloneDecoderModelTest:test_feed_forward_chunking', 'tests/marian/test_modeling_marian.py:MarianStandaloneDecoderModelTest:test_save_load_keys_to_ignore_on_save', 'tests/m... | ['tests/marian/test_modeling_marian.py:MarianModelTest:test_share_encoder_decoder_embeddings', 'tests/marian/test_modeling_marian.py:MarianModelTest:test_resize_decoder_token_embeddings'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/marian/test_modeling_marian.py /testbed/tests/marian/test_tokenization_marian.py | Feature | false | false | false | true | 29 | 11 | 40 | false | false | ["src/transformers/models/marian/convert_marian_to_pytorch.py->module->function_definition:load_layers_", "src/transformers/models/marian/tokenization_marian.py->module->class_definition:MarianTokenizer->function_definition:get_tgt_vocab", "src/transformers/models/marian/modeling_marian.py->module->class_definition:Mar... |
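Judging by the F2P test names, the new switch is `share_encoder_decoder_embeddings` with a separate `decoder_vocab_size`; a sketch under that assumption (all sizes invented):

```python
from transformers import MarianConfig, MarianMTModel

config = MarianConfig(
    vocab_size=32000,
    decoder_vocab_size=28000,                # untied target vocabulary
    share_encoder_decoder_embeddings=False,  # keep source/target embeddings separate
)
model = MarianMTModel(config)  # randomly initialised, for shape-checking only
```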
huggingface/transformers | 15,843 | huggingface__transformers-15843 | ['15840'] | 84eaa6acf582206dba33135727dc3bfff05a7e9c | diff --git a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
--- a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
+++ b/src/transformers/models/wav2vec2/tokenization_wav2vec2.py
@@ -258,6 +258,8 @@ def convert_tokens_to_string(
""... | diff --git a/tests/pipelines/test_pipelines_automatic_speech_recognition.py b/tests/pipelines/test_pipelines_automatic_speech_recognition.py
--- a/tests/pipelines/test_pipelines_automatic_speech_recognition.py
+++ b/tests/pipelines/test_pipelines_automatic_speech_recognition.py
@@ -29,7 +29,7 @@
)
from transformers.p... | Timestamps in AutomaticSpeechRecognitionPipeline not aligned in sample space
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.1... | Hi @lemswasabi,
Thanks for the report. We unfortunately cannot drop the stride: when there's batching involved, the tensors cannot have different shapes. We can, however, keep track of the stride and fix the timestamps.
I'll probably submit a patch tomorrow.
Hi @Narsil,
Thanks for having a look at it. | 2022-02-28 08:09:21+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_training_new_tokenizer_with_special_tokens_change', 'tests/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_truncation_side_in_kwargs', 'tests/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_saving_toke... | ['tests/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_word_offsets_from_char_offsets'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/pipelines/test_pipelines_automatic_speech_recognition.py /testbed/tests/wav2vec2/test_tokenization_wav2vec2.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 7 | 0 | 7 | false | false | ["src/transformers/pipelines/automatic_speech_recognition.py->module->class_definition:AutomaticSpeechRecognitionPipeline->function_definition:_forward", "src/transformers/pipelines/automatic_speech_recognition.py->module->function_definition:apply_stride", "src/transformers/pipelines/automatic_speech_recognition.py->m... |
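A hedged sketch of recovering word-level timestamps through the tokenizer offsets this row aligns; `input_values` is assumed to be preprocessed 16 kHz audio:

```python
import torch
from transformers import AutoModelForCTC, AutoProcessor

processor = AutoProcessor.from_pretrained("facebook/wav2vec2-base-960h")
model = AutoModelForCTC.from_pretrained("facebook/wav2vec2-base-960h")

logits = model(input_values).logits  # assumption: input_values built elsewhere
pred_ids = torch.argmax(logits, dim=-1)[0]
decoded = processor.tokenizer.decode(pred_ids, output_word_offsets=True)

# convert model-frame offsets into seconds
ratio = model.config.inputs_to_logits_ratio / processor.feature_extractor.sampling_rate
for w in decoded.word_offsets:
    print(w["word"], w["start_offset"] * ratio, w["end_offset"] * ratio)
```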
huggingface/transformers | 15,913 | huggingface__transformers-15913 | ['15888'] | 439de3f7f98ccc0d0fc4b1e3a02fac9bb761c809 | diff --git a/src/transformers/models/clip/processing_clip.py b/src/transformers/models/clip/processing_clip.py
--- a/src/transformers/models/clip/processing_clip.py
+++ b/src/transformers/models/clip/processing_clip.py
@@ -23,17 +23,17 @@ class CLIPProcessor(ProcessorMixin):
r"""
Constructs a CLIP processor w... | diff --git a/tests/clip/test_processor_clip.py b/tests/clip/test_processor_clip.py
--- a/tests/clip/test_processor_clip.py
+++ b/tests/clip/test_processor_clip.py
@@ -21,7 +21,7 @@
import numpy as np
import pytest
-from transformers import CLIPTokenizer
+from transformers import CLIPTokenizer, CLIPTokenizerFast
fr... | CLIPProcessor with CLIPTokenizerFast
# 🚀 Feature request
Currently, `CLIPProcessor` doesn't support `CLIPTokenizerFast`; it requires `CLIPTokenizer`.
In my view, there is no reason not to support `CLIPTokenizerFast` in `CLIPProcessor`.
## Motivation
<!-- Please outline the motivation for the proposal. Is your ... | Hey @cosmoquester!
The `CLIPTokenizerFast` was not used in the processor because there was an issue with it, which is now fixed, cf. #15067
So yes, we can now support `CLIPTokenizerFast` for `CLIPProcessor`. Feel free to open a PR! | 2022-03-03 13:04:08+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/clip/test_processor_clip.py:CLIPProcessorTest:test_processor', 'tests/clip/test_processor_clip.py:CLIPProcessorTest:test_tokenizer_decode', 'tests/clip/test_processor_clip.py:CLIPProcessorTest:test_feature_extractor', 'tests/clip/test_processor_clip.py:CLIPProcessorTest:test_tokenizer'] | ['tests/clip/test_processor_clip.py:CLIPProcessorTest:test_save_load_pretrained_additional_features', 'tests/clip/test_processor_clip.py:CLIPProcessorTest:test_save_load_pretrained_default'] | null | pytest -v --tb=short /testbed/tests/clip/test_processor_clip.py --junitxml=test-results.xml | Feature | false | false | false | true | 3 | 1 | 4 | false | false | ["src/transformers/models/clip/processing_clip.py->module->class_definition:CLIPProcessor->function_definition:__call__", "src/transformers/models/clip/processing_clip.py->module->class_definition:CLIPProcessor", "src/transformers/models/clip/processing_clip.py->module->class_definition:CLIPProcessor->function_definiti... |
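A sketch of the now-supported combination (checkpoint name only for illustration):

```python
from transformers import CLIPFeatureExtractor, CLIPProcessor, CLIPTokenizerFast

processor = CLIPProcessor(
    feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"),
    tokenizer=CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32"),
)
print(processor.tokenizer.is_fast)  # True
```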
huggingface/transformers | 16,198 | huggingface__transformers-16198 | ['16185'] | d35e0c62477d8a99baca3d2ae2e64ec62b64527c | diff --git a/src/transformers/models/clip/configuration_clip.py b/src/transformers/models/clip/configuration_clip.py
--- a/src/transformers/models/clip/configuration_clip.py
+++ b/src/transformers/models/clip/configuration_clip.py
@@ -15,6 +15,8 @@
""" CLIP model configuration"""
import copy
+import os
+from typing... | diff --git a/tests/clip/test_modeling_clip.py b/tests/clip/test_modeling_clip.py
--- a/tests/clip/test_modeling_clip.py
+++ b/tests/clip/test_modeling_clip.py
@@ -588,6 +588,21 @@ def _create_and_check_torchscript(self, config, inputs_dict):
self.assertTrue(models_equal)
+ def test_load_vision_text_... | CLIPVisionModel errors on trying to load openai/clip-vit-base-patch16
`CLIPVisionModel` errors on trying to load [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16), which was added to HF (using `CLIPModel` for loading `patch16` as the documentation example for that repo works without er... | Thank you for reporting this! Looking into it.
Found the issue: `CLIPVisionConfig` does not correctly copy the vision arguments from the `CLIPConfig`; it uses the default values, which are defined for the patch32 model.
A quick fix to get this working for now is to load `CLIPConfig`, retrieve the `vision_config` fr... | 2022-03-16 13:57:01+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/clip/test_modeling_clip.py:CLIPTextModelTest:test_torch_fx', 'tests/clip/test_modeling_clip.py:CLIPModelTest:test_determinism', 'tests/clip/test_modeling_clip.py:CLIPTextModelTest:test_head_pruning_integration', 'tests/clip/test_modeling_clip.py:CLIPModelTest:test_inputs_embeds', 'tests/clip/test_modeling_clip.... | ['tests/clip/test_modeling_clip.py:CLIPModelTest:test_load_vision_text_config'] | null | python -m pytest -v --tb=short --show-capture=no /testbed/tests/clip/test_modeling_clip.py --junitxml=test-results.xml | Bug Fix | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/clip/configuration_clip.py->module->class_definition:CLIPVisionConfig->function_definition:from_pretrained", "src/transformers/models/clip/configuration_clip.py->module->class_definition:CLIPTextConfig", "src/transformers/models/clip/configuration_clip.py->module->class_definition:CLIPTextConf... |
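The workaround quoted in the hints, spelled out as a runnable sketch:

```python
from transformers import CLIPConfig, CLIPVisionModel

# pull the nested vision config instead of trusting the (patch32) defaults
vision_config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch16").vision_config
model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16", config=vision_config)
```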
huggingface/transformers | 16,661 | huggingface__transformers-16661 | ['16660', '16660'] | 33cb21150c034aae0f11b9ab6e38752a7c6d1784 | diff --git a/src/transformers/tokenization_utils_base.py b/src/transformers/tokenization_utils_base.py
--- a/src/transformers/tokenization_utils_base.py
+++ b/src/transformers/tokenization_utils_base.py
@@ -1150,35 +1150,35 @@ def additional_special_tokens_ids(self) -> List[int]:
@bos_token_id.setter
def bo... | diff --git a/tests/byt5/test_tokenization_byt5.py b/tests/byt5/test_tokenization_byt5.py
--- a/tests/byt5/test_tokenization_byt5.py
+++ b/tests/byt5/test_tokenization_byt5.py
@@ -332,3 +332,41 @@ def test_convert_tokens_to_string_format(self):
string = tokenizer.convert_tokens_to_string(tokens)
... | Tokenizers' setters for the ids of special tokens don't work
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?)... | 2022-04-08 01:31:48+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/byt5/test_tokenization_byt5.py:ByT5TokenizationTest:test_is_fast', 'tests/canine/test_tokenization_canine.py:CanineTokenizationTest:test_special_tokens_initialization_with_non_empty_additional_special_tokens', 'tests/canine/test_tokenization_canine.py:CanineTokenizationTest:test_save_sentencepiece_tokenizer', '... | ['tests/byt5/test_tokenization_byt5.py:ByT5TokenizationTest:test_tokenizers_common_ids_setters', 'tests/canine/test_tokenization_canine.py:CanineTokenizationTest:test_tokenizers_common_ids_setters'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/byt5/test_tokenization_byt5.py /testbed/tests/canine/test_tokenization_canine.py /testbed/tests/test_tokenization_common.py | Bug Fix | false | true | false | false | 8 | 0 | 8 | false | false | ["src/transformers/tokenization_utils_base.py->module->class_definition:SpecialTokensMixin->function_definition:additional_special_tokens_ids", "src/transformers/tokenization_utils_base.py->module->class_definition:SpecialTokensMixin->function_definition:eos_token_id", "src/transformers/tokenization_utils_base.py->modu... | |
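A sketch of the setter behaviour the fix restores, using t5-small, whose pad token happens to have id 0:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
tok.pad_token_id = 0        # set by id rather than by string
print(tok.pad_token)        # resolved via convert_ids_to_tokens -> "<pad>"
```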
huggingface/transformers | 16,814 | huggingface__transformers-16814 | ['15536'] | dee6f01636746dae6e73c3d258870b04d1b0832d | diff --git a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
--- a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
+++ b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
@@ -22,7 +22,7 @@
fr... | diff --git a/tests/encoder_decoder/test_modeling_encoder_decoder.py b/tests/encoder_decoder/test_modeling_encoder_decoder.py
--- a/tests/encoder_decoder/test_modeling_encoder_decoder.py
+++ b/tests/encoder_decoder/test_modeling_encoder_decoder.py
@@ -142,6 +142,22 @@ def check_encoder_decoder_model(
output... | Error when passing encoder_outputs as tuple to EncoderDecoder models
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-27-g... | Hey @jsnfly,
Regarding the first point: agreed, it'd be good to check if the input is a tuple and, if it is, wrap it into a `ModelOutput` object. Would you be interested in opening a PR for this? :-)
Regarding the 2nd point - that's very interesting (cc @sanchit-gandhi). Also makes a lot of sense since ASR b... | 2022-04-18 07:46:21+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/encoder_decoder/test_modeling_encoder_decoder.py:BertEncoderDecoderModelTest:test_relative_position_embeds', 'tests/encoder_decoder/test_modeling_encoder_decoder.py:GPT2EncoderDecoderModelTest:test_encoder_decoder_model_from_pretrained', 'tests/encoder_decoder/test_modeling_encoder_decoder.py:BertGenerationEnco... | ['tests/encoder_decoder/test_modeling_encoder_decoder.py:BartEncoderDecoderModelTest:test_encoder_decoder_model', 'tests/encoder_decoder/test_modeling_encoder_decoder.py:ProphetNetEncoderDecoderModelTest:test_encoder_decoder_model', 'tests/encoder_decoder/test_modeling_encoder_decoder.py:BertEncoderDecoderModelTest:tes... | null | pytest -v --tb=short --show-capture=no /testbed/tests/encoder_decoder/test_modeling_encoder_decoder.py | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/encoder_decoder/modeling_encoder_decoder.py->module->class_definition:EncoderDecoderModel->function_definition:forward", "src/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py->module->class_definition:VisionEncoderDecoderModel->function_definition:forward", "src/tr... |
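A hedged sketch of the tuple form that now works; `model`, `input_ids` and `decoder_input_ids` are assumed to exist:

```python
encoder_outputs = model.encoder(input_ids=input_ids)

# previously only the ModelOutput form was accepted; after the fix a plain
# tuple such as (last_hidden_state,) is wrapped into a ModelOutput internally
outputs = model(
    encoder_outputs=(encoder_outputs.last_hidden_state,),
    decoder_input_ids=decoder_input_ids,
)
```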
huggingface/transformers | 16,819 | huggingface__transformers-16819 | ['16810'] | 33cd4be57690ec5f2c32cfb02970898fab706218 | diff --git a/src/transformers/activations.py b/src/transformers/activations.py
--- a/src/transformers/activations.py
+++ b/src/transformers/activations.py
@@ -152,19 +152,19 @@ def forward(self, input: Tensor) -> Tensor:
ACT2FN = {
- "relu": nn.ReLU(),
- "silu": SiLUActivation(),
- "swish": SiLUActivation... | diff --git a/tests/utils/test_activations.py b/tests/utils/test_activations.py
--- a/tests/utils/test_activations.py
+++ b/tests/utils/test_activations.py
@@ -46,18 +46,19 @@ def test_gelu_10(self):
self.assertTrue(torch.allclose(y_gelu * clipped_mask, y_gelu_10 * clipped_mask))
def test_get_activation(... | Missing activation function
I think the sigmoid / softmax activation function is missing here
https://github.com/huggingface/transformers/blob/31ec2cb2badfbdd4c1ac9c6c9b8a74e974984206/src/transformers/models/roberta/modeling_tf_roberta.py#L1299
| null | 2022-04-18 15:46:00+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_activations.py:TestActivations:test_get_activation', 'tests/utils/test_activations.py:TestActivations:test_gelu_versions', 'tests/utils/test_activations_tf.py:TestTFActivations:test_gelu_10', 'tests/utils/test_activations.py:TestActivations:test_gelu_10'] | ['tests/utils/test_activations_tf.py:TestTFActivations:test_get_activation'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_activations.py /testbed/tests/utils/test_activations_tf.py | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | [] |
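A minimal check of the activation lookups involved, assuming current module paths:

```python
from transformers.activations import get_activation
from transformers.activations_tf import get_tf_activation

torch_sigmoid = get_activation("sigmoid")   # PyTorch side of the mapping
tf_sigmoid = get_tf_activation("sigmoid")   # the TF lookup the linked line needs
```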
huggingface/transformers | 17,053 | huggingface__transformers-17053 | ['16976'] | 1073f00d4ea3eae6279c80d311387012b20d0113 | diff --git a/docs/source/en/_toctree.yml b/docs/source/en/_toctree.yml
--- a/docs/source/en/_toctree.yml
+++ b/docs/source/en/_toctree.yml
@@ -61,6 +61,8 @@
title: Export 🤗 Transformers models
- local: performance
title: 'Performance and Scalability: How To Fit a Bigger Model and Train It Faster'
+ - loc... | diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py
--- a/tests/trainer/test_trainer.py
+++ b/tests/trainer/test_trainer.py
@@ -15,6 +15,7 @@
import dataclasses
import gc
+import json
import math
import os
import random
@@ -65,7 +66,7 @@
)
from transformers.trainer_utils import PREFIX_CH... | Bug: Finetuning large models resume checkpoint error
When finetuning a large model (e.g. Eleuther 6B), you shard the checkpoints upon saving [here](https://github.com/huggingface/transformers/blob/c79bbc3ba54a81dab2eac13d89f264ed64cb2460/src/transformers/modeling_utils.py#L193). However, upon resuming the checkpoint (a... | Indeed, I saw that yesterday and am working on a fix. | 2022-05-02 18:13:37+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_logging_inf_nan_filter', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_dynamic_shapes', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_load_best_model_at_end', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_fused_adam_n... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_shard_checkpoint'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/trainer/test_trainer.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 5 | 0 | 5 | false | false | ["src/transformers/modeling_utils.py->module->function_definition:load_sharded_checkpoint", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:_load_state_dict_in_model", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:_load_best_model", "src/transfor... |
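A sketch of the loader this row fixes for sharded checkpoints; `model` and the directory are assumptions:

```python
from transformers.modeling_utils import load_sharded_checkpoint

# the directory is expected to hold pytorch_model-00001-of-000NN.bin shards plus
# pytorch_model.bin.index.json, as written by save_pretrained for large models
load_sharded_checkpoint(model, "output/checkpoint-500")
```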
huggingface/transformers | 17,055 | huggingface__transformers-17055 | ['17032'] | 31616b8d613dcb7ac69b562d51b42d0db379f72f | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -732,13 +732,16 @@ def estimate_tokens(self, input_dict: Dict[str, Union[torch.Tensor, Any]]) -> in
Returns:
`int`: The total nu... | diff --git a/tests/trainer/test_trainer.py b/tests/trainer/test_trainer.py
--- a/tests/trainer/test_trainer.py
+++ b/tests/trainer/test_trainer.py
@@ -57,7 +57,6 @@
require_torch_bf16,
require_torch_gpu,
require_torch_multi_gpu,
- require_torch_non_multi_gpu,
require_torch_tf32,
require_torc... | [Trainer]: Resume training with `save_strategy="epoch"` does not load RNG state
### System Info
```shell
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.15.36-1-lts-x86_64-with-glibc2.33
- Python version: 3.8.12
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorf... | null | 2022-05-02 20:22:15+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_logging_inf_nan_filter', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_dynamic_shapes', 'tests/trainer/test_trainer.py:TrainerIntegrationTest:test_load_best_model_at_end', 'tests/trainer/test_trainer.py:TrainerOptimizerChoiceTest:test_fused_adam_n... | ['tests/trainer/test_trainer.py:TrainerIntegrationTest:test_resume_training_with_randomness'] | null | pytest -v --tb=short --show-capture=no --junitxml=test_output.xml /testbed/tests/trainer/test_trainer.py | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:__init__", "src/transformers/trainer.py->module->class_definition:Trainer->function_definition:train", "src/transformers/modeling_utils.py->module->class_definition:ModuleUtilsMixin->function_definition:estimate_tokens"] |
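The resume call the bug report exercises, as a two-line sketch (`trainer` assumed built elsewhere):

```python
# restores model, optimizer, scheduler and, post-fix, rng_state.pth as well
trainer.train(resume_from_checkpoint=True)           # latest checkpoint in output_dir
# or: trainer.train(resume_from_checkpoint="out/checkpoint-42")
```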
huggingface/transformers | 17,082 | huggingface__transformers-17082 | ['15735'] | d76d2a2af7babf73d6c5bc53facaccab05e912f8 | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -407,7 +407,7 @@ def converted(self) -> Tokenizer:
tokenizer.decoder = decoders.ByteLevel()
tokenize... | diff --git a/tests/models/deberta/test_tokenization_deberta.py b/tests/models/deberta/test_tokenization_deberta.py
--- a/tests/models/deberta/test_tokenization_deberta.py
+++ b/tests/models/deberta/test_tokenization_deberta.py
@@ -88,6 +88,12 @@ def test_full_tokenizer(self):
input_bpe_tokens = [0, 1, 2, 15, 1... | `DebertaTokenizer` always assigns token type ID 0
## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.15.13-051513-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU... | Looks like this is the change that introduced this behavior.
https://github.com/huggingface/transformers/commit/57c1749efabf5c86bcfd4e4e078567a63a7c8a81#diff-7ff4f35b72b8541520ea52c851b55bc2682da83e01e6e0ceeb5289f7dd2f0620R217
Good catch! Would you like to open a PR to fix this? | 2022-05-04 11:51:41+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/deberta/test_tokenization_deberta.py:DebertaTokenizationTest:test_special_tokens_map_equal', 'tests/models/deberta/test_tokenization_deberta.py:DebertaTokenizationTest:test_max_length_equal', 'tests/models/deberta/test_tokenization_deberta.py:DebertaTokenizationTest:test_fast_only_inputs', 'tests/models/... | ['tests/models/deberta/test_tokenization_deberta.py:DebertaTokenizationTest:test_token_type_ids'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_output.json /testbed/tests/models/deberta/test_tokenization_deberta.py | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/deberta/tokenization_deberta_fast.py->module->class_definition:DebertaTokenizerFast->function_definition:create_token_type_ids_from_sequences", "src/transformers/models/deberta/tokenization_deberta.py->module->class_definition:DebertaTokenizer->function_definition:create_token_type_ids_from_se... |
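A sketch of the behaviour under discussion; the 0/1 split in the comment reflects my reading of the fix, not a verified output:

```python
from transformers import DebertaTokenizer

tok = DebertaTokenizer.from_pretrained("microsoft/deberta-base")
enc = tok("first sentence", "second sentence")
print(enc["token_type_ids"])  # post-fix the second segment should be marked with 1s
```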
huggingface/transformers | 17,764 | huggingface__transformers-17764 | ['17745'] | 21a772426dee10003fb0111abec514c9dcefda35 | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -2458,7 +2458,7 @@ def _find_mismatched_keys(
if offload_state_dict:
# Load back temporarily offloaded state dict
- ... | diff --git a/tests/models/gpt_neox/test_modeling_gpt_neox.py b/tests/models/gpt_neox/test_modeling_gpt_neox.py
--- a/tests/models/gpt_neox/test_modeling_gpt_neox.py
+++ b/tests/models/gpt_neox/test_modeling_gpt_neox.py
@@ -218,6 +218,14 @@ def test_model_as_decoder_with_default_input_mask(self):
self.model_t... | GPT-NEOX RuntimeError
Hi, when I ran the GPT-NeoX model, I got "RuntimeError: batch1 dim 2 must match batch2 dim1" in modeling_gpt_neox.py, line 212.
While trying to debug and fix this problem, I found the code "present = None if use_cache else (key, value)" in modeling_gpt_neox.py, line 146.
Is that logical wro... | Hey @yupei9 - great catch! I think you're 100% right - do you want to open a PR to fix it? Also cc @sgugger | 2022-06-17 18:12:44+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/gpt_neox/test_modeling_gpt_neox.py:GPTNeoXModelTest:test_headmasking', 'tests/models/gpt_neox/test_modeling_gpt_neox.py:GPTNeoXModelTest:test_model_as_decoder_with_default_input_mask', 'tests/models/gpt_neox/test_modeling_gpt_neox.py:GPTNeoXModelTest:test_torch_fx', 'tests/models/gpt_neox/test_modeling_g... | ['tests/models/gpt_neox/test_modeling_gpt_neox.py:GPTNeoXModelTest:test_decoder_model_past_large_inputs'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/gpt_neox/test_modeling_gpt_neox.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/gpt_neox/modeling_gpt_neox.py->module->class_definition:GPTNeoXAttention->function_definition:forward", "src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:_load_pretrained_model"] |
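The one-character bug quoted above, side by side with the fix (illustrative fragment, not standalone code; `key`, `value` and `use_cache` live in the attention forward):

```python
# buggy: caches nothing exactly when caching was requested
present = None if use_cache else (key, value)

# fixed: return the key/value cache only when use_cache is set
present = (key, value) if use_cache else None
```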
huggingface/transformers | 18,851 | huggingface__transformers-18851 | ['18839'] | f719c0377f7f97c4bf9b6b54de209f4aad0aef4b | diff --git a/src/transformers/generation_beam_search.py b/src/transformers/generation_beam_search.py
--- a/src/transformers/generation_beam_search.py
+++ b/src/transformers/generation_beam_search.py
@@ -259,7 +259,7 @@ def process(
continue
if beam_indices is not None:
... | diff --git a/tests/generation/test_generation_beam_search.py b/tests/generation/test_generation_beam_search.py
--- a/tests/generation/test_generation_beam_search.py
+++ b/tests/generation/test_generation_beam_search.py
@@ -172,7 +172,7 @@ def cut_expected_tensor(tensor):
input_ids[correct_idx].tolist()... | BUG for beam_indices from model.generate()
### System Info
- `transformers` version: 4.22.0.dev0
- Platform: Linux-5.8.0-51-generic-x86_64-with-glibc2.10
- Python version: 3.8.13
- Huggingface_hub version: 0.8.1
- PyTorch version (GPU?): 1.12.1+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Fla... | Also, could you please check this? https://discuss.huggingface.co/t/larger-sum-logits-larger-sum-probability/22358
Also cc @gante for `generate` :) | 2022-09-01 11:11:16+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_generation_beam_search.py:ConstrainedBeamSearchTest:test_constrained_beam_hypotheses', 'tests/generation/test_generation_beam_search.py:ConstrainedBeamSearchTest:test_constrained_beam_scorer_finalize', 'tests/generation/test_generation_beam_search.py:BeamSearchTest:test_beam_hypotheses', 'tests/... | ['tests/generation/test_generation_beam_search.py:BeamSearchTest:test_beam_scorer_update'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/generation/test_generation_beam_search.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/generation_beam_search.py->module->class_definition:BeamSearchScorer->function_definition:process"] |
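A hedged sketch of retrieving `beam_indices` from beam search outputs; `model` and `input_ids` are assumed:

```python
out = model.generate(
    input_ids,
    num_beams=4,
    max_length=32,
    return_dict_in_generate=True,
    output_scores=True,  # needed for scores/beam_indices to be populated
)
print(out.sequences.shape, out.beam_indices.shape)
```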
huggingface/transformers | 19,073 | huggingface__transformers-19073 | ['19057'] | 5e636eee4af48ccd03b4d9c1a1e6f7a1b92a643f | diff --git a/src/transformers/tokenization_utils_base.py b/src/transformers/tokenization_utils_base.py
--- a/src/transformers/tokenization_utils_base.py
+++ b/src/transformers/tokenization_utils_base.py
@@ -1726,6 +1726,8 @@ def from_pretrained(cls, pretrained_model_name_or_path: Union[str, os.PathLike],
for f... | diff --git a/tests/test_tokenization_common.py b/tests/test_tokenization_common.py
--- a/tests/test_tokenization_common.py
+++ b/tests/test_tokenization_common.py
@@ -31,6 +31,7 @@
from typing import TYPE_CHECKING, Any, Dict, List, Tuple, Union
from huggingface_hub import HfFolder, delete_repo, set_access_token
+fr... | Loading tokenizer using from_pretrained seems to be broken for v4
### System Info
According to the following `FutureWarning`, loading a tokenizer using a file path should still work in v4:
```
FutureWarning: Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated and won't be possible anymor... | cc @sgugger
Indeed. I can reproduce, a fix is coming. This was caused by #18438 and this particular use case slipped through the cracks since it's untested (probably because it's deprecated behavior). | 2022-09-16 17:48:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_tokenization_common.py:TrieTest:test_trie_final', 'tests/test_tokenization_common.py:TrieTest:test_trie_skip', 'tests/test_tokenization_common.py:TrieTest:test_trie_suffix_tokens', 'tests/test_tokenization_common.py:TrieTest:test_trie_split', 'tests/test_tokenization_common.py:TrieTest:test_cut_text_harden... | ['tests/test_tokenization_common.py:TokenizerUtilTester:test_legacy_load_from_one_file'] | null | pytest /testbed/tests/test_tokenization_common.py -v --tb=short --json-report --json-report-file=test_output.json | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/tokenization_utils_base.py->module->class_definition:PreTrainedTokenizerBase->function_definition:from_pretrained"] |
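The deprecated single-file load the fix restores, as a sketch (the path is hypothetical):

```python
from transformers import AlbertTokenizer

tok = AlbertTokenizer.from_pretrained("/path/to/spiece.model")  # emits the FutureWarning, but works again
```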
huggingface/transformers | 19,219 | huggingface__transformers-19219 | ['19116'] | 2d956958252617a178a68a06582c99b133fe7d3d | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -281,7 +281,9 @@ def parse_json_file(self, json_file: str, allow_extra_keys: bool = False) -> Tup
- the dataclass instances in the same ord... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -13,12 +13,17 @@
# limitations under the License.
import argparse
+import json
+import os
+import tempfile
import unittest
from argparse import Namespac... | HfArgumentParser support yaml parser
### Feature request
HfArgumentParser currently supports parsing dicts and json files; would it be possible to also support parsing the widely used yaml files?
### Motivation
I think using yaml is a good way to record arguments.
### Your contribution
Not yet.
| cc @sgugger
If you want to open a PR, please go ahead!
You can just use
`parser.parse_dict(yaml.safe_load(f))`
which could all go in a `parse_yaml_file` method :-) Doing this, along with refactoring `parse_json_file` to use `parse_dict` and adding small tests, would be a nice addition that shouldn't be too ... | 2022-09-27 18:49:45+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_list', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_dict_extra_key', 'te... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_json', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_yaml'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_hf_argparser.py --junitxml=test-results.xml | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:parse_yaml_file", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:parse_json_file"] |
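A sketch of the method this row adds, using the `parse_yaml_file` name from the hints; `train.yaml` is a hypothetical file containing entries like `lr: 0.001` and `epochs: 5`:

```python
from dataclasses import dataclass
from transformers import HfArgumentParser

@dataclass
class TrainArgs:
    lr: float = 1e-4
    epochs: int = 3

parser = HfArgumentParser(TrainArgs)
(args,) = parser.parse_yaml_file("train.yaml")  # assumption: file exists next to the script
```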
huggingface/transformers | 19,590 | huggingface__transformers-19590 | ['19528'] | 3d320c78c32334f66d72d57ff6322d9e3a7dc00b | diff --git a/src/transformers/models/bert/tokenization_bert_tf.py b/src/transformers/models/bert/tokenization_bert_tf.py
--- a/src/transformers/models/bert/tokenization_bert_tf.py
+++ b/src/transformers/models/bert/tokenization_bert_tf.py
@@ -3,6 +3,7 @@
import tensorflow as tf
+from tensorflow_text import BertTok... | diff --git a/tests/models/bert/test_tokenization_bert_tf.py b/tests/models/bert/test_tokenization_bert_tf.py
--- a/tests/models/bert/test_tokenization_bert_tf.py
+++ b/tests/models/bert/test_tokenization_bert_tf.py
@@ -40,8 +40,15 @@ class BertTokenizationTest(unittest.TestCase):
def setUp(self):
super().... | Allow TFBertTokenizer to use Tensorflow text BertTokenizer (and not FastBertTokenizer) to make it servable by TF Serving
### Feature request
I would like to serve a bundle of Tokenizer + Model on TF Serving, but can't do it because TF Serving still has no support for the TF FastBertTokenizer and FastBertNormalize oper... | null | 2022-10-13 18:00:22+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | [] | ['tests/models/bert/test_tokenization_bert_tf.py:BertTokenizationTest:test_output_equivalence'] | null | pytest -v --tb=short --show-capture=no --junitxml=test-results.xml /testbed/tests/models/bert/test_tokenization_bert_tf.py | Feature | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBertTokenizer->function_definition:unpaired_tokenize", "src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBertTokenizer", "src/transformers/models/bert/tokenization_bert_tf.py->module->class_definition:TFBe... |
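A sketch of the opt-out this row adds; the `use_fast_bert_tokenizer` flag name is my assumption about the merged API:

```python
from transformers import TFBertTokenizer

# False selects tensorflow_text's BertTokenizer, whose ops TF Serving can load,
# instead of the unsupported FastBertTokenizer/FastBertNormalize ops
tf_tokenizer = TFBertTokenizer.from_pretrained("bert-base-uncased", use_fast_bert_tokenizer=False)
```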
huggingface/transformers | 19,657 | huggingface__transformers-19657 | ['19289'] | d2e5b19b821f0cf43c7cf4f01be5faa1cb20aa64 | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -836,13 +836,13 @@ def transform(self, X):
"""
Scikit / Keras interface to transformers' pipelines. This method will forward to __ca... | diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -423,6 +423,56 @@ def test_unbatch_attentions_hidden_states(self):
self.assertEqual(len(outputs), 20)
+class Pipe... | Call to pipeline.predict() fails
### System Info
- `transformers` version: 4.21.1
- Platform: macOS-12.5.1-arm64-arm-64bit
- Python version: 3.9.12
- Huggingface_hub version: 0.2.1
- PyTorch version (GPU?): 1.12.1 (False)
- Tensorflow version (GPU?): 2.9.2 (False)
- Flax version (CPU?/GPU?/TPU?): not installed... | null | 2022-10-16 15:12:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_unbatch_attentions_hidden_states', 'tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_check_task', 'tests/pipelines/test_pipelines_common.py:PipelinePadTest:test_pipeline_padding', 'tests/pipelines/test_pipelines_common.py:CustomPipelineT... | ['tests/pipelines/test_pipelines_common.py:PipelineScikitCompatTest:test_pipeline_predict_pt', 'tests/pipelines/test_pipelines_common.py:PipelineScikitCompatTest:test_pipeline_transform_pt'] | null | pytest -v --tb=short --show-capture=no --json-report-file=test_output.json /testbed/tests/pipelines/test_pipelines_common.py | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:transform", "src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:predict"] |
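The scikit-learn-style aliases the fix repairs, as a small runnable sketch:

```python
from transformers import pipeline

pipe = pipeline("text-classification")
print(pipe.predict(["This movie was great!"]))    # alias for __call__
print(pipe.transform(["This movie was great!"]))  # likewise
```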
huggingface/transformers | 20,136 | huggingface__transformers-20136 | ['18748'] | fda125638f53febc059cb67f9d7abce058a8f44f | diff --git a/docs/source/en/model_doc/owlvit.mdx b/docs/source/en/model_doc/owlvit.mdx
--- a/docs/source/en/model_doc/owlvit.mdx
+++ b/docs/source/en/model_doc/owlvit.mdx
@@ -80,6 +80,8 @@ This model was contributed by [adirik](https://huggingface.co/adirik). The origi
[[autodoc]] OwlViTFeatureExtractor
- __cal... | diff --git a/tests/models/owlvit/test_modeling_owlvit.py b/tests/models/owlvit/test_modeling_owlvit.py
--- a/tests/models/owlvit/test_modeling_owlvit.py
+++ b/tests/models/owlvit/test_modeling_owlvit.py
@@ -19,7 +19,6 @@
import os
import tempfile
import unittest
-from typing import Dict, List, Tuple
import numpy ... | Add image-guided object detection support to OWL-ViT
Hi,
The [OWL-ViT](https://huggingface.co/docs/transformers/model_doc/owlvit) model is an open-vocabulary model that can be used for both zero-shot text-guided (supported) and one-shot image-guided (not supported) object detection.
It'd be great to add support ... | I think it would be a great addition, especially as it doesn't seem to be too much work to add. I'm guessing, from the processor and your description, that the call signature would look something like this:
`def __call__(self, text=None, query_image=None, images=None, padding="max_length", return_tensors="np", **kwargs):... | 2022-11-09 11:18:55+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/owlvit/test_modeling_owlvit.py:OwlViTVisionModelTest:test_correct_missing_keys', 'tests/models/owlvit/test_modeling_owlvit.py:OwlViTTextModelTest:test_problem_types', 'tests/models/owlvit/test_modeling_owlvit.py:OwlViTModelTest:test_model_main_input_name', 'tests/models/owlvit/test_modeling_owlvit.py:Owl... | ['tests/models/owlvit/test_processor_owlvit.py:OwlViTProcessorTest:test_processor_case2'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test-results.json --report-log=pytest-log.jsonl /testbed/tests/models/owlvit/test_modeling_owlvit.py /testbed/tests/models/owlvit/test_processor_owlvit.py | Feature | false | false | false | true | 31 | 6 | 37 | false | false | ["src/transformers/models/owlvit/processing_owlvit.py->module->class_definition:OwlViTProcessor->function_definition:post_process_image_guided_detection", "src/transformers/models/owlvit/modeling_owlvit.py->module->class_definition:OwlViTModel->function_definition:forward", "src/transformers/models/owlvit/processing_ow... |
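A hedged sketch of the new image-guided path, matching the `post_process_image_guided_detection` node listed above; `image` and `query_image` are assumed PIL images:

```python
import torch
from transformers import OwlViTForObjectDetection, OwlViTProcessor

processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

inputs = processor(images=image, query_images=query_image, return_tensors="pt")
with torch.no_grad():
    outputs = model.image_guided_detection(**inputs)
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_image_guided_detection(outputs=outputs, target_sizes=target_sizes)
```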
huggingface/transformers | 21,345 | huggingface__transformers-21345 | ['21344'] | 92ce53aab859012f7714dae6d6fce7a7d701e75f | diff --git a/src/transformers/activations.py b/src/transformers/activations.py
--- a/src/transformers/activations.py
+++ b/src/transformers/activations.py
@@ -25,6 +25,27 @@
logger = logging.get_logger(__name__)
+class PytorchGELUTanh(nn.Module):
+ """
+ A fast C implementation of the tanh approximation of t... | diff --git a/tests/utils/test_activations.py b/tests/utils/test_activations.py
--- a/tests/utils/test_activations.py
+++ b/tests/utils/test_activations.py
@@ -51,6 +51,7 @@ def test_get_activation(self):
get_activation("gelu_fast")
get_activation("gelu_new")
get_activation("gelu_python")
+ ... | Add the pytorch implementation of the OpenAI GeLU approximation
### Feature request
Add support for the pytorch implementation of OpenAI's approximation of the GeLU function, added in pytorch 1.12. This implementation is equivalent to `gelu_new` or `gelu_fast` but much faster. It can come as a separate activation fu... | null | 2023-01-27 23:00:12+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_activations.py:TestActivations:test_gelu_versions', 'tests/utils/test_activations.py:TestActivations:test_activations_are_distinct_objects', 'tests/utils/test_activations.py:TestActivations:test_gelu_10'] | ['tests/utils/test_activations.py:TestActivations:test_get_activation'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_activations.py --junitxml=test-results.xml | Feature | false | false | false | true | 1 | 2 | 3 | false | false | ["src/transformers/activations.py->module->class_definition:PytorchGELUTanh", "src/transformers/activations.py->module->class_definition:PytorchGELUTanh->function_definition:forward", "src/transformers/activations.py->module->class_definition:PytorchGELUTanh->function_definition:__init__"] |
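As an aside on the record above: the class name `PytorchGELUTanh` comes straight from the patch; a minimal sketch of what such a wrapper looks like (illustrative body, not the merged code, assuming PyTorch >= 1.12):

```python
import torch
from torch import Tensor, nn


class PytorchGELUTanh(nn.Module):
    """Fast C-level tanh approximation of GELU, available since PyTorch 1.12."""

    def forward(self, input: Tensor) -> Tensor:
        # matches the slower Python-level `gelu_new` / `gelu_fast` approximations
        return nn.functional.gelu(input, approximate="tanh")
```

Exposing it through the string-keyed `get_activation` map (which the test patch exercises) is then a one-line registration.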
huggingface/transformers | 21,768 | huggingface__transformers-21768 | ['21689'] | 99ba36e72fe7d1528e2c6572373a425967ee544f | diff --git a/src/transformers/optimization.py b/src/transformers/optimization.py
--- a/src/transformers/optimization.py
+++ b/src/transformers/optimization.py
@@ -16,6 +16,7 @@
import math
import warnings
+from functools import partial
from typing import Callable, Iterable, Optional, Tuple, Union
import torch
@... | diff --git a/tests/optimization/test_optimization.py b/tests/optimization/test_optimization.py
--- a/tests/optimization/test_optimization.py
+++ b/tests/optimization/test_optimization.py
@@ -166,5 +166,21 @@ def test_schedulers(self):
)
scheduler = scheduler_func(self.optimizer, **kwargs)
+ ... | Make schedulers picklable
### Feature request
Change lambda functions passed to `LambdaLR` in `get_constant_schedule`, `get_constant_schedule_with_warmup`, `get_linear_schedule_with_warmup`, `get_cosine_schedule_with_warmup`, `get_cosine_with_hard_restarts_schedule_with_warmup` and `get_polynomial_decay_schedule_with_... | Thanks for explaining your issue in depth, and happy to review a PR! | 2023-02-23 19:13:53+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/optimization/test_optimization.py:OptimizationTest:test_adam_w', 'tests/optimization/test_optimization.py:OptimizationTest:test_adafactor'] | ['tests/optimization/test_optimization.py:ScheduleInitTest:test_schedulers'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/optimization/test_optimization.py --junitxml=test-results.xml | Feature | false | true | false | false | 19 | 0 | 19 | false | false | ["src/transformers/optimization.py->module->function_definition:get_cosine_schedule_with_warmup->function_definition:lr_lambda", "src/transformers/optimization.py->module->function_definition:get_cosine_with_hard_restarts_schedule_with_warmup", "src/transformers/optimization.py->module->function_definition:get_constant... |
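Context for the record above: the unpicklability comes from closures passed to `LambdaLR`. A minimal sketch of the `functools.partial` approach the issue proposes, with an illustrative schedule function:

```python
import pickle
from functools import partial

import torch
from torch.optim.lr_scheduler import LambdaLR


def _linear_warmup_lambda(current_step: int, *, num_warmup_steps: int) -> float:
    # a module-level function is pickled by reference, unlike a lambda/closure
    return min(1.0, current_step / max(1, num_warmup_steps))


model = torch.nn.Linear(2, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scheduler = LambdaLR(optimizer, partial(_linear_warmup_lambda, num_warmup_steps=100))

pickle.dumps(scheduler)  # succeeds; a lambda here raises PicklingError
```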
huggingface/transformers | 21,969 | huggingface__transformers-21969 | ['21915'] | 0bb17295f04e565c94a79960ff7f7b6cd03acbfc | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -131,7 +131,8 @@ def to_pil_image(
The image to convert to the `PIL.Image` format.
do_rescale (`bool`, *optional*):
... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -96,6 +96,11 @@ def test_to_pil_image_from_float(self, name, image_shape, dtype):
# make sure image is correctly rescaled
self.assertTrue(np.abs(np.... | Mask2Former ImageProcessor produces different results on Mac vs Windows.
### System Info
>>> transformers.__version__
'4.27.0.dev0'
>>> Python 3.10.6
Windows vs Mac
### Who can help?
@amyeroberts
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ... | Here is the image I used.

Also cc @alaradirik
Thanks for raising this issue @nickponline and for all the details!
Could you give details on how you're reading in the image e.g. through torchvision and th... | 2023-03-06 14:38:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_torch', 'tests/test_image_transforms.py:ImageTransformsTester:test_center_to_corners_format', 'tests/test_image_transforms.py:ImageTransformsTester:test_id_to_rgb', 'tests/test_image_transforms.py:ImageTransformsTester:test_normalize', 'tests... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_1_numpy_float_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_0_numpy_float_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_float_3_numpy_... | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image"] |
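The record above hinges on when `to_pil_image` decides to rescale. A sketch of the value-range heuristic involved, written as a hypothetical standalone helper:

```python
import numpy as np


def needs_rescale_for_pil(image: np.ndarray) -> bool:
    # integer arrays are assumed to already be in [0, 255]
    if np.issubdtype(image.dtype, np.integer):
        return False
    # float arrays confined to [0, 1] were likely normalized and need * 255
    return bool(image.min() >= 0 and image.max() <= 1)
```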
huggingface/transformers | 22,158 | huggingface__transformers-22158 | ['22147'] | 3b22bfbc6afbf7aa65ce0f255e3c75a0dd7524d3 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -156,12 +156,20 @@ def to_pil_image(
# If there is a single channel, we squeeze it, as otherwise PIL can't handle it.
image = np.squ... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -101,6 +101,27 @@ def test_to_pil_image_from_float(self, name, image_shape, dtype):
with self.assertRaises(ValueError):
to_pil_image(image)
+ ... | OneFormerProcessor and MaskFormerImageProcessor will cause errors if segmentation_maps only have elements 0 and 1
### System Info
transformers-4.26.0 does not have this bug,
but transformers-4.27.0.dev0 does.
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [X] My own modified scripts... | cc @amyeroberts @alaradirik | 2023-03-14 14:05:52+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_resize', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_uint_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_... | ['tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_from_mask'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image"] |
huggingface/transformers | 22,190 | huggingface__transformers-22190 | ['22189'] | 737681477c038d9ed060c4df03b0ebb5b50b69d0 | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -769,8 +769,8 @@ def __init__(
self.modelcard = modelcard
self.framework = framework
- if self.framework == "pt" and device... | diff --git a/tests/pipelines/test_pipelines_common.py b/tests/pipelines/test_pipelines_common.py
--- a/tests/pipelines/test_pipelines_common.py
+++ b/tests/pipelines/test_pipelines_common.py
@@ -484,6 +484,14 @@ def add(number, extra=0):
outputs = list(dataset)
self.assertEqual(outputs, [[{"id": 2}, {... | transformers-cli serve not working
### System Info
System info
``` bash
- `transformers` version: 4.27.0
- Platform: macOS-12.3.1-arm64-arm-64bit
- Python version: 3.8.12
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (... | cc @Narsil | 2023-03-15 18:04:01+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_unbatch_attentions_hidden_states', 'tests/pipelines/test_pipelines_common.py:CommonPipelineTest:test_check_task', 'tests/pipelines/test_pipelines_common.py:PipelinePadTest:test_pipeline_padding', 'tests/pipelines/test_pipelines_common.py:CommonPipelineT... | ['tests/pipelines/test_pipelines_common.py:PipelineUtilsTest:test_pipeline_negative_device'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_results.json /testbed/tests/pipelines/test_pipelines_common.py | Bug Fix | false | false | true | false | 0 | 1 | 1 | false | true | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:__init__"] |
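For the record above: the diff replaces an `is not None` check on `device` with a sign check. A sketch of the resulting normalization (illustrative helper, not the pipeline's exact code):

```python
import torch


def resolve_torch_device(device) -> torch.device:
    if isinstance(device, torch.device):
        return device
    if isinstance(device, str):
        return torch.device(device)
    # any negative index means CPU; torch.device(-1) itself raises,
    # which is what `transformers-cli serve` tripped over
    return torch.device("cpu") if device < 0 else torch.device(f"cuda:{device}")
```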
huggingface/transformers | 22,458 | huggingface__transformers-22458 | ['22392'] | cd73b9a8c140fb74cd93187f5c3d380cfc308023 | diff --git a/src/transformers/image_transforms.py b/src/transformers/image_transforms.py
--- a/src/transformers/image_transforms.py
+++ b/src/transformers/image_transforms.py
@@ -118,6 +118,33 @@ def rescale(
return rescaled_image
+def _rescale_for_pil_conversion(image):
+ """
+ Detects whether or not th... | diff --git a/tests/test_image_transforms.py b/tests/test_image_transforms.py
--- a/tests/test_image_transforms.py
+++ b/tests/test_image_transforms.py
@@ -249,6 +249,14 @@ def test_resize(self):
# PIL size is in (width, height) order
self.assertEqual(resized_image.size, (40, 30))
+ # Check an... | Inconsistent Normalization for ViTImageProcessor when `do_resize` is False
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-5.4.0-121-generic-x86_64-with-glibc2.31
- Python version: 3.10.9
- Huggingface_hub version: 0.13.2
- PyTorch version (GPU?): 2.0.0+cu117 (True)
- Tensorflow version (GPU?):... | cc @amyeroberts
Hi @Interpause, thanks for raising this issue!
Indeed, this is a funny behaviour. This is happening because of the use of the PIL library to resize images and the rescaling behaviour that happens in `ToTensor`.
To explain in more detail, I'll refer to the input `im` and `im_pil` and `to_tens(im... | 2023-03-29 20:03:48+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/test_image_transforms.py:ImageTransformsTester:test_get_resize_output_image_size', 'tests/test_image_transforms.py:ImageTransformsTester:test_to_pil_image_5_numpy_uint_channels_first', 'tests/test_image_transforms.py:ImageTransformsTester:test_id_to_rgb', 'tests/test_image_transforms.py:ImageTransformsTester:te... | ['tests/test_image_transforms.py:ImageTransformsTester:test_resize'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_image_transforms.py | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/image_transforms.py->module->function_definition:to_pil_image", "src/transformers/image_transforms.py->module->function_definition:resize", "src/transformers/image_transforms.py->module->function_definition:_rescale_for_pil_conversion"] |
huggingface/transformers | 22,649 | huggingface__transformers-22649 | ['21685'] | ee8e80a060d65ab349743ffcb5842365eb0e5606 | diff --git a/src/transformers/models/opt/modeling_opt.py b/src/transformers/models/opt/modeling_opt.py
--- a/src/transformers/models/opt/modeling_opt.py
+++ b/src/transformers/models/opt/modeling_opt.py
@@ -631,19 +631,21 @@ def forward(
else:
raise ValueError("You have to specify either decoder_i... | diff --git a/tests/models/opt/test_modeling_opt.py b/tests/models/opt/test_modeling_opt.py
--- a/tests/models/opt/test_modeling_opt.py
+++ b/tests/models/opt/test_modeling_opt.py
@@ -182,6 +182,19 @@ def create_and_check_decoder_model_past_large_inputs(self, config, inputs_dict):
# test that outputs are equal ... | `modeling_opt.py` if `previous_key_values` given and `attention_mask==None` the model throws an error.
### System Info
- `transformers` version: 4.26.1
- Platform: Linux-4.18.0-147.el8.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.16
- Huggingface_hub version: 0.12.1
- PyTorch version (GPU?): 1.13.1 (False)
... | Hey! Thanks for submitting this issue!
Passing attention masks solves the problem, and usually we expect attention masks to be passed when you are using `past_key_values` (for example in generate). It is debatable whether the default behaviour should rely on the past_key_values.
Do you have a specific usage in mind? ... | 2023-04-07 09:02:52+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/opt/test_modeling_opt.py:OPTModelTest:test_inputs_embeds', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_model_common_attributes', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_training', 'tests/models/opt/test_modeling_opt.py:OPTModelTest:test_forward_signature', 'tests/models/opt/... | ['tests/models/opt/test_modeling_opt.py:OPTModelTest:test_decoder_model_past_with_large_inputs'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/opt/test_modeling_opt.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/opt/modeling_opt.py->module->class_definition:OPTDecoder->function_definition:forward"] |
huggingface/transformers | 22,920 | huggingface__transformers-22920 | ['22904'] | 1e1cb6f8e5af1c592ed7d6ca035b0e07297e52b8 | diff --git a/src/transformers/models/sam/image_processing_sam.py b/src/transformers/models/sam/image_processing_sam.py
--- a/src/transformers/models/sam/image_processing_sam.py
+++ b/src/transformers/models/sam/image_processing_sam.py
@@ -378,12 +378,13 @@ def post_process_masks(
Remove padding and upscale mas... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -17,8 +17,8 @@
import numpy as np
-from transformers.testing_utils import require_torchvision, require_vision
-from transformers.... | SAM: Notebook example not working
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.2-arm64-arm-64bit
- Python version: 3.10.6
- Huggingface_hub version: 0.13.4
- Safetensors version: 0.3.0
- PyTorch version (GPU?): 1.13.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax v... | I have a similar issue when I run
```
img_url = "https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")
input_points = [[[450, 600]]] # 2D location of a window in the image
inputs = processor(raw_image, input_p... | 2023-04-21 13:38:26+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor', 'tests/models/sam/test_processor_sam.py:SamProcessorTest:test_save_load_pretrained_additional_features'] | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_post_process_masks'] | null | pytest -v --tb=short --show-capture=no --junitxml=test-results.xml /testbed/tests/models/sam/test_processor_sam.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:post_process_masks"] |
huggingface/transformers | 23,126 | huggingface__transformers-23126 | ['20249'] | b61d5b47f640308068139561f673765b2af39874 | diff --git a/src/transformers/hf_argparser.py b/src/transformers/hf_argparser.py
--- a/src/transformers/hf_argparser.py
+++ b/src/transformers/hf_argparser.py
@@ -15,6 +15,7 @@
import dataclasses
import json
import sys
+import types
from argparse import ArgumentDefaultsHelpFormatter, ArgumentParser, ArgumentTypeErr... | diff --git a/tests/utils/test_hf_argparser.py b/tests/utils/test_hf_argparser.py
--- a/tests/utils/test_hf_argparser.py
+++ b/tests/utils/test_hf_argparser.py
@@ -15,6 +15,7 @@
import argparse
import json
import os
+import sys
import tempfile
import unittest
from argparse import Namespace
@@ -36,6 +37,10 @@
... | Support X | Y syntax on HfArgumentParser
### Feature request
[PEP-604](https://peps.python.org/pep-0604/) introduced the X | Y syntax in Python 3.10, which is equivalent to Union[X, Y]. This syntax is not yet supported by HfArgumentParser.
### Motivation
With this syntax I would like to use something lik... | Looks like adding support while not breaking previous Python version will be tricky, as `from types import UnionType` only work for Python 3.10 and above. We can look at a PR if you want to try a contribution, but I don't think we will add this ourselves until Python 3.10 is more widely supported (PyTorch and TensorFlo... | 2023-05-03 10:49:29+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_basic', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_string_literal_annotation', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_literal', 'tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_parse_dict_extra_key', ... | ['tests/utils/test_hf_argparser.py:HfArgumentParserTest:test_with_optional'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_hf_argparser.py -rA --json-report --json-report-file=test_output.json | Feature | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_parse_dataclass_field", "src/transformers/hf_argparser.py->module->class_definition:HfArgumentParser->function_definition:_add_dataclass_arguments"] |
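The crux of the record above is that `X | Y` produces `types.UnionType`, not `typing.Union`. A sketch of a version-safe check (hypothetical helper name):

```python
import sys
import types
import typing


def is_optional_field(field_type) -> bool:
    origin = typing.get_origin(field_type)
    is_union = origin is typing.Union
    if sys.version_info >= (3, 10):
        # `int | None` has origin `types.UnionType`, not `typing.Union`
        is_union = is_union or origin is types.UnionType
    return is_union and type(None) in typing.get_args(field_type)
```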
huggingface/transformers | 23,141 | huggingface__transformers-23141 | ['23140'] | 78b7debf56efb907c6af767882162050d4fbb294 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -1562,6 +1562,7 @@ def generate(
generation_config.return_timestamps ... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -414,6 +414,21 @@ def test_generate_fp16(self):
model.generate(input_features)
model.gen... | Whisper generation support for passing acronym to language arg
### System Info
- `transformers` version: 4.29.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow version (GPU?... | null | 2023-05-03 22:47:37+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_modeling_whisper.py:WhisperEncoderModelTest:test_sample_generate', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_headmasking', 'tests/models/whisper/test_modeling_whisper.... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_language'] | null | pytest -v --tb=short --show-capture=no --json-report-file=test-results.json /testbed/tests/models/whisper/test_modeling_whisper.py | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForConditionalGeneration->function_definition:generate"] |
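A sketch of the lookup order the record above asks Whisper's `generate` to accept — language code, full name, or token form. The mapping below is a tiny illustrative subset, not the real table:

```python
LANGUAGES = {"en": "english", "fr": "french", "es": "spanish"}  # illustrative subset
TO_LANGUAGE_CODE = {name: code for code, name in LANGUAGES.items()}


def resolve_language_token(language: str) -> str:
    lang = language.lower()
    if lang in TO_LANGUAGE_CODE:        # full name, e.g. "english"
        code = TO_LANGUAGE_CODE[lang]
    elif lang in LANGUAGES:             # acronym, e.g. "en"
        code = lang
    elif lang.startswith("<|") and lang.endswith("|>") and lang[2:-2] in LANGUAGES:
        code = lang[2:-2]               # token form, e.g. "<|en|>"
    else:
        raise ValueError(f"Unsupported language: {language}")
    return f"<|{code}|>"
```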
huggingface/transformers | 23,223 | huggingface__transformers-23223 | ['22175'] | 9088fcae82f4e23021e600966626188ce6fbe6df | diff --git a/src/transformers/feature_extraction_sequence_utils.py b/src/transformers/feature_extraction_sequence_utils.py
--- a/src/transformers/feature_extraction_sequence_utils.py
+++ b/src/transformers/feature_extraction_sequence_utils.py
@@ -140,7 +140,7 @@ def pad(
return_attention_mask if return_att... | diff --git a/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py b/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
--- a/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
+++ b/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py
@@ -123,6 +123,14 @@ def test_call(self):
for enc_se... | wav2vec processor batching logic is too restrictive
### System Info
transformers version at the time of writing is `4.26.1`
### Who can help?
_No response_
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (suc... | cc @sanchit-gandhi @ArthurZucker
Hey @LWprogramming! Thanks for the comprehensive issue description - I agree that the logic for checking if the input `is_batched` is broken when the input is a batched numpy array, e.g. the feature extractor **should** set `is_batched=True` when the numpy array is 2-d, but currently d... | 2023-05-09 03:36:11+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_maximum_encoding_length_pair_input', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_training_new_tokenizer', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2CTCTokenizerTest:test_right_an... | ['tests/models/wav2vec2/test_feature_extraction_wav2vec2.py:Wav2Vec2FeatureExtractionTest:test_call', 'tests/models/wav2vec2/test_tokenization_wav2vec2.py:Wav2Vec2TokenizerTest:test_call'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/wav2vec2/test_feature_extraction_wav2vec2.py /testbed/tests/models/wav2vec2/test_tokenization_wav2vec2.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py->module->class_definition:Wav2Vec2FeatureExtractor->function_definition:__call__", "src/transformers/models/wav2vec2/tokenization_wav2vec2.py->module->class_definition:Wav2Vec2Tokenizer->function_definition:__call__", "src/transformers/feature_extraction... |
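A sketch of the `is_batched` test discussed above, extended to accept 2-D numpy arrays as batches (illustrative, following the maintainer's comment):

```python
import numpy as np


def is_batched(raw_speech) -> bool:
    # a 2-D numpy array is a batch of mono waveforms...
    if isinstance(raw_speech, np.ndarray) and len(raw_speech.shape) > 1:
        return True
    # ...and so is a non-empty list/tuple whose elements are arrays or lists
    return (
        isinstance(raw_speech, (list, tuple))
        and len(raw_speech) > 0
        and isinstance(raw_speech[0], (np.ndarray, tuple, list))
    )
```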
huggingface/transformers | 23,796 | huggingface__transformers-23796 | ['23764'] | de9255de27abfcae4a1f816b904915f0b1e23cd9 | diff --git a/src/transformers/models/whisper/tokenization_whisper.py b/src/transformers/models/whisper/tokenization_whisper.py
--- a/src/transformers/models/whisper/tokenization_whisper.py
+++ b/src/transformers/models/whisper/tokenization_whisper.py
@@ -721,7 +721,7 @@ def _decode_asr(self, model_outputs, *, return_ti... | diff --git a/tests/models/whisper/test_tokenization_whisper.py b/tests/models/whisper/test_tokenization_whisper.py
--- a/tests/models/whisper/test_tokenization_whisper.py
+++ b/tests/models/whisper/test_tokenization_whisper.py
@@ -213,6 +213,16 @@ def test_skip_special_tokens_skips_prompt_ids(self):
rust_t... | Whisper `get_prompt_ids` throws error when used with a 'FastTokenizer'
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: macOS-13.0-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.12.0
- Safetensors version: 0.2.8
- PyTorch version (GPU?): 1.13.1 (False)
- Tensorflow versi... | Related issue #17391 mentions that `add_prefix_space` can only be specified for fast tokenizers upon init, so it seems like just the manual `" " + text` replacement for this param would be the appropriate fix.
Hey! Thanks for reporting. Indeed I think you can easily fix this for a single model (in the fast tokenizer yo... | 2023-05-26 14:20:42+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_padding_different_model_input_name', 'tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_added_token_serializable', 'tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_sentencepiece_tokenize_a... | ['tests/models/whisper/test_tokenization_whisper.py:WhisperTokenizerTest:test_fast_tokenizer_get_prompt_ids'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/whisper/test_tokenization_whisper.py --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/models/whisper/tokenization_whisper.py->module->class_definition:WhisperTokenizer->function_definition:get_prompt_ids", "src/transformers/models/whisper/tokenization_whisper_fast.py->module->class_definition:WhisperTokenizerFast->function_definition:get_prompt_ids"] |
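A sketch of the workaround named in the hints above: since fast tokenizers only accept `add_prefix_space` at init, prepend the space manually. Hypothetical helper; the real `get_prompt_ids` also prepends the `<|startofprev|>` token:

```python
def get_prompt_ids_sketch(tokenizer, text: str):
    # manual " " + text stands in for the slow tokenizer's add_prefix_space
    # behaviour, keeping slow and fast tokenizers consistent
    encoded = tokenizer(" " + text.strip(), add_special_tokens=False)
    return encoded["input_ids"]
```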
huggingface/transformers | 24,238 | huggingface__transformers-24238 | ['24104'] | d7389cd20168052e5fc7abe0cf31cd1eb960fbc9 | diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -288,7 +288,8 @@ def __init__(self, **kwargs):
# Additional attributes with... | diff --git a/tests/generation/test_configuration_utils.py b/tests/generation/test_configuration_utils.py
--- a/tests/generation/test_configuration_utils.py
+++ b/tests/generation/test_configuration_utils.py
@@ -93,6 +93,31 @@ def test_initialize_new_kwargs(self):
generation_config = GenerationConfig.from_model... | Error when overriding generation config: GenerationConfig() got multiple values for keyword argument 'num_beams'
### System Info
- `transformers` version: 4.30.0.dev0 (commit: 4aa13224a5bca560147a29c06b2e0597137caf3e)
- Platform: Linux-5.15.0-1013-oracle-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface... | Hey @Taytay 👋
Thank you for raising this issue! This is indeed a bug, I'll open a PR ASAP | 2023-06-13 11:16:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_save_load_config_1_foo_json', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_update', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_from_model_config', 'tests/generation/test_configuration_utils.p... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_kwarg_init'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/generation/test_configuration_utils.py --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:from_dict", "src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:__init__"] |
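The record above boils down to the same key reaching `__init__` twice. A sketch of the merge-then-construct pattern that avoids it (illustrative, not the merged fix):

```python
def build_config(cls, config_dict: dict, **kwargs):
    # later kwargs win, and each key reaches __init__ exactly once,
    # avoiding "got multiple values for keyword argument 'num_beams'"
    merged = {**config_dict, **kwargs}
    return cls(**merged)
```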
huggingface/transformers | 24,510 | huggingface__transformers-24510 | ['16136'] | b52a03cd3bec92d0ee84f0b1f7edee0d5117200a | diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3477,6 +3477,36 @@ def reverse_bettertransformer(self):
return BetterTransformer.reverse(self)
+ def warn_if_padding_and_no_attention... | diff --git a/tests/models/bert/test_modeling_bert.py b/tests/models/bert/test_modeling_bert.py
--- a/tests/models/bert/test_modeling_bert.py
+++ b/tests/models/bert/test_modeling_bert.py
@@ -18,7 +18,7 @@
from transformers import BertConfig, is_torch_available
from transformers.models.auto import get_values
-from t... | Add warning message if model uses `input_ids` that include padding tokens, but no `attention_mask` is provided.
## **First good issue**
A current error is that a user forwards a batched tensor of `input_ids` that includes a padding token, e.g. ```input_ids = torch.tensor([["hello", "this", "is", "a", "long", "string... | Models usually don't know the right pad token ID as pointed out in the issue (I'm also not sure that community-contributed models or models not as heavily used as BERT have the right pad token ID in their configs), so I'm not in favor of this. Plus, the check of the inputs at each forward pass would slow down performan... | 2023-06-27 01:44:15+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/bert/test_modeling_bert.py:BertModelTest:test_greedy_generate', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_model_common_attributes', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_beam_sample_generate_dict_output', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_warn_if_padding_and_no_attention_mask', 'tests/models/bert/test_modeling_bert.py:BertModelTest:test_for_warning_if_padding_and_no_attention_mask'] | null | pytest -v --tb=short --show-capture=no --json-report --json-report-file=test_results.json /testbed/tests/models/bert/test_modeling_bert.py /testbed/tests/test_modeling_utils.py | Feature | false | true | false | false | 10 | 0 | 10 | false | false | ["src/transformers/models/bridgetower/modeling_bridgetower.py->module->class_definition:BridgeTowerTextModel->function_definition:forward", "src/transformers/modeling_utils.py->module->class_definition:PreTrainedModel->function_definition:warn_if_padding_and_no_attention_mask", "src/transformers/models/bert/modeling_be... |
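The method name `warn_if_padding_and_no_attention_mask` is visible in the patch above; its body here is an illustrative reconstruction of the check:

```python
import logging

import torch

logger = logging.getLogger(__name__)


def warn_if_padding_and_no_attention_mask(input_ids: torch.Tensor, attention_mask, pad_token_id):
    if attention_mask is not None or pad_token_id is None:
        return
    if (input_ids == pad_token_id).any():
        # padding present but unmasked: attention will attend to pad positions
        logger.warning(
            "We strongly recommend passing an `attention_mask` since your "
            "input_ids may contain padding tokens."
        )
```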
huggingface/transformers | 25,358 | huggingface__transformers-25358 | ['25357'] | 080a97119c0dabfd0fb5c3e26a872ad2958e4f77 | diff --git a/src/transformers/utils/generic.py b/src/transformers/utils/generic.py
--- a/src/transformers/utils/generic.py
+++ b/src/transformers/utils/generic.py
@@ -248,6 +248,21 @@ class ModelOutput(OrderedDict):
</Tip>
"""
+ def __init_subclass__(cls) -> None:
+ """Register subclasses as pytre... | diff --git a/tests/utils/test_model_output.py b/tests/utils/test_model_output.py
--- a/tests/utils/test_model_output.py
+++ b/tests/utils/test_model_output.py
@@ -17,6 +17,7 @@
from dataclasses import dataclass
from typing import Optional
+from transformers.testing_utils import require_torch
from transformers.util... | DDP grads not synced when static_graph=True
### System Info
Related: https://github.com/pytorch/pytorch/issues/106690
This behavior seems to be a quirk of `DistributedDataParallel.forward` and how it chooses to handle serializing and deserializing model output types. Even though `ModelOutput` is a subclass of a sup... | null | 2023-08-07 20:09:18+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/utils/test_model_output.py:ModelOutputTester:test_dict_like_properties', 'tests/utils/test_model_output.py:ModelOutputTester:test_index_with_ints_and_slices', 'tests/utils/test_model_output.py:ModelOutputTester:test_set_keys', 'tests/utils/test_model_output.py:ModelOutputTester:test_set_attributes', 'tests/util... | ['tests/utils/test_model_output.py:ModelOutputTester:test_torch_pytree'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/utils/test_model_output.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/utils/generic.py->module->class_definition:ModelOutput", "src/transformers/utils/generic.py->module->class_definition:ModelOutput->function_definition:__init_subclass__"] |
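A sketch of the `__init_subclass__` pytree registration the patch above introduces. This leans on the private `torch.utils._pytree._register_pytree_node` helper, whose name and signature vary across torch versions — treat that as an assumption:

```python
import torch.utils._pytree as pytree  # private API; shape differs across torch versions


class ModelOutputLike(dict):
    def __init_subclass__(cls) -> None:
        # register every subclass so DDP(static_graph=True) can flatten
        # and rebuild outputs instead of collapsing them to plain tuples
        pytree._register_pytree_node(
            cls,
            lambda obj: (list(obj.values()), list(obj.keys())),    # flatten
            lambda values, keys: cls(**dict(zip(keys, values))),   # unflatten
        )
```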
huggingface/transformers | 25,429 | huggingface__transformers-25429 | ['24898'] | d0c1aebea467af499331234e7b285a6bf91ea073 | diff --git a/src/transformers/models/nllb_moe/modeling_nllb_moe.py b/src/transformers/models/nllb_moe/modeling_nllb_moe.py
--- a/src/transformers/models/nllb_moe/modeling_nllb_moe.py
+++ b/src/transformers/models/nllb_moe/modeling_nllb_moe.py
@@ -126,7 +126,6 @@ def create_position_ids_from_input_ids(input_ids, padding... | diff --git a/tests/models/nllb_moe/test_modeling_nllb_moe.py b/tests/models/nllb_moe/test_modeling_nllb_moe.py
--- a/tests/models/nllb_moe/test_modeling_nllb_moe.py
+++ b/tests/models/nllb_moe/test_modeling_nllb_moe.py
@@ -337,6 +337,16 @@ def test_generate_fp16(self):
model.generate(input_ids, attention_mask=... | NLLB MoE router_state referenced before assignment
### System Info
- `transformers` version: 4.29.2
- Platform: Linux-5.15.0-69-generic-x86_64-with-glibc2.17
- Python version: 3.8.17
- Huggingface_hub version: 0.15.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.0.1 (True)
- Tensorflow version... | cc @ArthurZucker
Hey! Thanks for reporting! I remember working on a bug where NLLB-MoE could not be torch-compiled because None values were returned. Will push a fix!
Glad to see that Nllb-MoE is being used 🤗 | 2023-08-10 07:09:39+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/nllb_moe/test_modeling_nllb_moe.py:NllbMoeModelTest:test_hidden_states_output', 'tests/models/nllb_moe/test_modeling_nllb_moe.py:NllbMoeModelTest:test_save_load', 'tests/models/nllb_moe/test_modeling_nllb_moe.py:NllbMoeModelTest:test_group_beam_search_generate_dict_output', 'tests/models/nllb_moe/test_mo... | ['tests/models/nllb_moe/test_modeling_nllb_moe.py:NllbMoeModelTest:test_get_loss'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/nllb_moe/test_modeling_nllb_moe.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 5 | 1 | 6 | false | false | ["src/transformers/models/nllb_moe/modeling_nllb_moe.py->module->class_definition:NllbMoeDecoderLayer->function_definition:forward", "src/transformers/models/nllb_moe/modeling_nllb_moe.py->module->class_definition:NllbMoeForConditionalGeneration->function_definition:forward", "src/transformers/models/nllb_moe/modeling_... |
huggingface/transformers | 25,636 | huggingface__transformers-25636 | ['25634'] | 021887682224daf29264f98c759a45e88c82e244 | diff --git a/src/transformers/models/gpt2/modeling_flax_gpt2.py b/src/transformers/models/gpt2/modeling_flax_gpt2.py
--- a/src/transformers/models/gpt2/modeling_flax_gpt2.py
+++ b/src/transformers/models/gpt2/modeling_flax_gpt2.py
@@ -753,7 +753,9 @@ def prepare_inputs_for_generation(self, input_ids, max_length, attent... | diff --git a/tests/models/gpt2/test_modeling_flax_gpt2.py b/tests/models/gpt2/test_modeling_flax_gpt2.py
--- a/tests/models/gpt2/test_modeling_flax_gpt2.py
+++ b/tests/models/gpt2/test_modeling_flax_gpt2.py
@@ -187,6 +187,26 @@ def check_use_cache_forward_with_attn_mask(self, model_class_name, config, input
di... | Problem caused by boolean attention mask in `pretrained_model.generate` of Flax GPT2
Hi!
I noticed that using a boolean attention mask in `pretrained_model.generate` of Flax GPT2 can cause an error. Here is a short, self-contained code block to showcase the problem; I also prepared a [colab notebook here](htt... | cc @sanchit-gandhi
Hey @liutianlin0121! Thanks for the comprehensive issue description! That's a good spot - we actually convert the `attention_mask` to `"i4"` dtype under-the-hood when we call the Flax module:
https://github.com/huggingface/transformers/blob/450a181d8b963b4e896be4aac701815aa554a6bb/src/transformers/m... | 2023-08-21 17:41:40+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_model_outputs_equivalence', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_beam_search_generate_num_return_sequences', 'tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_no_automatic_init', 'tests/models/gpt2/t... | ['tests/models/gpt2/test_modeling_flax_gpt2.py:FlaxGPT2ModelTest:test_bool_attention_mask_in_generation'] | null | pytest -v --tb=short /testbed/tests/models/gpt2/test_modeling_flax_gpt2.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/gpt2/modeling_flax_gpt2.py->module->class_definition:FlaxGPT2LMHeadModel->function_definition:prepare_inputs_for_generation"] |
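The fix above is a dtype cast. A sketch of why: `lax.dynamic_update_slice` requires the update and operand dtypes to match, so a boolean mask must become `"i4"` first (illustrative function):

```python
import jax.numpy as jnp
from jax import lax


def extend_attention_mask(attention_mask: jnp.ndarray, max_length: int) -> jnp.ndarray:
    batch_size = attention_mask.shape[0]
    extended = jnp.ones((batch_size, max_length), dtype="i4")
    # .astype("i4") is the fix: writing a bool mask here raises a dtype mismatch
    return lax.dynamic_update_slice(extended, attention_mask.astype("i4"), (0, 0))
```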
huggingface/transformers | 25,765 | huggingface__transformers-25765 | ['23331'] | d0354e5e86842b757cec1ecb7de314a1f2421c1e | diff --git a/src/transformers/models/mega/modeling_mega.py b/src/transformers/models/mega/modeling_mega.py
--- a/src/transformers/models/mega/modeling_mega.py
+++ b/src/transformers/models/mega/modeling_mega.py
@@ -1542,6 +1542,9 @@ def forward(
else:
raise ValueError("You have to specify either i... | diff --git a/tests/models/mega/test_modeling_mega.py b/tests/models/mega/test_modeling_mega.py
--- a/tests/models/mega/test_modeling_mega.py
+++ b/tests/models/mega/test_modeling_mega.py
@@ -313,6 +313,34 @@ def create_and_check_decoder_model_past_large_inputs(
# test that outputs are equal for slice
... | RuntimeError: The size of tensor a (16) must match the size of tensor b (16000) at non-singleton dimension 2
### System Info
- `transformers` version: 4.30.0.dev0
- Platform: Linux-5.10.147+-x86_64-with-glibc2.31
- Python version: 3.10.11
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- Py... | Hi @Tylersuard, thanks for reporting this issue.
So that we can best try and help you, could you update the notebook so that it contains the minimal logic to replicate the error and can be run out-of-the-box? As it stands, there's many blocks with comments; references to loading / processing data we don't have acce... | 2023-08-25 17:48:04+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_for_token_classification', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/mega/test_modeling_mega.py:MegaModelTest:test_model_as_decoder', 'tests/models/mega/test_modeling_mega.py:MegaModelTe... | ['tests/models/mega/test_modeling_mega.py:MegaModelTest:test_decoder_model_with_chunking'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/mega/test_modeling_mega.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/mega/modeling_mega.py->module->class_definition:MegaModel->function_definition:forward"] |
huggingface/transformers | 25,793 | huggingface__transformers-25793 | ['25769'] | 686c68f64c9d0181bd54d4d2e2446543c3eca1fa | diff --git a/README.md b/README.md
--- a/README.md
+++ b/README.md
@@ -318,7 +318,7 @@ Current number of checkpoints: ** (from OpenAI) released with the paper [Learning Transferable Visual Models From ... | diff --git a/tests/models/llama/test_tokenization_llama.py b/tests/models/llama/test_tokenization_llama.py
--- a/tests/models/llama/test_tokenization_llama.py
+++ b/tests/models/llama/test_tokenization_llama.py
@@ -555,6 +555,25 @@ def test_some_edge_cases(self):
self.assertNotEqual(sp_tokens, tokens)
... | Local variable 'tokens' referenced before assignment error in tokenization_llama.py
### System Info
- `transformers` version: 4.33.0.dev0
- Platform: macOS-12.4-arm64-arm-64bit
- Python version: 3.9.16
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.3
- Accelerate version: not installed
- Accelerate... | +1
Btw `LlamaTokenizerFast` seems to be fine with an empty string
```py
tokenizer = LlamaTokenizerFast.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
tokenizer.tokenize("") # returns []
```
but `LlamaTokenizer` returns this error:
```
---------------------------------------------------------------------... | 2023-08-28 06:47:18+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_special_tokens_initialization_with_non_empty_additional_special_tokens', 'tests/models/llama/test_tokenization_llama.py:LlamaTokenizationTest:test_offsets_mapping', 'tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_num_special_tokens_to_ad... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_some_edge_cases', 'tests/models/t5/test_tokenization_t5.py:CommonSpmIntegrationTests:test_add_dummy_prefix', 'tests/models/llama/test_tokenization_llama.py:CommonSpmIntegrationTests:test_add_dummy_prefix'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/llama/test_tokenization_llama.py /testbed/tests/models/t5/test_tokenization_t5.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 2 | 1 | 3 | false | false | ["src/transformers/models/t5/tokenization_t5.py->module->class_definition:T5Tokenizer->function_definition:tokenize", "src/transformers/models/t5/tokenization_t5.py->module->class_definition:T5Tokenizer", "src/transformers/models/llama/tokenization_llama.py->module->class_definition:LlamaTokenizer->function_definition:... |
huggingface/transformers | 25,884 | huggingface__transformers-25884 | ['25804'] | 716bb2e3910fd4872064c55b0d8bc3dad754d129 | diff --git a/src/transformers/pipelines/base.py b/src/transformers/pipelines/base.py
--- a/src/transformers/pipelines/base.py
+++ b/src/transformers/pipelines/base.py
@@ -872,6 +872,9 @@ def save_pretrained(self, save_directory: str, safe_serialization: bool = False)
if self.feature_extractor is not None:
... | diff --git a/tests/pipelines/test_pipelines_image_segmentation.py b/tests/pipelines/test_pipelines_image_segmentation.py
--- a/tests/pipelines/test_pipelines_image_segmentation.py
+++ b/tests/pipelines/test_pipelines_image_segmentation.py
@@ -13,6 +13,7 @@
# limitations under the License.
import hashlib
+import tem... | OSError: /home/datascience/huggingface does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co//home/datascience/huggingface/None' for available files.
### System Info
import transformers
transformers.__version__
'4.31.0'
### Who can help?
_No response_
### Inform... | Hey! Thanks for reporting! Yep, I think we should make sure the `image_processor` is also saved! Would you like to open a PR? 🤗 | 2023-08-31 07:29:21+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt_no_panoptic', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_small_model_pt', 'tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_sma... | ['tests/pipelines/test_pipelines_image_segmentation.py:ImageSegmentationPipelineTests:test_save_load'] | null | pytest -v --tb=short /testbed/tests/pipelines/test_pipelines_image_segmentation.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/pipelines/base.py->module->class_definition:Pipeline->function_definition:save_pretrained"] |
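A sketch of the save loop the patch above completes — the image processor was the missing component (hypothetical helper over a pipeline-like object):

```python
def save_pipeline_components(pipe, save_directory: str) -> None:
    # each present component writes its own config; the image processor
    # is what produces the missing preprocessor_config.json
    for component in (pipe.tokenizer, pipe.feature_extractor, pipe.image_processor):
        if component is not None:
            component.save_pretrained(save_directory)
```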
huggingface/transformers | 26,164 | huggingface__transformers-26164 | ['25422'] | 7c63e6fc8c34dcf8b0121eaee776f41ccf3b1137 | diff --git a/src/transformers/models/whisper/modeling_whisper.py b/src/transformers/models/whisper/modeling_whisper.py
--- a/src/transformers/models/whisper/modeling_whisper.py
+++ b/src/transformers/models/whisper/modeling_whisper.py
@@ -1719,13 +1719,22 @@ def generate(
decoder_start_token_id, *text_prom... | diff --git a/tests/models/whisper/test_modeling_whisper.py b/tests/models/whisper/test_modeling_whisper.py
--- a/tests/models/whisper/test_modeling_whisper.py
+++ b/tests/models/whisper/test_modeling_whisper.py
@@ -1075,6 +1075,29 @@ def test_generate_with_prompt_ids_and_forced_decoder_ids(self):
for row in ou... | Whisper Prompting max_new_tokens
### System Info
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.2
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU... | Hi @Helene-Maxcici! Thanks for writing this issue, there’s definitely an out of bounds issue here.
Appreciate you catching the precedence issue that the slicing doesn’t quite match OpenAI’s; we should change that in the fix PR so it's slicing one less than half the max_length instead of one more than half. Ultimate... | 2023-09-14 14:02:14+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_model_is_small', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_contrastive_generate_low_memory', 'tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_group_beam_search_generate', 'tests/models/whisper/test_model... | ['tests/models/whisper/test_modeling_whisper.py:WhisperModelTest:test_generate_with_prompt_ids_max_length'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/whisper/test_modeling_whisper.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/whisper/modeling_whisper.py->module->class_definition:WhisperForConditionalGeneration->function_definition:generate"] |
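A sketch of the prompt truncation discussed above, with the precedence parenthesized the way the hints describe (illustrative):

```python
def truncate_prompt_ids(prompt_ids: list, max_target_positions: int) -> list:
    # keep (half - 1) trailing tokens, not half of (length - 1)
    keep = max_target_positions // 2 - 1
    return prompt_ids[-keep:] if keep > 0 else []
```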
huggingface/transformers | 26,386 | huggingface__transformers-26386 | ['24602'] | 546e7679e7f692ebeefcfc5063cec271a55bae20 | diff --git a/src/transformers/models/esm/modeling_esm.py b/src/transformers/models/esm/modeling_esm.py
--- a/src/transformers/models/esm/modeling_esm.py
+++ b/src/transformers/models/esm/modeling_esm.py
@@ -690,6 +690,7 @@ class EsmPreTrainedModel(PreTrainedModel):
config_class = EsmConfig
base_model_prefix... | diff --git a/tests/models/esm/test_modeling_esm.py b/tests/models/esm/test_modeling_esm.py
--- a/tests/models/esm/test_modeling_esm.py
+++ b/tests/models/esm/test_modeling_esm.py
@@ -151,6 +151,24 @@ def create_and_check_for_token_classification(
result = model(input_ids, attention_mask=input_mask, labels=toke... | Support gradient checkpointing for ESM models
Would you please add the `gradient_checkpointing_enable()` feature for ESM models?
These models are currently the best available pre-trained protein language models for researchers.
Many thanks.
| cc @Rocketknight1
Any updates?
It's on the to-do list, but I'm afraid there are competing priorities at the moment!
Let's open it up for anyone in the community who might want to tackle it :)
Hi @amyeroberts @Rocketknight1 I would like to work on this
@sanjeevk-os Great! Once you have the code ready, open a PR and p... | 2023-09-25 14:22:07+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y build... | ['tests/models/esm/test_modeling_esm.py:EsmModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/esm/test_modeling_esm.py:EsmModelTest:test_tied_weights_keys', 'tests/models/esm/test_modeling_esm.py:EsmModelTest:test_gradient_checkpointing_backward_compatibility', 'tests/models/esm/test_modeling_esm.py:... | ['tests/models/esm/test_modeling_esm.py:EsmModelTest:test_esm_gradient_checkpointing'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/esm/test_modeling_esm.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 2 | 2 | 4 | false | false | ["src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmPreTrainedModel", "src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmModel->function_definition:_set_gradient_checkpointing", "src/transformers/models/esm/modeling_esm.py->module->class_definition:EsmModel", "src/transform... |
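What the ESM record above enables, in miniature: trading compute for memory by recomputing layer activations during backward. Illustrative loop, not ESM's actual forward:

```python
import torch
from torch.utils.checkpoint import checkpoint


def run_layers(layers, hidden_states, gradient_checkpointing=False, training=False):
    for layer in layers:
        if gradient_checkpointing and training:
            # activations are recomputed during backward instead of stored
            hidden_states = checkpoint(layer, hidden_states)
        else:
            hidden_states = layer(hidden_states)
    return hidden_states
```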
huggingface/transformers | 26,568 | huggingface__transformers-26568 | ['26566', '26566'] | bd6205919aad4d3a2300a39a98a642f1cc3a5348 | diff --git a/src/transformers/models/swin2sr/configuration_swin2sr.py b/src/transformers/models/swin2sr/configuration_swin2sr.py
--- a/src/transformers/models/swin2sr/configuration_swin2sr.py
+++ b/src/transformers/models/swin2sr/configuration_swin2sr.py
@@ -44,6 +44,8 @@ class Swin2SRConfig(PretrainedConfig):
... | diff --git a/tests/models/swin2sr/test_modeling_swin2sr.py b/tests/models/swin2sr/test_modeling_swin2sr.py
--- a/tests/models/swin2sr/test_modeling_swin2sr.py
+++ b/tests/models/swin2sr/test_modeling_swin2sr.py
@@ -46,6 +46,7 @@ def __init__(
image_size=32,
patch_size=1,
num_channels=3,
+ ... | SWIN2SR: Allow to choose number of in_channels and out_channels
### Feature request
I'd like to be able to specify a different number of output and input channels for the Swin2sr superresolution model. The current [SWIN2SR](https://github.com/huggingface/transformers/blob/v4.33.3/src/transformers/models/swin2sr/mode... | 2023-10-03 16:27:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_head_pruning_save_load_from_pretrained', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_headmasking', 'tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_can_use_safetensors', 'tests/models/swin2sr/test_modeling... | ['tests/models/swin2sr/test_modeling_swin2sr.py:Swin2SRModelTest:test_model_for_image_super_resolution'] | null | pytest -v --tb=short /testbed/tests/models/swin2sr/test_modeling_swin2sr.py -rA --junitxml=test-results.xml | Feature | false | false | true | false | 0 | 8 | 8 | false | false | ["src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:UpsampleOneStep", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRModel->function_definition:__init__", "src/transformers/models/swin2sr/modeling_swin2sr.py->module->class_definition:Swin2SRForImageSupe... | |
huggingface/transformers | 26,678 | huggingface__transformers-26678 | ['27900'] | 98dda8ed03ac3f4af5733bdddaa1dab6a81e15c1 | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -552,15 +552,22 @@ def tokenizer(self, proto):
def normalizer(self, proto):
precompiled_charsmap = pro... | diff --git a/tests/models/t5/test_tokenization_t5.py b/tests/models/t5/test_tokenization_t5.py
--- a/tests/models/t5/test_tokenization_t5.py
+++ b/tests/models/t5/test_tokenization_t5.py
@@ -424,6 +424,41 @@ def test_some_edge_cases(self):
self.assertEqual(tokens, [])
self.assertEqual(tokens, tokenize... | Weird Tokenization when Training New Tokenizer from Llama 2 Tokenizer using `train_new_from_iterator`
### System Info
- `transformers` version: 4.35.2
- Platform: Linux-5.4.0-105-generic-x86_64-with-glibc2.31
- Python version: 3.10.13
- Huggingface_hub version: 0.19.4
- Safetensors version: 0.4.1
- Accelerate v... | null | 2023-10-08 20:51:17+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_special_tokens_initialization_with_non_empty_additional_special_tokens', 'tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_special_token_addition', 'tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_num_special_tokens_to_add_... | ['tests/models/t5/test_tokenization_t5.py:T5TokenizationTest:test_fast_slow_edge_cases'] | null | pytest -v --tb=short /testbed/tests/models/t5/test_tokenization_t5.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 2 | 0 | 2 | false | false | ["src/transformers/convert_slow_tokenizer.py->module->class_definition:SpmConverter->function_definition:normalizer", "src/transformers/convert_slow_tokenizer.py->module->class_definition:SpmConverter->function_definition:pre_tokenizer"] |
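For context on the reproduction path, a minimal sketch of the retraining flow the report exercises; the checkpoint name is illustrative (Llama 2 is gated), and any sentencepiece-based fast tokenizer goes through the same converter code this patch touches.

```python
from transformers import AutoTokenizer

# Illustrative checkpoint; substitute any sentencepiece-based fast tokenizer
# you have access to if the Llama 2 weights are gated for you.
old_tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

corpus = ["the quick brown fox", "jumps over the lazy dog"] * 200
new_tokenizer = old_tokenizer.train_new_from_iterator(iter(corpus), vocab_size=300)

# Before the fix, retrained tokenizers produced oddly prefixed pieces; after
# it, the slow and fast conversion paths agree on the "▁"-prefix handling.
print(new_tokenizer.tokenize("the quick brown fox"))
```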
huggingface/transformers | 26,752 | huggingface__transformers-26752 | ['25271'] | 3bc65505fc0801e3d9ff741ec725fb0cb4d863d6 | diff --git a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
--- a/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
+++ b/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py
@@ -620,6 +620,8 @@ d... | diff --git a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
--- a/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
+++ b/tests/models/encoder_decoder/test_modeling_encoder_decoder.py
@@ -17,8 +17,8 @@
import tempfile
import un... | EncoderDecoder does not automatically create decoder_attention_mask to match decoder_input_ids
### System Info
```
- `transformers` version: 4.31.0
- Platform: Linux-4.15.0-192-generic-x86_64-with-glibc2.27
- Python version: 3.11.4
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.3.1
- Accelerate versi... | Somewhat related: it seems that in the notebook neither the `decoder_input_ids` nor the `labels` are shifted; Patrick claims it's because:
> `"labels"` are shifted automatically to the left for language modeling training.
but I don't see any evidence of this in the implementation. Was this behavior changed at some point? ... | 2023-10-12 08:20:35+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/encoder_decoder/test_modeling_encoder_decoder.py:BertGenerationEncoderDecoderModelTest:test_encoder_decoder_model', 'tests/models/encoder_decoder/test_modeling_encoder_decoder.py:RoBertaEncoderDecoderModelTest:test_encoder_decoder_model_generate', 'tests/models/encoder_decoder/test_modeling_encoder_decod... | ['tests/models/encoder_decoder/test_modeling_encoder_decoder.py:BertEncoderDecoderModelTest:test_bert2bert_default_decoder_attention_mask'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/encoder_decoder/test_modeling_encoder_decoder.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/encoder_decoder/modeling_encoder_decoder.py->module->class_definition:EncoderDecoderModel->function_definition:forward"] |
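A from-scratch sketch of the default mask the fix derives when `decoder_attention_mask` is omitted (not the upstream code itself): positions holding the pad token are simply masked out.

```python
import torch

def default_decoder_attention_mask(decoder_input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # 1 where a real token sits, 0 on padding -- mirroring, not copying,
    # the default the EncoderDecoderModel fix builds from shifted labels.
    return (decoder_input_ids != pad_token_id).long()

decoder_input_ids = torch.tensor([[101, 45, 67, 0, 0]])  # 0 = pad
print(default_decoder_attention_mask(decoder_input_ids, pad_token_id=0))
# tensor([[1, 1, 1, 0, 0]])
```

One caveat the simple version glosses over: if the decoder start token happens to equal the pad token, the first position must still stay visible.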
huggingface/transformers | 26,839 | huggingface__transformers-26839 | ['26428'] | d7cb5e138ec1ccc848a554574b1a89f0dfaf0e90 | diff --git a/src/transformers/models/idefics/modeling_idefics.py b/src/transformers/models/idefics/modeling_idefics.py
--- a/src/transformers/models/idefics/modeling_idefics.py
+++ b/src/transformers/models/idefics/modeling_idefics.py
@@ -875,16 +875,20 @@ def forward(
attention_mask: Optional[torch.Tensor] = ... | diff --git a/tests/models/idefics/test_modeling_idefics.py b/tests/models/idefics/test_modeling_idefics.py
--- a/tests/models/idefics/test_modeling_idefics.py
+++ b/tests/models/idefics/test_modeling_idefics.py
@@ -71,6 +71,7 @@ def __init__(
type_vocab_size=16,
type_sequence_label_size=2,
in... | IDEFICS Cross Attention: Text tokens appearing before images still attend to image embeddings
### System Info
- `transformers` version: 4.33.1
- Platform: Linux-5.4.0-153-generic-x86_64-with-glibc2.31
- Python version: 3.9.18
- Huggingface_hub version: 0.17.1
- Safetensors version: 0.3.3
- Accelerate version: 0.2... | What do you think @leot13 @VictorSanh?
Thank you for noticing! It's not easy to detect. We are aware, but we did train it this way. In practice that means the first few tokens with no image attend to every image instead of none of them, so there's a small information leak.
To fix this, we could apply the image_att... | 2023-10-16 14:26:33+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_training', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_config', 'tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_resize_embeddings_untied', 'tests/models/idefics/test_modeling_ide... | ['tests/models/idefics/test_modeling_idefics.py:IdeficsForVisionText2TextTest:test_cross_attention_gates', 'tests/models/idefics/test_modeling_idefics.py:IdeficsModelTest:test_cross_attention_gates'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/idefics/test_modeling_idefics.py -rA --junitxml=test-results.xml | Bug Fix | false | true | false | false | 3 | 0 | 3 | false | false | ["src/transformers/models/idefics/modeling_idefics.py->module->class_definition:IdeficsModel->function_definition:forward->function_definition:vblock", "src/transformers/models/idefics/modeling_idefics.py->module->class_definition:IdeficsModel->function_definition:forward", "src/transformers/models/idefics/modeling_ide... |
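To make the described leak concrete, a simplified sketch of the gating the fix enforces; the token id is invented, and the real model tracks which specific images precede each position rather than this any-image summary.

```python
import torch

IMAGE_TOKEN_ID = 99  # invented id standing in for IDEFICS's image token

input_ids = torch.tensor([[11, 12, IMAGE_TOKEN_ID, 13, 14]])
# True once at least one image token has been seen at or before a position.
seen_any_image = (input_ids == IMAGE_TOKEN_ID).int().cumsum(dim=1) > 0
print(seen_any_image)
# tensor([[False, False,  True,  True,  True]])
# Positions 0-1 (before the first image) should get a fully masked
# cross-attention row, which is what the added gating enforces.
```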
huggingface/transformers | 27,114 | huggingface__transformers-27114 | ['27050'] | 7e9f10ac94c626780cf9e17485e73aec2c644bf2 | diff --git a/src/transformers/modeling_attn_mask_utils.py b/src/transformers/modeling_attn_mask_utils.py
--- a/src/transformers/modeling_attn_mask_utils.py
+++ b/src/transformers/modeling_attn_mask_utils.py
@@ -11,11 +11,13 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the Licens... | diff --git a/tests/test_modeling_utils.py b/tests/test_modeling_utils.py
--- a/tests/test_modeling_utils.py
+++ b/tests/test_modeling_utils.py
@@ -1266,6 +1266,9 @@ def check_to_4d(self, mask_converter, q_len, kv_len, additional_mask=None, bsz=3
assert mask_4d.shape == (bsz, 1, q_len, kv_len)
+ # ma... | Difference in LlamaAttention & LlamaFlashAttention2 attn_output
### System Info
- `transformers` version: 4.34.1
- Platform: Linux-5.15.0-86-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: 0.23.0
- Accelerate config: n... | Hey, I think this is related to the flash attention version; could you have a look at #26697?
We are currently using `flash-attn==2.3.2`. There was a minor version release of flash attention literally yesterday.
The problem persists with `flash-attn==2.3.3`.
Are you able to reproduce on your end with the supplied sc... | 2023-10-27 16:19:01+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Install system dependencies
RUN apt-get update && apt-get install -y \
... | ['tests/test_modeling_utils.py:ModelUtilsTest:test_shard_checkpoint', 'tests/test_modeling_utils.py:AttentionMaskTester:test_causal_mask_sliding', 'tests/test_modeling_utils.py:ModelUtilsTest:test_unexpected_keys_warnings', 'tests/test_modeling_utils.py:ModelUtilsTest:test_no_super_init_config_and_model', 'tests/test_m... | ['tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal', 'tests/test_modeling_utils.py:AttentionMaskTester:test_2d_to_4d_causal_sliding'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/test_modeling_utils.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter", "src/transformers/modeling_attn_mask_utils.py->module->class_definition:AttentionMaskConverter->function_definition:to_4d"] |
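A standalone sketch of the 2D-to-4D causal expansion exercised by the new `test_2d_to_4d_causal` cases, written from scratch rather than copied from `AttentionMaskConverter`.

```python
import torch

def expand_2d_to_4d_causal(mask_2d: torch.Tensor, dtype=torch.float32) -> torch.Tensor:
    """Combine a [bsz, kv_len] padding mask with a causal triangle into the
    additive [bsz, 1, q_len, kv_len] bias shape eager attention consumes."""
    bsz, seq_len = mask_2d.shape
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    keep = causal[None, None, :, :] & mask_2d[:, None, None, :].bool()
    bias = torch.zeros(bsz, 1, seq_len, seq_len, dtype=dtype)
    return bias.masked_fill(~keep, torch.finfo(dtype).min)

mask = torch.tensor([[0, 1, 1, 1]])  # leading position is padding
print(expand_2d_to_4d_causal(mask).shape)  # torch.Size([1, 1, 4, 4])
```

With left padding, rows that end up fully masked are exactly where the eager path can diverge from flash attention, which is the discrepancy the issue reports.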
huggingface/transformers | 27,463 | huggingface__transformers-27463 | ['27361'] | 3cefac1d974db5e2825a0cb2b842883a628be7a0 | diff --git a/docs/source/en/model_doc/sam.md b/docs/source/en/model_doc/sam.md
--- a/docs/source/en/model_doc/sam.md
+++ b/docs/source/en/model_doc/sam.md
@@ -66,6 +66,34 @@ masks = processor.image_processor.post_process_masks(
scores = outputs.iou_scores
```
+You can also process your own masks alongside the input... | diff --git a/tests/models/sam/test_processor_sam.py b/tests/models/sam/test_processor_sam.py
--- a/tests/models/sam/test_processor_sam.py
+++ b/tests/models/sam/test_processor_sam.py
@@ -58,13 +58,18 @@ def prepare_image_inputs(self):
"""This function prepares a list of PIL images, or a list of numpy arrays if... | Add how to preprocess mask for finetuning with SAM
### Feature request
The [SAM image processor](https://github.com/huggingface/transformers/blob/main/src/transformers/models/sam/image_processing_sam.py) takes images as input and resizes them so that the longest edge is 1024 (using default values). This is the size ex... | Hi @rwood-97, thanks for raising this issue!
Agreed - being able to pass in the masks to the image processor would be ideal! Feel free to ping me on a PR for review if you'd like to open one :) | 2023-11-13 11:52:42+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_post_process_masks', 'tests/models/sam/test_processor_sam.py:SamProcessorEquivalenceTest:test_post_process_masks_equivalence', 'tests/models/sam/test_processor_sam.py:TFSamProcessorTest:test_save_load_pretrained_additional_features', 'tests/models/sam/tes... | ['tests/models/sam/test_processor_sam.py:SamProcessorTest:test_image_processor_with_masks'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/sam/test_processor_sam.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 5 | 2 | 7 | false | false | ["src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamImageProcessor->function_definition:_preprocess_mask", "src/transformers/models/sam/image_processing_sam.py->module->class_definition:SamI... |
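A minimal usage sketch of the added feature, following the `segmentation_maps` argument shown in the patched `sam.md` docs; the exact output shapes depend on the processor's configured mask size, so treat the printed shapes as illustrative.

```python
import numpy as np
from PIL import Image
from transformers import SamProcessor

processor = SamProcessor.from_pretrained("facebook/sam-vit-base")

image = Image.fromarray(np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8))
mask = Image.fromarray((np.random.rand(480, 640) > 0.5).astype(np.uint8))

# `segmentation_maps` is the argument this feature adds: the ground-truth
# mask is resized alongside the image so it can be used as fine-tuning labels.
inputs = processor(image, segmentation_maps=mask, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. torch.Size([1, 3, 1024, 1024])
print(inputs["labels"].shape)        # resized mask, e.g. torch.Size([1, 256, 256])
```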
huggingface/transformers | 27,561 | huggingface__transformers-27561 | ['27537'] | 5330b83bc5637b8e7eafe095c22ef19e21baff2d | diff --git a/docs/source/en/model_doc/dinov2.md b/docs/source/en/model_doc/dinov2.md
--- a/docs/source/en/model_doc/dinov2.md
+++ b/docs/source/en/model_doc/dinov2.md
@@ -25,6 +25,37 @@ The abstract from the paper is the following:
This model was contributed by [nielsr](https://huggingface.co/nielsr).
The original co... | diff --git a/tests/models/dinov2/test_modeling_dinov2.py b/tests/models/dinov2/test_modeling_dinov2.py
--- a/tests/models/dinov2/test_modeling_dinov2.py
+++ b/tests/models/dinov2/test_modeling_dinov2.py
@@ -221,7 +221,7 @@ class Dinov2ModelTest(ModelTesterMixin, PipelineTesterMixin, unittest.TestCase):
if is_t... | Allow script tracing DINOv2
I found a PR in the dinov2 repo: "Pass scale factor as a tuple of floats to F.interpolate() to allow tracing."
https://github.com/facebookresearch/dinov2/pull/247
https://github.com/huggingface/transformers/blob/85fde09c97213bf7e8625f83096bb2a9e183f987/src/transformers/models/dinov2/modeling_dinov2.... | I get an exception now:
<img width="1153" alt="image" src="https://github.com/huggingface/transformers/assets/11178882/ce61c11a-9247-4045-8da4-5fdd9d3bb899">
Hi @Danil328, thanks for raising this issue!
Could you make sure to follow the [issue template](https://github.com/huggingface/transformers/blob/main/.github... | 2023-11-17 13:44:45+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_equivalence_flax_to_pt', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_keep_in_fp32_modules', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_model', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2BackboneTest:t... | ['tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_torch_fx', 'tests/models/dinov2/test_modeling_dinov2.py:Dinov2ModelTest:test_torch_fx_output_loss'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/dinov2/test_modeling_dinov2.py -rA --junitxml=test-results.xml | Feature | false | true | false | false | 1 | 0 | 1 | true | false | ["src/transformers/models/dinov2/modeling_dinov2.py->module->class_definition:Dinov2Embeddings->function_definition:interpolate_pos_encoding"] |
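The quoted dinov2 fix is easy to show in isolation: a sketch of the tracing-friendly pattern, passing `F.interpolate` a tuple of plain Python floats instead of a tensor-derived scale that `torch.jit.trace` cannot bake in.

```python
import torch
import torch.nn.functional as F

patch_grid = torch.randn(1, 16, 7, 7)  # toy positional-embedding grid
target_h, target_w = 14, 14

resized = F.interpolate(
    patch_grid,
    scale_factor=(float(target_h) / patch_grid.shape[-2],
                  float(target_w) / patch_grid.shape[-1]),
    mode="bicubic",
    align_corners=False,
)
print(resized.shape)  # torch.Size([1, 16, 14, 14])
```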
huggingface/transformers | 27,663 | huggingface__transformers-27663 | ['27381'] | 45b70384a7d6692a8304f34a981a5ff020918b82 | diff --git a/src/transformers/models/detr/image_processing_detr.py b/src/transformers/models/detr/image_processing_detr.py
--- a/src/transformers/models/detr/image_processing_detr.py
+++ b/src/transformers/models/detr/image_processing_detr.py
@@ -82,6 +82,7 @@
SUPPORTED_ANNOTATION_FORMATS = (AnnotationFormat.COCO_DETE... | diff --git a/tests/models/yolos/test_image_processing_yolos.py b/tests/models/yolos/test_image_processing_yolos.py
--- a/tests/models/yolos/test_image_processing_yolos.py
+++ b/tests/models/yolos/test_image_processing_yolos.py
@@ -86,18 +86,28 @@ def get_expected_values(self, image_inputs, batched=False):
if n... | `YolosImageProcessor` violates `longest_edge` constraint for certain images
### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.120+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.17.3
- Safetensors version: 0.4.0
- Accelerate version: not installed
- Accelerat... | Hi @xenova, thanks for reporting!
Looking into it 🕵️♀️ | 2023-11-22 20:44:08+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_image_processor_from_and_save_pretrained', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_equivalence_padding', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_init_withou... | ['tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_call_numpy_4_channels', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_resize_max_size_respected', 'tests/models/yolos/test_image_processing_yolos.py:YolosImageProcessingTest:test_call_pil', 'tests/models... | null | pytest -v --tb=short /testbed/tests/models/yolos/test_image_processing_yolos.py -rA --junitxml=test-results.xml | Bug Fix | true | false | false | false | 0 | 0 | 0 | false | false | ["src/transformers/models/yolos/image_processing_yolos.py->module->function_definition:get_size_with_aspect_ratio"] |
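The modified node list points at `get_size_with_aspect_ratio`; here is a standalone sketch of resize logic that respects both bounds, mirroring the idea of the fix rather than copying the upstream helper.

```python
def get_size_with_aspect_ratio(height, width, shortest_edge=800, longest_edge=1333):
    # Scale to the shortest edge first, then re-scale if that overshoots
    # the cap on the longest edge (the step the buggy version mishandled).
    scale = shortest_edge / min(height, width)
    if max(height, width) * scale > longest_edge:
        scale = longest_edge / max(height, width)
    return round(height * scale), round(width * scale)

# A very wide image: the buggy version let the long side exceed 1333.
print(get_size_with_aspect_ratio(480, 5000))  # (128, 1333)
```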
huggingface/transformers | 27,717 | huggingface__transformers-27717 | ['26497'] | ef5ab72f4b538d6f9ea032ac307b75b40ceef42e | diff --git a/src/transformers/convert_slow_tokenizer.py b/src/transformers/convert_slow_tokenizer.py
--- a/src/transformers/convert_slow_tokenizer.py
+++ b/src/transformers/convert_slow_tokenizer.py
@@ -800,8 +800,6 @@ def vocab(self, proto):
("<unk>", 0.0),
]
vocab += [(piece.piece, piec... | diff --git a/tests/models/nllb/test_tokenization_nllb.py b/tests/models/nllb/test_tokenization_nllb.py
--- a/tests/models/nllb/test_tokenization_nllb.py
+++ b/tests/models/nllb/test_tokenization_nllb.py
@@ -24,6 +24,7 @@
NllbTokenizerFast,
is_torch_available,
)
+from transformers.models.nllb.tokenization_nll... | NllbTokenizer: optionally list language codes in the config, to enable updating it more smoothly
### Feature request
Currently, `NllbTokenizer` during initialization takes the list of language codes from a hardcoded constant FAIRSEQ_LANGUAGE_CODES.
I propose making it possible to override this list with a field in the tokeniz... | WDYT @ArthurZucker?
Mmm, I guess for now this can make sense, but I think when refactoring NLLB, the FAIRSEQ_LANGUAGE_CODES will be the default of `additional_special_tokens` in the correct order, removing the need to change this. You can also already add language codes using `additional_special_tokens`.
Thanks @ArthurZuck... | 2023-11-27 07:16:03+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_embeded_special_tokens', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_num_special_tokens_to_add_equal', 'tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_tokenizers_special_tokens_properties_unset_1', ... | ['tests/models/nllb/test_tokenization_nllb.py:NllbTokenizationTest:test_new_language_codes'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/models/nllb/test_tokenization_nllb.py -rA --junitxml=test-results.xml | Feature | false | false | false | true | 11 | 4 | 15 | false | false | ["src/transformers/convert_slow_tokenizer.py->module->class_definition:NllbConverter->function_definition:vocab", "src/transformers/models/nllb/tokenization_nllb_fast.py->module->class_definition:NllbTokenizerFast->function_definition:lang_code_to_id", "src/transformers/models/nllb/tokenization_nllb.py->module->class_d... |
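A hedged sketch of the route the thread settles on: supplying extra language codes through `additional_special_tokens` while keeping the stock list. The truncated test import suggests `FAIRSEQ_LANGUAGE_CODES` is reachable from `tokenization_nllb`; the new code itself is hypothetical.

```python
from transformers import NllbTokenizer
from transformers.models.nllb.tokenization_nllb import FAIRSEQ_LANGUAGE_CODES

# Keep the stock codes and append a hypothetical new one.
tokenizer = NllbTokenizer.from_pretrained(
    "facebook/nllb-200-distilled-600M",
    additional_special_tokens=FAIRSEQ_LANGUAGE_CODES + ["tyv_Cyrl"],
)
tokenizer.src_lang = "tyv_Cyrl"
ids = tokenizer("Hello world").input_ids
print(tokenizer.convert_ids_to_tokens(ids)[0])  # expected: 'tyv_Cyrl'
```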
huggingface/transformers | 27,757 | huggingface__transformers-27757 | ['27704'] | af8acc4760d44e48f953e075e3b13a43843d5f91 | diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -497,6 +497,24 @@ def validate(self, is_init=False):
f"({self.num... | diff --git a/tests/generation/test_configuration_utils.py b/tests/generation/test_configuration_utils.py
--- a/tests/generation/test_configuration_utils.py
+++ b/tests/generation/test_configuration_utils.py
@@ -120,6 +120,34 @@ def test_kwarg_init(self):
self.assertEqual(loaded_config.do_sample, True)
... | Stopping criteria does not work for Llama-2-13B
### System Info
- `transformers` version: 4.35.0
- Platform: Linux-5.15.0-89-generic-x86_64-with-glibc2.35
- Python version: 3.9.0
- Huggingface_hub version: 0.16.4
- Safetensors version: 0.4.0
- Accelerate version: 0.24.1
- Accelerate config: not found
- PyTorch... | Hey! 🤗
I don't have access to `StoppingCriteriaSub` (missing from the reproducer), but this is very similar to #23852 and #26959, which most probably have the answers you are looking for.
Now what you need to check thoroughly is not the strings that are decoded, but the token ids that you feed to the logit processor. ... | 2023-11-29 13:45:13+00:00 | Python | # Use an official Python runtime as a parent image
FROM public.ecr.aws/docker/library/python:3.10-slim
RUN apt-get update && apt-get install -y git && rm -rf /var/lib/apt/lists/*
# Set the working directory in the container
WORKDIR /testbed
# Copy the current directory contents into the container at /testbed
COPY . ... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_save_load_config_1_foo_json', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_update', 'tests/generation/test_configuration_utils.py:GenerationConfigTest:test_from_model_config', 'tests/generation/test_configuration_utils.p... | ['tests/generation/test_configuration_utils.py:GenerationConfigTest:test_validate'] | null | pytest -v --tb=short --show-capture=no /testbed/tests/generation/test_configuration_utils.py -rA --junitxml=test-results.xml | Bug Fix | false | false | false | true | 1 | 1 | 2 | false | false | ["src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig", "src/transformers/generation/configuration_utils.py->module->class_definition:GenerationConfig->function_definition:validate"] |
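Although the linked issue is about stopping criteria, the landed fix extends `GenerationConfig.validate()` (the only F2P test is `test_validate`). A small sketch of the kind of inconsistency it is expected to flag; the exact warning text is not reproduced here.

```python
from transformers import GenerationConfig

# Greedy decoding combined with sampling-only knobs: validate() should
# warn that temperature/top_p have no effect when do_sample=False.
config = GenerationConfig(do_sample=False, temperature=0.7, top_p=0.9)
config.validate()
```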