file_path stringlengths 3 280 | file_language stringclasses 66 values | content stringlengths 1 1.04M | repo_name stringlengths 5 92 | repo_stars int64 0 154k | repo_description stringlengths 0 402 | repo_primary_language stringclasses 108 values | developer_username stringlengths 1 25 | developer_name stringlengths 0 30 | developer_company stringlengths 0 82 |
|---|---|---|---|---|---|---|---|---|---|
docs/examples/name-property/index.html | HTML |
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title><my-element> ⌲ Examples ⌲ Name Property</title>
<link rel="stylesheet" href="../../docs.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,400,600|Roboto+Mono">
<link href="../../prism-okaidia.css" rel="stylesheet" />
<script src="/node_modules/@webcomponents/webcomponentsjs/webcomponents-loader.js"></script>
<script src="/node_modules/lit/polyfill-support.js"></script>
<script type="module" src="../../my-element.bundled.js"></script>
</head>
<body>
<header>
<h1><my-element></h1>
<h2>A web component just for me.</h2>
</header>
<nav>
<a href="../../">Home</a>
<a href="../">Examples</a>
<a href="../../api/">API</a>
<a href="../../install/">Install</a>
</nav>
<div id="main-wrapper">
<main>
<h1>Example: Name Property</h1>
<section class="examples">
<nav class="collection">
<ul>
<li class=selected>
<a href="">Setting the name property</a>
</li>
<li class=>
<a href="../">A basic example</a>
</li>
</ul>
</nav>
<div>
<p><my-element name="Earth"></my-element></p>
<h3>HTML</h3>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>my-element</span> <span class="token attr-name">name</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>Earth<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>my-element</span><span class="token punctuation">></span></span></code></pre>
</div>
</section>
</main>
</div>
<footer>
<p>
Made with
<a href="https://github.com/PolymerLabs/lit-starter-ts">lit-starter-ts</a>
</p>
</footer>
</body>
</html> | xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
docs/index.html | HTML |
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title><my-element> ⌲ Home</title>
<link rel="stylesheet" href="docs.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,400,600|Roboto+Mono">
<link href="prism-okaidia.css" rel="stylesheet" />
<script src="/node_modules/@webcomponents/webcomponentsjs/webcomponents-loader.js"></script>
<script src="/node_modules/lit/polyfill-support.js"></script>
<script type="module" src="my-element.bundled.js"></script>
</head>
<body>
<header>
<h1><my-element></h1>
<h2>A web component just for me.</h2>
</header>
<nav>
<a href="">Home</a>
<a href="examples/">Examples</a>
<a href="api/">API</a>
<a href="install/">Install</a>
</nav>
<div id="main-wrapper">
<main>
<h1><my-element></h1>
<p><code><my-element></code> is an awesome element. It's a great introduction to building web components with LitElement, with nice documentation site as well.</p>
<h2>As easy as HTML</h2>
<section class="columns">
<div>
<p><code><my-element></code> is just an HTML element. You can it anywhere you can use HTML!</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>my-element</span><span class="token punctuation">></span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>my-element</span><span class="token punctuation">></span></span></code></pre>
</div>
<div>
<p><my-element></my-element></p>
</div>
</section>
<h2>Configure with attributes</h2>
<section class="columns">
<div>
<p><code><my-element></code> can be configured with attributed in plain HTML.</p>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>my-element</span> <span class="token attr-name">name</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>HTML<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>my-element</span><span class="token punctuation">></span></span></code></pre>
</div>
<div>
<p><my-element name="HTML"></my-element></p>
</div>
</section>
<h2>Declarative rendering</h2>
<section class="columns">
<div>
<p><code><my-element></code> can be used with declarative rendering libraries like Angular, React, Vue, and lit-html</p>
<pre class="language-js"><code class="language-js"><span class="token keyword">import</span> <span class="token punctuation">{</span>html<span class="token punctuation">,</span> render<span class="token punctuation">}</span> <span class="token keyword">from</span> <span class="token string">'lit-html'</span><span class="token punctuation">;</span><br><br><span class="token keyword">const</span> name <span class="token operator">=</span> <span class="token string">'lit-html'</span><span class="token punctuation">;</span><br><br><span class="token function">render</span><span class="token punctuation">(</span><br> html<span class="token template-string"><span class="token template-punctuation string">`</span><span class="token string"><br> <h2>This is a &lt;my-element&gt;</h2><br> <my-element .name=</span><span class="token interpolation"><span class="token interpolation-punctuation punctuation">${</span>name<span class="token interpolation-punctuation punctuation">}</span></span><span class="token string">></my-element><br> </span><span class="token template-punctuation string">`</span></span><span class="token punctuation">,</span><br> document<span class="token punctuation">.</span>body<br><span class="token punctuation">)</span><span class="token punctuation">;</span></code></pre>
</div>
<div>
<h2>This is a <my-element></h2>
<my-element name="lit-html"></my-element>
</div>
</section>
</main>
</div>
<footer>
<p>
Made with
<a href="https://github.com/PolymerLabs/lit-starter-ts">lit-starter-ts</a>
</p>
</footer>
</body>
</html> | xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
docs/install/index.html | HTML |
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title><my-element> ⌲ Install</title>
<link rel="stylesheet" href="../docs.css">
<link rel="stylesheet" href="https://fonts.googleapis.com/css?family=Open+Sans:300,400,600|Roboto+Mono">
<link href="../prism-okaidia.css" rel="stylesheet" />
<script src="/node_modules/@webcomponents/webcomponentsjs/webcomponents-loader.js"></script>
<script src="/node_modules/lit/polyfill-support.js"></script>
<script type="module" src="../my-element.bundled.js"></script>
</head>
<body>
<header>
<h1><my-element></h1>
<h2>A web component just for me.</h2>
</header>
<nav>
<a href="../">Home</a>
<a href="../examples/">Examples</a>
<a href="../api/">API</a>
<a href="">Install</a>
</nav>
<div id="main-wrapper">
<main>
<h1>Install</h1>
<p><code><my-element></code> is distributed on npm, so you can install it locally or use it via npm CDNs like unpkg.com.</p>
<h2>Local Installation</h2>
<pre class="language-bash"><code class="language-bash"><span class="token function">npm</span> i my-element</code></pre>
<h2>CDN</h2>
<p>npm CDNs like <a href="">unpkg.com</a> can directly serve files that have been published to npm. This works great for standard JavaScript modules that the browser can load natively.</p>
<p>For this element to work from unpkg.com specifically, you need to include the <code>?module</code> query parameter, which tells unpkg.com to rewrite "bare" module specificers to full URLs.</p>
<h3>HTML</h3>
<pre class="language-html"><code class="language-html"><span class="token tag"><span class="token tag"><span class="token punctuation"><</span>script</span> <span class="token attr-name">type</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>module<span class="token punctuation">"</span></span> <span class="token attr-name">src</span><span class="token attr-value"><span class="token punctuation attr-equals">=</span><span class="token punctuation">"</span>https://unpkg.com/my-element?module<span class="token punctuation">"</span></span><span class="token punctuation">></span></span><span class="token script"></span><span class="token tag"><span class="token tag"><span class="token punctuation"></</span>script</span><span class="token punctuation">></span></span></code></pre>
<h3>JavaScript</h3>
<pre class="language-html"><code class="language-html">import {MyElement} from 'https://unpkg.com/my-element?module';</code></pre>
</main>
</div>
<footer>
<p>
Made with
<a href="https://github.com/PolymerLabs/lit-starter-ts">lit-starter-ts</a>
</p>
</footer>
</body>
</html> | xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
docs/my-element.bundled.js | JavaScript | /**
* @license
* Copyright 2017 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
var t,i,s,e;const o=globalThis.trustedTypes,n=o?o.createPolicy("lit-html",{createHTML:t=>t}):void 0,h=`lit$${(Math.random()+"").slice(9)}$`,l="?"+h,r=`<${l}>`,u=document,d=(t="")=>u.createComment(t),c=t=>null===t||"object"!=typeof t&&"function"!=typeof t,a=Array.isArray,v=/<(?:(!--|\/[^a-zA-Z])|(\/?[a-zA-Z][^>\s]*)|(\/?$))/g,f=/-->/g,p=/>/g,y=/>|[ \n\r](?:([^\s"'>=/]+)([ \n\r]*=[ \n\r]*(?:[^ \n\r"'`<>=]|("|')|))|$)/g,b=/'/g,g=/"/g,m=/^(?:script|style|textarea)$/i,w=(t=>(i,...s)=>({_$litType$:t,strings:i,values:s}))(1),S=Symbol.for("lit-noChange"),$=Symbol.for("lit-nothing"),C=new WeakMap,k=u.createTreeWalker(u,129,null,!1),x=(t,i)=>{const s=t.length-1,e=[];let o,l=2===i?"<svg>":"",u=v;for(let i=0;i<s;i++){const s=t[i];let n,d,c=-1,a=0;for(;a<s.length&&(u.lastIndex=a,d=u.exec(s),null!==d);)a=u.lastIndex,u===v?"!--"===d[1]?u=f:void 0!==d[1]?u=p:void 0!==d[2]?(m.test(d[2])&&(o=RegExp("</"+d[2],"g")),u=y):void 0!==d[3]&&(u=y):u===y?">"===d[0]?(u=null!=o?o:v,c=-1):void 0===d[1]?c=-2:(c=u.lastIndex-d[2].length,n=d[1],u=void 0===d[3]?y:'"'===d[3]?g:b):u===g||u===b?u=y:u===f||u===p?u=v:(u=y,o=void 0);const w=u===y&&t[i+1].startsWith("/>")?" ":"";l+=u===v?s+r:c>=0?(e.push(n),s.slice(0,c)+"$lit$"+s.slice(c)+h+w):s+h+(-2===c?(e.push(void 0),i):w)}const d=l+(t[s]||"<?>")+(2===i?"</svg>":"");return[void 0!==n?n.createHTML(d):d,e]};class T{constructor({strings:t,_$litType$:i},s){let e;this.parts=[];let n=0,r=0;const u=t.length-1,c=this.parts,[a,v]=x(t,i);if(this.el=T.createElement(a,s),k.currentNode=this.el.content,2===i){const t=this.el.content,i=t.firstChild;i.remove(),t.append(...i.childNodes)}for(;null!==(e=k.nextNode())&&c.length<u;){if(1===e.nodeType){if(e.hasAttributes()){const t=[];for(const i of e.getAttributeNames())if(i.endsWith("$lit$")||i.startsWith(h)){const s=v[r++];if(t.push(i),void 0!==s){const t=e.getAttribute(s.toLowerCase()+"$lit$").split(h),i=/([.?@])?(.*)/.exec(s);c.push({type:1,index:n,name:i[2],strings:t,ctor:"."===i[1]?E:"?"===i[1]?N:"@"===i[1]?U:A})}else c.push({type:6,index:n})}for(const i of t)e.removeAttribute(i)}if(m.test(e.tagName)){const t=e.textContent.split(h),i=t.length-1;if(i>0){e.textContent=o?o.emptyScript:"";for(let s=0;s<i;s++)e.append(t[s],d()),k.nextNode(),c.push({type:2,index:++n});e.append(t[i],d())}}}else if(8===e.nodeType)if(e.data===l)c.push({type:2,index:n});else{let t=-1;for(;-1!==(t=e.data.indexOf(h,t+1));)c.push({type:7,index:n}),t+=h.length-1}n++}}static createElement(t,i){const s=u.createElement("template");return s.innerHTML=t,s}}function M(t,i,s=t,e){var o,n,h,l;if(i===S)return i;let r=void 0!==e?null===(o=s.Σi)||void 0===o?void 0:o[e]:s.Σo;const u=c(i)?void 0:i._$litDirective$;return(null==r?void 0:r.constructor)!==u&&(null===(n=null==r?void 0:r.O)||void 0===n||n.call(r,!1),void 0===u?r=void 0:(r=new u(t),r.T(t,s,e)),void 0!==e?(null!==(h=(l=s).Σi)&&void 0!==h?h:l.Σi=[])[e]=r:s.Σo=r),void 0!==r&&(i=M(t,r.S(t,i.values),r,e)),i}class O{constructor(t,i){this.l=[],this.N=void 0,this.D=t,this.M=i}u(t){var i;const{el:{content:s},parts:e}=this.D,o=(null!==(i=null==t?void 0:t.creationScope)&&void 0!==i?i:u).importNode(s,!0);k.currentNode=o;let n=k.nextNode(),h=0,l=0,r=e[0];for(;void 0!==r;){if(h===r.index){let i;2===r.type?i=new j(n,n.nextSibling,this,t):1===r.type?i=new r.ctor(n,r.name,r.strings,this,t):6===r.type&&(i=new R(n,this,t)),this.l.push(i),r=e[++l]}h!==(null==r?void 0:r.index)&&(n=k.nextNode(),h++)}return o}v(t){let i=0;for(const s of this.l)void 0!==s&&(void 0!==s.strings?(s.I(t,s,i),i+=s.strings.length-2):s.I(t[i])),i++}}class j{constructor(t,i,s,e){this.type=2,this.N=void 0,this.A=t,this.B=i,this.M=s,this.options=e}setConnected(t){var i;null===(i=this.P)||void 0===i||i.call(this,t)}get parentNode(){return this.A.parentNode}get startNode(){return this.A}get endNode(){return this.B}I(t,i=this){t=M(this,t,i),c(t)?t===$||null==t||""===t?(this.H!==$&&this.R(),this.H=$):t!==this.H&&t!==S&&this.m(t):void 0!==t._$litType$?this._(t):void 0!==t.nodeType?this.$(t):(t=>{var i;return a(t)||"function"==typeof(null===(i=t)||void 0===i?void 0:i[Symbol.iterator])})(t)?this.g(t):this.m(t)}k(t,i=this.B){return this.A.parentNode.insertBefore(t,i)}$(t){this.H!==t&&(this.R(),this.H=this.k(t))}m(t){const i=this.A.nextSibling;null!==i&&3===i.nodeType&&(null===this.B?null===i.nextSibling:i===this.B.previousSibling)?i.data=t:this.$(u.createTextNode(t)),this.H=t}_(t){var i;const{values:s,_$litType$:e}=t,o="number"==typeof e?this.C(t):(void 0===e.el&&(e.el=T.createElement(e.h,this.options)),e);if((null===(i=this.H)||void 0===i?void 0:i.D)===o)this.H.v(s);else{const t=new O(o,this),i=t.u(this.options);t.v(s),this.$(i),this.H=t}}C(t){let i=C.get(t.strings);return void 0===i&&C.set(t.strings,i=new T(t)),i}g(t){a(this.H)||(this.H=[],this.R());const i=this.H;let s,e=0;for(const o of t)e===i.length?i.push(s=new j(this.k(d()),this.k(d()),this,this.options)):s=i[e],s.I(o),e++;e<i.length&&(this.R(s&&s.B.nextSibling,e),i.length=e)}R(t=this.A.nextSibling,i){var s;for(null===(s=this.P)||void 0===s||s.call(this,!1,!0,i);t&&t!==this.B;){const i=t.nextSibling;t.remove(),t=i}}}class A{constructor(t,i,s,e,o){this.type=1,this.H=$,this.N=void 0,this.V=void 0,this.element=t,this.name=i,this.M=e,this.options=o,s.length>2||""!==s[0]||""!==s[1]?(this.H=Array(s.length-1).fill($),this.strings=s):this.H=$}get tagName(){return this.element.tagName}I(t,i=this,s,e){const o=this.strings;let n=!1;if(void 0===o)t=M(this,t,i,0),n=!c(t)||t!==this.H&&t!==S,n&&(this.H=t);else{const e=t;let h,l;for(t=o[0],h=0;h<o.length-1;h++)l=M(this,e[s+h],i,h),l===S&&(l=this.H[h]),n||(n=!c(l)||l!==this.H[h]),l===$?t=$:t!==$&&(t+=(null!=l?l:"")+o[h+1]),this.H[h]=l}n&&!e&&this.W(t)}W(t){t===$?this.element.removeAttribute(this.name):this.element.setAttribute(this.name,null!=t?t:"")}}class E extends A{constructor(){super(...arguments),this.type=3}W(t){this.element[this.name]=t===$?void 0:t}}class N extends A{constructor(){super(...arguments),this.type=4}W(t){t&&t!==$?this.element.setAttribute(this.name,""):this.element.removeAttribute(this.name)}}class U extends A{constructor(){super(...arguments),this.type=5}I(t,i=this){var s;if((t=null!==(s=M(this,t,i,0))&&void 0!==s?s:$)===S)return;const e=this.H,o=t===$&&e!==$||t.capture!==e.capture||t.once!==e.once||t.passive!==e.passive,n=t!==$&&(e===$||o);o&&this.element.removeEventListener(this.name,this,e),n&&this.element.addEventListener(this.name,this,t),this.H=t}handleEvent(t){var i,s;"function"==typeof this.H?this.H.call(null!==(s=null===(i=this.options)||void 0===i?void 0:i.host)&&void 0!==s?s:this.element,t):this.H.handleEvent(t)}}class R{constructor(t,i,s){this.element=t,this.type=6,this.N=void 0,this.V=void 0,this.M=i,this.options=s}I(t){M(this,t)}}null===(i=(t=globalThis).litHtmlPlatformSupport)||void 0===i||i.call(t,T,j),(null!==(s=(e=globalThis).litHtmlVersions)&&void 0!==s?s:e.litHtmlVersions=[]).push("2.0.0-rc.2");
/**
* @license
* Copyright 2019 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
const _=window.ShadowRoot&&(void 0===window.ShadyCSS||window.ShadyCSS.nativeShadow)&&"adoptedStyleSheets"in Document.prototype&&"replace"in CSSStyleSheet.prototype,z=Symbol();class P{constructor(t,i){if(i!==z)throw Error("CSSResult is not constructable. Use `unsafeCSS` or `css` instead.");this.cssText=t}get styleSheet(){return _&&void 0===this.t&&(this.t=new CSSStyleSheet,this.t.replaceSync(this.cssText)),this.t}toString(){return this.cssText}}const I=new Map,W=_?t=>t:t=>t instanceof CSSStyleSheet?(t=>{let i="";for(const s of t.cssRules)i+=s.cssText;return(t=>new P(t+"",z))(i)})(t):t
/**
* @license
* Copyright 2017 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/;var L,H,q,B;const D={toAttribute(t,i){switch(i){case Boolean:t=t?"":null;break;case Object:case Array:t=null==t?t:JSON.stringify(t)}return t},fromAttribute(t,i){let s=t;switch(i){case Boolean:s=null!==t;break;case Number:s=null===t?null:Number(t);break;case Object:case Array:try{s=JSON.parse(t)}catch(t){s=null}}return s}},J=(t,i)=>i!==t&&(i==i||t==t),K={attribute:!0,type:String,converter:D,reflect:!1,hasChanged:J};class Z extends HTMLElement{constructor(){super(),this.Πi=new Map,this.Πo=void 0,this.Πl=void 0,this.isUpdatePending=!1,this.hasUpdated=!1,this.Πh=null,this.u()}static addInitializer(t){var i;null!==(i=this.v)&&void 0!==i||(this.v=[]),this.v.push(t)}static get observedAttributes(){this.finalize();const t=[];return this.elementProperties.forEach(((i,s)=>{const e=this.Πp(s,i);void 0!==e&&(this.Πm.set(e,s),t.push(e))})),t}static createProperty(t,i=K){if(i.state&&(i.attribute=!1),this.finalize(),this.elementProperties.set(t,i),!i.noAccessor&&!this.prototype.hasOwnProperty(t)){const s="symbol"==typeof t?Symbol():"__"+t,e=this.getPropertyDescriptor(t,s,i);void 0!==e&&Object.defineProperty(this.prototype,t,e)}}static getPropertyDescriptor(t,i,s){return{get(){return this[i]},set(e){const o=this[t];this[i]=e,this.requestUpdate(t,o,s)},configurable:!0,enumerable:!0}}static getPropertyOptions(t){return this.elementProperties.get(t)||K}static finalize(){if(this.hasOwnProperty("finalized"))return!1;this.finalized=!0;const t=Object.getPrototypeOf(this);if(t.finalize(),this.elementProperties=new Map(t.elementProperties),this.Πm=new Map,this.hasOwnProperty("properties")){const t=this.properties,i=[...Object.getOwnPropertyNames(t),...Object.getOwnPropertySymbols(t)];for(const s of i)this.createProperty(s,t[s])}return this.elementStyles=this.finalizeStyles(this.styles),!0}static finalizeStyles(t){const i=[];if(Array.isArray(t)){const s=new Set(t.flat(1/0).reverse());for(const t of s)i.unshift(W(t))}else void 0!==t&&i.push(W(t));return i}static Πp(t,i){const s=i.attribute;return!1===s?void 0:"string"==typeof s?s:"string"==typeof t?t.toLowerCase():void 0}u(){var t;this.Πg=new Promise((t=>this.enableUpdating=t)),this.L=new Map,this.Π_(),this.requestUpdate(),null===(t=this.constructor.v)||void 0===t||t.forEach((t=>t(this)))}addController(t){var i,s;(null!==(i=this.ΠU)&&void 0!==i?i:this.ΠU=[]).push(t),void 0!==this.renderRoot&&this.isConnected&&(null===(s=t.hostConnected)||void 0===s||s.call(t))}removeController(t){var i;null===(i=this.ΠU)||void 0===i||i.splice(this.ΠU.indexOf(t)>>>0,1)}Π_(){this.constructor.elementProperties.forEach(((t,i)=>{this.hasOwnProperty(i)&&(this.Πi.set(i,this[i]),delete this[i])}))}createRenderRoot(){var t;const i=null!==(t=this.shadowRoot)&&void 0!==t?t:this.attachShadow(this.constructor.shadowRootOptions);return((t,i)=>{_?t.adoptedStyleSheets=i.map((t=>t instanceof CSSStyleSheet?t:t.styleSheet)):i.forEach((i=>{const s=document.createElement("style");s.textContent=i.cssText,t.appendChild(s)}))})(i,this.constructor.elementStyles),i}connectedCallback(){var t;void 0===this.renderRoot&&(this.renderRoot=this.createRenderRoot()),this.enableUpdating(!0),null===(t=this.ΠU)||void 0===t||t.forEach((t=>{var i;return null===(i=t.hostConnected)||void 0===i?void 0:i.call(t)})),this.Πl&&(this.Πl(),this.Πo=this.Πl=void 0)}enableUpdating(t){}disconnectedCallback(){var t;null===(t=this.ΠU)||void 0===t||t.forEach((t=>{var i;return null===(i=t.hostDisconnected)||void 0===i?void 0:i.call(t)})),this.Πo=new Promise((t=>this.Πl=t))}attributeChangedCallback(t,i,s){this.K(t,s)}Πj(t,i,s=K){var e,o;const n=this.constructor.Πp(t,s);if(void 0!==n&&!0===s.reflect){const h=(null!==(o=null===(e=s.converter)||void 0===e?void 0:e.toAttribute)&&void 0!==o?o:D.toAttribute)(i,s.type);this.Πh=t,null==h?this.removeAttribute(n):this.setAttribute(n,h),this.Πh=null}}K(t,i){var s,e,o;const n=this.constructor,h=n.Πm.get(t);if(void 0!==h&&this.Πh!==h){const t=n.getPropertyOptions(h),l=t.converter,r=null!==(o=null!==(e=null===(s=l)||void 0===s?void 0:s.fromAttribute)&&void 0!==e?e:"function"==typeof l?l:null)&&void 0!==o?o:D.fromAttribute;this.Πh=h,this[h]=r(i,t.type),this.Πh=null}}requestUpdate(t,i,s){let e=!0;void 0!==t&&(((s=s||this.constructor.getPropertyOptions(t)).hasChanged||J)(this[t],i)?(this.L.has(t)||this.L.set(t,i),!0===s.reflect&&this.Πh!==t&&(void 0===this.Πk&&(this.Πk=new Map),this.Πk.set(t,s))):e=!1),!this.isUpdatePending&&e&&(this.Πg=this.Πq())}async Πq(){this.isUpdatePending=!0;try{for(await this.Πg;this.Πo;)await this.Πo}catch(t){Promise.reject(t)}const t=this.performUpdate();return null!=t&&await t,!this.isUpdatePending}performUpdate(){var t;if(!this.isUpdatePending)return;this.hasUpdated,this.Πi&&(this.Πi.forEach(((t,i)=>this[i]=t)),this.Πi=void 0);let i=!1;const s=this.L;try{i=this.shouldUpdate(s),i?(this.willUpdate(s),null===(t=this.ΠU)||void 0===t||t.forEach((t=>{var i;return null===(i=t.hostUpdate)||void 0===i?void 0:i.call(t)})),this.update(s)):this.Π$()}catch(t){throw i=!1,this.Π$(),t}i&&this.E(s)}willUpdate(t){}E(t){var i;null===(i=this.ΠU)||void 0===i||i.forEach((t=>{var i;return null===(i=t.hostUpdated)||void 0===i?void 0:i.call(t)})),this.hasUpdated||(this.hasUpdated=!0,this.firstUpdated(t)),this.updated(t)}Π$(){this.L=new Map,this.isUpdatePending=!1}get updateComplete(){return this.getUpdateComplete()}getUpdateComplete(){return this.Πg}shouldUpdate(t){return!0}update(t){void 0!==this.Πk&&(this.Πk.forEach(((t,i)=>this.Πj(i,this[i],t))),this.Πk=void 0),this.Π$()}updated(t){}firstUpdated(t){}}
/**
* @license
* Copyright 2017 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
var V,F,G,Q,X,Y;Z.finalized=!0,Z.shadowRootOptions={mode:"open"},null===(H=(L=globalThis).reactiveElementPlatformSupport)||void 0===H||H.call(L,{ReactiveElement:Z}),(null!==(q=(B=globalThis).reactiveElementVersions)&&void 0!==q?q:B.reactiveElementVersions=[]).push("1.0.0-rc.1"),(null!==(V=(Y=globalThis).litElementVersions)&&void 0!==V?V:Y.litElementVersions=[]).push("3.0.0-rc.1");class tt extends Z{constructor(){super(...arguments),this.renderOptions={host:this},this.Φt=void 0}createRenderRoot(){var t,i;const s=super.createRenderRoot();return null!==(t=(i=this.renderOptions).renderBefore)&&void 0!==t||(i.renderBefore=s.firstChild),s}update(t){const i=this.render();super.update(t),this.Φt=((t,i,s)=>{var e,o;const n=null!==(e=null==s?void 0:s.renderBefore)&&void 0!==e?e:i;let h=n._$litPart$;if(void 0===h){const t=null!==(o=null==s?void 0:s.renderBefore)&&void 0!==o?o:null;n._$litPart$=h=new j(i.insertBefore(d(),t),t,void 0,s)}return h.I(t),h})(i,this.renderRoot,this.renderOptions)}connectedCallback(){var t;super.connectedCallback(),null===(t=this.Φt)||void 0===t||t.setConnected(!0)}disconnectedCallback(){var t;super.disconnectedCallback(),null===(t=this.Φt)||void 0===t||t.setConnected(!1)}render(){return S}}tt.finalized=!0,tt._$litElement$=!0,null===(G=(F=globalThis).litElementHydrateSupport)||void 0===G||G.call(F,{LitElement:tt}),null===(X=(Q=globalThis).litElementPlatformSupport)||void 0===X||X.call(Q,{LitElement:tt});
/**
* @license
* Copyright 2017 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
const it=(t,i)=>"method"===i.kind&&i.descriptor&&!("value"in i.descriptor)?{...i,finisher(s){s.createProperty(i.key,t)}}:{kind:"field",key:Symbol(),placement:"own",descriptor:{},originalKey:i.key,initializer(){"function"==typeof i.initializer&&(this[i.key]=i.initializer.call(this))},finisher(s){s.createProperty(i.key,t)}};
/**
* @license
* Copyright 2017 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/function st(t){return(i,s)=>void 0!==s?((t,i,s)=>{i.constructor.createProperty(s,t)})(t,i,s):it(t,i)
/**
* @license
* Copyright 2019 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/}var et=function(t,i,s,e){for(var o,n=arguments.length,h=n<3?i:null===e?e=Object.getOwnPropertyDescriptor(i,s):e,l=t.length-1;l>=0;l--)(o=t[l])&&(h=(n<3?o(h):n>3?o(i,s,h):o(i,s))||h);return n>3&&h&&Object.defineProperty(i,s,h),h};let ot=class extends tt{constructor(){super(...arguments),this.name="World",this.count=0}render(){return w`
<h1>Hello, ${this.name}!</h1>
<button @click=${this._onClick} part="button">
Click Count: ${this.count}
</button>
<slot></slot>
`}_onClick(){this.count++}foo(){return"foo"}};ot.styles=((t,...i)=>{const s=i.reduce(((i,s,e)=>i+(t=>{if(t instanceof P)return t.cssText;if("number"==typeof t)return t;throw Error(`Value passed to 'css' function must be a 'css' function result: ${t}. Use 'unsafeCSS' to pass non-literal values, but\n take care to ensure page security.`)})(s)+t[e+1]),t[0]);let e=I.get(s);return void 0===e&&I.set(s,e=new P(s,z)),e})`
:host {
display: block;
border: solid 1px gray;
padding: 16px;
max-width: 800px;
}
`,et([st()],ot.prototype,"name",void 0),et([st({type:Number})],ot.prototype,"count",void 0),ot=et([(t=>i=>"function"==typeof i?((t,i)=>(window.customElements.define(t,i),i))(t,i):((t,i)=>{const{kind:s,elements:e}=i;return{kind:s,elements:e,finisher(i){window.customElements.define(t,i)}}})(t,i))("my-element")],ot);export{ot as MyElement};
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
docs/prism-okaidia.css | CSS | /**
* okaidia theme for JavaScript, CSS and HTML
* Loosely based on Monokai textmate theme by http://www.monokai.nl/
* @author ocodia
*/
code[class*="language-"],
pre[class*="language-"] {
color: #f8f8f2;
background: none;
text-shadow: 0 1px rgba(0, 0, 0, 0.3);
font-family: Consolas, Monaco, 'Andale Mono', 'Ubuntu Mono', monospace;
font-size: 1em;
text-align: left;
white-space: pre;
word-spacing: normal;
word-break: normal;
word-wrap: normal;
line-height: 1.5;
-moz-tab-size: 4;
-o-tab-size: 4;
tab-size: 4;
-webkit-hyphens: none;
-moz-hyphens: none;
-ms-hyphens: none;
hyphens: none;
}
/* Code blocks */
pre[class*="language-"] {
padding: 1em;
margin: .5em 0;
overflow: auto;
border-radius: 0.3em;
}
:not(pre) > code[class*="language-"],
pre[class*="language-"] {
background: #272822;
}
/* Inline code */
:not(pre) > code[class*="language-"] {
padding: .1em;
border-radius: .3em;
white-space: normal;
}
.token.comment,
.token.prolog,
.token.doctype,
.token.cdata {
color: #8292a2;
}
.token.punctuation {
color: #f8f8f2;
}
.token.namespace {
opacity: .7;
}
.token.property,
.token.tag,
.token.constant,
.token.symbol,
.token.deleted {
color: #f92672;
}
.token.boolean,
.token.number {
color: #ae81ff;
}
.token.selector,
.token.attr-name,
.token.string,
.token.char,
.token.builtin,
.token.inserted {
color: #a6e22e;
}
.token.operator,
.token.entity,
.token.url,
.language-css .token.string,
.style .token.string,
.token.variable {
color: #f8f8f2;
}
.token.atrule,
.token.attr-value,
.token.function,
.token.class-name {
color: #e6db74;
}
.token.keyword {
color: #66d9ef;
}
.token.regex,
.token.important {
color: #fd971f;
}
.token.important,
.token.bold {
font-weight: bold;
}
.token.italic {
font-style: italic;
}
.token.entity {
cursor: help;
}
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
rollup.config.js | JavaScript | /**
* @license
* Copyright 2018 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
import summary from 'rollup-plugin-summary';
import {terser} from 'rollup-plugin-terser';
import resolve from '@rollup/plugin-node-resolve';
import replace from '@rollup/plugin-replace';
export default {
input: 'my-element.js',
output: {
file: 'my-element.bundled.js',
format: 'esm',
},
onwarn(warning) {
if (warning.code !== 'THIS_IS_UNDEFINED') {
console.error(`(!) ${warning.message}`);
}
},
plugins: [
replace({'Reflect.decorate': 'undefined'}),
resolve(),
terser({
ecma: 2017,
module: true,
warnings: true,
mangle: {
properties: {
regex: /^__/,
},
},
}),
summary(),
],
};
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
src/my-element.ts | TypeScript | /**
* @license
* Copyright 2019 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
import {LitElement, html, css} from 'lit';
import {customElement, property} from 'lit/decorators.js';
/**
* An example element.
*
* @slot - This element has a slot
* @csspart button - The button
*/
@customElement('my-element')
export class MyElement extends LitElement {
static styles = css`
:host {
display: block;
border: solid 1px gray;
padding: 16px;
max-width: 800px;
}
`;
/**
* The name to say "Hello" to.
*/
@property()
name = 'World';
/**
* The number of times the button has been clicked.
*/
@property({type: Number})
count = 0;
render() {
return html`
<h1>Hello, ${this.name}!</h1>
<button @click=${this._onClick} part="button">
Click Count: ${this.count}
</button>
<slot></slot>
`;
}
private _onClick() {
this.count++;
}
foo(): string {
return 'foo';
}
}
declare global {
interface HTMLElementTagNameMap {
'my-element': MyElement;
}
}
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
src/test/my-element_test.ts | TypeScript | /**
* @license
* Copyright 2021 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
import {MyElement} from '../my-element.js';
import {fixture, html} from '@open-wc/testing';
const assert = chai.assert;
suite('my-element', () => {
test('is defined', () => {
const el = document.createElement('my-element');
assert.instanceOf(el, MyElement);
});
test('renders with default values', async () => {
const el = await fixture(html`<my-element></my-element>`);
assert.shadowDom.equal(
el,
`
<h1>Hello, World!</h1>
<button part="button">Click Count: 0</button>
<slot></slot>
`
);
});
test('renders with a set name', async () => {
const el = await fixture(html`<my-element name="Test"></my-element>`);
assert.shadowDom.equal(
el,
`
<h1>Hello, Test!</h1>
<button part="button">Click Count: 0</button>
<slot></slot>
`
);
});
test('handles a click', async () => {
const el = (await fixture(html`<my-element></my-element>`)) as MyElement;
const button = el.shadowRoot!.querySelector('button')!;
button.click();
await el.updateComplete;
assert.shadowDom.equal(
el,
`
<h1>Hello, World!</h1>
<button part="button">Click Count: 1</button>
<slot></slot>
`
);
});
test('styling applied', async () => {
const el = (await fixture(html`<my-element></my-element>`)) as MyElement;
await el.updateComplete;
assert.equal(getComputedStyle(el).paddingTop, '16px');
});
});
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
web-dev-server.config.js | JavaScript | /**
* @license
* Copyright 2021 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
import {legacyPlugin} from '@web/dev-server-legacy';
export default {
nodeResolve: true,
preserveSymlinks: true,
plugins: [
legacyPlugin({
polyfills: {
// Manually imported in index.html file
webcomponents: false,
},
}),
],
};
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
web-test-runner.config.js | JavaScript | /**
* @license
* Copyright 2021 Google LLC
* SPDX-License-Identifier: BSD-3-Clause
*/
import {legacyPlugin} from '@web/dev-server-legacy';
import {playwrightLauncher} from '@web/test-runner-playwright';
// Uncomment for testing on Sauce Labs
// Must run `npm i --save-dev @web/test-runner-saucelabs` and set
// SAUCE_USERNAME and SAUCE_USERNAME environment variables
// ===========
// import {createSauceLabsLauncher} from '@web/test-runner-saucelabs';
// const sauceLabsLauncher = createSauceLabsLauncher(
// {
// user: process.env.SAUCE_USERNAME,
// key: process.env.SAUCE_USERNAME,
// },
// {
// 'sauce:options': {
// name: 'unit tests',
// build: `${process.env.GITHUB_REF ?? 'local'} build ${
// process.env.GITHUB_RUN_NUMBER ?? ''
// }`,
// },
// }
// );
// Uncomment for testing on BrowserStack
// Must run `npm i --save-dev @web/test-runner-browserstack` and set
// BROWSER_STACK_USERNAME and BROWSER_STACK_ACCESS_KEY environment variables
// ===========
// import {browserstackLauncher as createBrowserstackLauncher} from '@web/test-runner-browserstack';
// const browserstackLauncher = (config) => createBrowserstackLauncher({
// capabilities: {
// 'browserstack.user': process.env.BROWSER_STACK_USERNAME,
// 'browserstack.key': process.env.BROWSER_STACK_ACCESS_KEY,
// project: 'my-element',
// name: 'unit tests',
// build: `${process.env.GITHUB_REF ?? 'local'} build ${
// process.env.GITHUB_RUN_NUMBER ?? ''
// }`,
// ...config,
// }
// });
const browsers = {
// Local browser testing via playwright
// ===========
chromium: playwrightLauncher({product: 'chromium'}),
firefox: playwrightLauncher({product: 'firefox'}),
webkit: playwrightLauncher({product: 'webkit'}),
// Uncomment example launchers for running on Sauce Labs
// ===========
// chromium: sauceLabsLauncher({browserName: 'chrome', browserVersion: 'latest', platformName: 'Windows 10'}),
// firefox: sauceLabsLauncher({browserName: 'firefox', browserVersion: 'latest', platformName: 'Windows 10'}),
// edge: sauceLabsLauncher({browserName: 'MicrosoftEdge', browserVersion: 'latest', platformName: 'Windows 10'}),
// ie11: sauceLabsLauncher({browserName: 'internet explorer', browserVersion: '11.0', platformName: 'Windows 10'}),
// safari: sauceLabsLauncher({browserName: 'safari', browserVersion: 'latest', platformName: 'macOS 10.15'}),
// Uncomment example launchers for running on Sauce Labs
// ===========
// chromium: browserstackLauncher({browserName: 'Chrome', os: 'Windows', os_version: '10'}),
// firefox: browserstackLauncher({browserName: 'Firefox', os: 'Windows', os_version: '10'}),
// edge: browserstackLauncher({browserName: 'MicrosoftEdge', os: 'Windows', os_version: '10'}),
// ie11: browserstackLauncher({browserName: 'IE', browser_version: '11.0', os: 'Windows', os_version: '10'}),
// safari: browserstackLauncher({browserName: 'Safari', browser_version: '14.0', os: 'OS X', os_version: 'Big Sur'}),
};
// Prepend BROWSERS=x,y to `npm run test` to run a subset of browsers
// e.g. `BROWSERS=chromium,firefox npm run test`
const noBrowser = (b) => {
throw new Error(`No browser configured named '${b}'; using defaults`);
};
let commandLineBrowsers;
try {
commandLineBrowsers = process.env.BROWSERS?.split(',').map(
(b) => browsers[b] ?? noBrowser(b)
);
} catch (e) {
console.warn(e);
}
// https://modern-web.dev/docs/test-runner/cli-and-configuration/
export default {
rootDir: '.',
files: ['./test/**/*_test.js'],
nodeResolve: true,
preserveSymlinks: true,
browsers: commandLineBrowsers ?? Object.values(browsers),
testFramework: {
// https://mochajs.org/api/mocha
config: {
ui: 'tdd',
},
},
plugins: [
// Detect browsers without modules (e.g. IE11) and transform to SystemJS
// (https://modern-web.dev/docs/dev-server/plugins/legacy/).
legacyPlugin({
polyfills: {
webcomponents: true,
// Inject lit's polyfill-support module into test files, which is required
// for interfacing with the webcomponents polyfills
custom: [
{
name: 'lit-polyfill-support',
path: 'node_modules/lit/polyfill-support.js',
test:
"!('attachShadow' in Element.prototype) || !('getRootNode' in Element.prototype) || window.ShadyDOM && window.ShadyDOM.force",
module: false,
},
],
},
}),
],
};
| xiekw2010/lit-component-play | 0 | lit component play | JavaScript | xiekw2010 | David Tse | Alipay |
controllers/api/looks.js | JavaScript | /*!
* mojing - controllers/task.js
* Copyright(c) 2014 ju.taobao.com
* Author: jianhui.fjh <jianhui.fjh@alibaba-inc.com>
*/
'use strict';
exports.allLooks = function* () {
this.body = {
"hi": 'xiekw'
}
};
| xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
crawler.js | JavaScript | /**
* Created by xiekaiwei on 16/6/23.
*/
"use strict";
const Crawler = require('simplecrawler');
const cheerio = require('cheerio');
const fs = require('fs');
const path = require('path');
const redis = require('./storage/redis');
const TEMP_HOT_PATH = 'http://www.wxcha.com/biaoqing/hot_';
const TEMP_RECENT_PATH = 'http://www.wxcha.com/biaoqing/update_';
function crawl() {
crawlWXCHA(TEMP_HOT_PATH, 5, path.join(__dirname, '/test/fixtures/wx_hot_mocks.json'));
crawlWXCHA(TEMP_RECENT_PATH, 5, path.join(__dirname, '/test/fixtures/wx_recent_mocks.json'));
}
function crawlWXCHA(base, range, outputs) {
let WXCHA_CRAWLEDURLS = new Set();
function crawledURLS(base, range) {
const end = range;
let arr = [];
for (var i = 1; i < end; i++) {
arr.push(i);
}
return arr.map(p => base + p + '.html');
}
function condition(parsedURL, queueItem) {
return parsedURL.path.match(/^\/biaoqing\/\d+.html$/i)
}
function crawler(url, condition) {
return new Promise((resolve, reject) => {
const crawler = Crawler.crawl(url);
crawler.maxDepth = 2;
let result = [];
crawler.on("fetchcomplete", function(queueItem, responseBuffer, response) {
console.log("I just received %s (%d bytes)", queueItem.url, responseBuffer.length);
console.log("It was a resource of type %s", response.headers['content-type']);
let ctn = this.wait();
discoveryHotLinks(queueItem.url, responseBuffer, function(res) {
result = result.concat(res);
ctn();
});
});
crawler.on('complete', () => resolve(result));
crawler.addFetchCondition(condition);
crawler.start();
})
}
function discoveryHotLinks(url, data, fn) {
let res = [];
if (WXCHA_CRAWLEDURLS.has(url)) {
fn && fn(res);
return;
}
WXCHA_CRAWLEDURLS.add(url);
const titleClass = 'div.h1_tit';
const desc = 'div.daoyubox';
const tupian = 'ul.tupian3_ul';
const $ = cheerio.load(data.toString('utf8'));
const title = $(titleClass).find('h1').text();
if (title) {
const descText = $(desc).find('p').text();
const pics = $(tupian).children().find('img').get().map(p => p.attribs['data-original']);
res.push({
title: title,
desc: descText,
pics: pics,
fromURL: url
})
}
fn && fn(res);
}
const hotURLs = crawledURLS(base, range);
const allCrawlHots = hotURLs.map(p => crawler(p, condition));
Promise.all(allCrawlHots)
.then(res => {
console.log('crawl done');
const result = res.reduce((a, b) => a.concat(b));
fs.writeFileSync(outputs, JSON.stringify(result));
})
.catch(err => console.log('load hot failed', err))
}
module.exports = {
crawl: crawl
}; | xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
index.js | JavaScript | 'use strict';
const koa = require('koa');
const logger = require('koa-logger');
const onerror = require('koa-onerror');
const routes = require('./routes');
const crawler = require('./crawler');
require('./storage/redis');
const app = koa();
// middlewares
app.use(logger());
onerror(app);
routes(app);
// listen
app.listen(3000);
console.log('listening on port 3000');
crawler.crawl();
setInterval(crawler.crawl, 12 * 60 * 60 * 1000);
| xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
localStart.sh | Shell | #!/usr/bin/env bash
/Users/xiekaiwei/Downloads/redis-3.2.1/src/redis-server
| xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
routes.js | JavaScript | /*!
* mojing - routes.js
* Copyright(c) 2014 ju.taobao.com
* Author: jianhui.fjh <jianhui.fjh@alibaba-inc.com>
*/
'use strict';
/**
* Module dependencies.
*/
const route = require('koa-route');
const looks = require('./controllers/api/looks');
module.exports = function (app) {
app.use(route.get('/api/allLooks', looks.allLooks));
};
| xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
storage/redis.js | JavaScript | /**
* Created by xiekaiwei on 16/6/24.
*/
"use strict";
var redis = require("redis");
// if you'd like to select database 3, instead of 0 (default), call
// client.select(3, function() { /* ... */ });
var client = redis.createClient();
client.on("error", function (err) {
console.log("Error " + err);
});
//client.set("string key", "string val", redis.print);
//client.hset("hash key", "hashtest 1", "some value", redis.print);
//client.hset(["hash key", "hashtest 2", "some other value"], redis.print);
//client.hkeys("hash key", function (err, replies) {
// console.log(replies.length + " replies:");
// replies.forEach(function (reply, i) {
// console.log(" " + i + ": " + reply);
// });
//});
module.exports = client; | xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
test/index-spec.js | JavaScript | import expect from 'expect.js';
describe('index', () => {
it('normal', () => {
expect(1).be.equal(1);
});
});
| xiekw2010/wechatlook | 0 | A koa crawler for wechat look | JavaScript | xiekw2010 | David Tse | Alipay |
datafeeder.py | Python | import difflib,os
import numpy as np
import scipy.io.wavfile as wav
from tqdm import tqdm
from scipy.fftpack import fft
from random import shuffle
from keras import backend as K
from utils import audio
from utils.mylogger import log
from hparams import hparams as hp
import librosa
class DataFeeder():
'''
属性:
wav_lst : ['data_aishell/wav/dev/S0724/BAC009S0724W0121.wav' , ...]
pny_lst : [['guang3','zhou1','shi4','fang2','di4','chan3','zhong1','jie4','xie2','hui4','fen1','xi1'] , ...]
han_lst : ['广州市房地产中介协会分析']
am_vocab : ['_', 'yi1', ...]
pny_vocab : ['<PAD>', 'yi1', ...]
han_vocab : ['<PAD>', '一', ...]
'''
def __init__(self, args):
self.data_type = args.data_type # train test dev
self.data_path = args.data_path # 存放数据的顶层目录
self.thchs30 = args.thchs30 # 是否使用thchs30
self.aishell = args.aishell
self.prime = args.prime
self.stcmd = args.stcmd
self.data_length = args.data_length # 使用多少数据训练 None表示全部
self.batch_size = args.batch_size # batch大小
self.shuffle = args.shuffle # 是否打乱训练数据
self.feature_type = args.feature_type
self.AM = args.AM
self.LM = args.LM
self.source_init()
def source_init(self):
print('get source list...')
read_files = []
if self.data_type == 'train':
if self.thchs30 == True:
read_files.append('thchs_train.txt')
if self.aishell == True:
read_files.append('aishell_train.txt')
if self.prime == True:
read_files.append('prime.txt')
if self.stcmd == True:
read_files.append('stcmd.txt')
elif self.data_type == 'dev':
if self.thchs30 == True:
read_files.append('thchs_dev.txt')
if self.aishell == True:
read_files.append('aishell_dev.txt')
elif self.data_type == 'test':
if self.thchs30 == True:
read_files.append('thchs_test.txt')
if self.aishell == True:
read_files.append('aishell_test.txt')
self.wav_lst = []
self.pny_lst = []
self.han_lst = []
for file in read_files:
print('load ', file, ' data...')
sub_file = 'datasets/' + file
with open(sub_file, 'r', encoding='utf8') as f:
data = f.readlines()
for line in tqdm(data):
wav_file, pny, han = line.split('\t')
self.wav_lst.append(wav_file)
self.pny_lst.append(pny.split(' '))
self.han_lst.append(han.strip('\n'))
if self.data_length:
self.wav_lst = self.wav_lst[:self.data_length]
self.pny_lst = self.pny_lst[:self.data_length]
self.han_lst = self.han_lst[:self.data_length]
if self.AM:
print('make am vocab...')
self.am_vocab = self.mk_am_vocab(self.pny_lst)
if self.LM:
print('make lm pinyin vocab...')
self.pny_vocab = self.mk_lm_pny_vocab(self.pny_lst)
print('make lm hanzi vocab...')
self.han_vocab = self.mk_lm_han_vocab(self.han_lst)
def get_am_batch(self):
# shuffle只是对index打乱,没有对原始数据打乱,所以wav_lst[i]和pny_lst[i]还是一一对应的
shuffle_list = [i for i in range(len(self.wav_lst))]
while 1:
if self.shuffle == True:
shuffle(shuffle_list)
# len(self.wav_lst) // self.batch_size的值 表示一个epoch里有多少step才能把所有数据过一遍
for i in range(len(self.wav_lst) // self.batch_size):
wav_data_lst = [] # wav_data_lst里放的是batch_size个频谱图 wav_lst里放的是音频文件地址
label_data_lst = []
begin = i * self.batch_size
end = begin + self.batch_size
sub_list = shuffle_list[begin:end]
for index in sub_list:
# TODO:计算频谱图
if self.feature_type == 'spec':
fbank,n_frames = compute_spec(self.data_path + self.wav_lst[index])
elif self.feature_type == 'mel':
fbank, n_frames = compute_mel2(self.data_path + self.wav_lst[index])
else:
fbank, n_frames = compute_mfcc2(self.data_path + self.wav_lst[index])
# TODO:把语谱图时间维度pad成8的倍数
pad_fbank = np.zeros((fbank.shape[0] // 8 * 8 + 8, fbank.shape[1])) # 保证是8的倍数,因为几层cnn后要求维度能被8整除
pad_fbank[:fbank.shape[0], :] = fbank
label = self.pny2id(self.pny_lst[index], self.am_vocab)
label_ctc_len = self.ctc_len(label)
# 假设num_mel =90 那么最多只能预测十个字的句子?
if pad_fbank.shape[0] // 8 >= label_ctc_len:# ctc要求解码长度必须小于等于原始输入长度
wav_data_lst.append(pad_fbank)
label_data_lst.append(label)
else:
print(self.data_path + self.wav_lst[index])
raise Exception('data not allowed')
# TODO:对语谱图时间维度进行第二次pad,pad成本次batch中最长的长度
pad_wav_data, input_length = self.wav_padding(wav_data_lst)
pad_label_data, label_length = self.label_padding(label_data_lst)
inputs = {'the_inputs': pad_wav_data,
'the_labels': pad_label_data,
'input_length': input_length.reshape(-1,1),
# 注意这个input_length不是原始频谱图的长度而是经过几层cnn后隐层输出的长度(传给ctc)
# 而且是语谱图第一次pad后的长度//8而不是第二次最长pad后//8,尽量保证靠近真实长度//8的值
'label_length': label_length.reshape(-1,1),# batch中的每个句子的真实长度
}
# outputs = {'ctc': np.zeros(pad_wav_data.shape[0], )} # 没看懂为什么有这个
print('genarate one batch mel data')
yield inputs
pass
def get_lm_batch(self):
batch_num = len(self.pny_lst) // self.batch_size
for k in range(batch_num):
begin = k * self.batch_size
end = begin + self.batch_size
input_batch = self.pny_lst[begin:end]
label_batch = self.han_lst[begin:end]
max_len = max([len(line) for line in input_batch])
input_batch = np.array(
[self.pny2id(line, self.pny_vocab) + [0] * (max_len - len(line)) for line in input_batch])
label_batch = np.array(
[self.han2id(line, self.han_vocab) + [0] * (max_len - len(line)) for line in label_batch])
yield (input_batch, label_batch)
def pny2id(self, line, vocab):
ids = []
for pny in line :
if pny in vocab:
ids.append(vocab.index(pny))
else:
if 'UnkTok' in vocab:
ids.append(vocab.index('UnkTok'))
else:
ids.append(vocab.index('_')-1)
return ids
def han2id(self, line, vocab):
ids = []
for han in line:
if han in vocab:
ids.append(vocab.index(han))
else:
ids.append(vocab.index('UnkTok'))
return ids
def wav_padding(self, wav_data_lst):
# wav_data_lst里data是pad_fbank不是原始fbank所以后面//8一定能整除,这个训练的时候为什么不能直接在网络中获取呢??
wav_lens = [len(data) for data in wav_data_lst] # len(data)实际上就是求语谱图的第一维的长度,也就是n_frames
# print(wav_lens)
wav_max_len = max(wav_lens)
wav_lens = np.array([leng // 8 for leng in wav_lens])
new_wav_data_lst = np.zeros((len(wav_data_lst), wav_max_len, hp.num_mels, 1)) # 之前固定200是n-fft,len(wav_data_lst)是batch_size
for i in range(len(wav_data_lst)):
new_wav_data_lst[i, :wav_data_lst[i].shape[0], :, 0] = wav_data_lst[i]
# print('new_wav_data_lst',new_wav_data_lst.shape,wav_lens.shape)
return new_wav_data_lst, wav_lens
def label_padding(self, label_data_lst):
label_lens = np.array([len(label) for label in label_data_lst])
max_label_len = max(label_lens)
new_label_data_lst = np.zeros((len(label_data_lst), max_label_len))
for i in range(len(label_data_lst)):
new_label_data_lst[i][:len(label_data_lst[i])] = label_data_lst[i]
return new_label_data_lst, label_lens
def mk_am_vocab(self, data):
vocab = []
for line in tqdm(data):
line = line
for pny in line:
if pny not in vocab:
vocab.append(pny)
vocab.append('_')
return vocab
def mk_lm_pny_vocab(self, data):
vocab = ['<PAD>']
for line in tqdm(data):
for pny in line:
if pny not in vocab:
vocab.append(pny)
vocab.append('UnkTok')
return vocab
def mk_lm_han_vocab(self, data):
vocab = ['<PAD>']
for line in tqdm(data):
line = ''.join(line.split(' '))#能将‘你好吗’直接拆成三个字
for han in line:
if han not in vocab:
vocab.append(han)
vocab.append('UnkTok')
return vocab
def ctc_len(self, label):
add_len = 0
label_len = len(label)
for i in range(label_len - 1):
if label[i] == label[i + 1]:
add_len += 1 # 这里+1是因为ctc会在重复字符之间填充
return label_len + add_len
class DataFeeder_wavnet(DataFeeder):
def __init__(self,args):
super().__init__(args)
def get_am_batch(self):
shuffle_list = [i for i in range(len(self.wav_lst))]
while 1:
if self.shuffle == True:
shuffle(shuffle_list)
# len(self.wav_lst) // self.batch_size的值 表示一个epoch里有多少step才能把所有数据过一遍
for i in range(len(self.wav_lst) // self.batch_size):
wav_data_lst = [] # wav_data_lst里放的是batch_size个频谱图 wav_lst里放的是音频文件地址
label_data_lst = []
begin = i * self.batch_size
end = begin + self.batch_size
sub_list = shuffle_list[begin:end]
for index in sub_list:
# TODO:计算频谱图
if self.feature_type == 'spec':
fbank, n_frames = compute_spec(self.data_path + self.wav_lst[index])
elif self.feature_type == 'mel':
fbank, n_frames = compute_mel(self.data_path + self.wav_lst[index])
else:
fbank, n_frames = compute_mfcc2(self.data_path + self.wav_lst[index])
pad_fbank = fbank
label = self.pny2id(self.pny_lst[index], self.am_vocab)
label_ctc_len = self.ctc_len(label)
# 假设num_mel =90 那么最多只能预测十个字的句子?
if pad_fbank.shape[0] >= label_ctc_len: # ctc要求解码长度必须小于等于原始输入长度
wav_data_lst.append(pad_fbank)
label_data_lst.append(label)
else:
print(self.data_path + self.wav_lst[index])
raise Exception('data not allowed')
# TODO:对mfcc时间维度进行pad,pad成本次batch中最长的长度
pad_wav_data, input_length = self.wav_padding(wav_data_lst)
pad_label_data, label_length = self.label_padding(label_data_lst)
inputs = {'the_inputs': pad_wav_data,
'the_labels': pad_label_data,
'input_length': input_length.reshape(-1, 1),
# 注意这个input_length不是原始频谱图的长度而是经过几层cnn后隐层输出的长度(传给ctc)
# 而且是语谱图第一次pad后的长度//8而不是第二次最长pad后//8,尽量保证靠近真实长度//8的值
'label_length': label_length.reshape(-1, 1), # batch中的每个句子的真实长度
}
# outputs = {'ctc': np.zeros(pad_wav_data.shape[0], )} # 没看懂为什么有这个
print('genarate one batch mfcc data')
yield inputs
pass
def wav_padding(self, wav_data_lst):
wav_lens = np.asarray([len(data) for data in wav_data_lst]) # len(data)实际上就是求语谱图的第一维的长度,也就是n_frames
wav_max_len = max(wav_lens)
new_wav_data_lst = np.zeros(
(len(wav_data_lst), wav_max_len, hp.num_mfccs)) # 之前固定200是n-fft,len(wav_data_lst)是batch_size
for i in range(len(wav_data_lst)):
new_wav_data_lst[i, :wav_data_lst[i].shape[0], :] = wav_data_lst[i]
return new_wav_data_lst, wav_lens
class DataFeeder_transformer(DataFeeder):
def __init__(self,args):
super().__init__(args)
def mk_lm_han_vocab(self,data):
vocab = ['<PAD>','<GO>','<EOS>']
for line in tqdm(data):
line = ''.join(line.split(' ')) # 能将‘你好吗’直接拆成三个字
for han in line:
if han not in vocab:
vocab.append(han)
return vocab
def get_lm_batch(self):
encoder_inputs = [[self.pny_vocab.index(word) for word in line] for line in self.pny_lst]
decoder_inputs = [[self.han_vocab.index('<GO>')] + [self.han_vocab.index(word) for word in ''.join(line.split(' '))] for line in self.han_lst]
decoder_targets = [[self.han_vocab.index(word) for word in ''.join(line.split(' '))] + [self.han_vocab.index('<EOS>')] for line in self.han_lst]
batch_num = len(encoder_inputs) // self.batch_size
for k in range(batch_num):
begin = k * self.batch_size
end = begin + self.batch_size
en_input_batch = encoder_inputs[begin:end]
de_input_batch = decoder_inputs[begin:end]
de_label_batch = decoder_targets[begin:end]
max_en_len = max([len(line) for line in en_input_batch])
max_de_len = max([len(line) for line in de_input_batch])
en_input_batch = np.array(
[line + [0] * (max_en_len - len(line)) for line in en_input_batch]
)
de_input_batch = np.array(
[line + [0] * (max_de_len - len(line)) for line in de_input_batch]
)
de_label_batch = np.array(
[line + [0] * (max_de_len - len(line)) for line in de_label_batch]
)
yield en_input_batch, de_input_batch, de_label_batch
def compute_mfcc2(file):
wav = audio.load_wav(file)
# mfcc = p.mfcc(wav,numcep=hp.num_mfccs) # n_frames*n_mfcc
mfcc = librosa.feature.mfcc(wav,sr=hp.sample_rate,n_mfcc=hp.num_mfccs) # n_mfcc * n_frames
n_frames = mfcc.shape[1]
return (mfcc.T,n_frames)
def compute_mfcc(file):
wav = audio.load_wav(file)
mfcc = audio.mfcc(wav).astype(np.float32)
n_frames = mfcc.shape[1]
return (mfcc.T,n_frames)
def compute_mel2(file):
wav = audio.load_wav(file)
# mel = audio.melspectrogram(wav).astype(np.float32)
mel = librosa.feature.melspectrogram(wav,sr=hp.sample_rate,n_mels=hp.num_mels,hop_length=256) # [shape=(n_mels, t)]
n_frames = mel.shape[1]
return (mel.T,n_frames)
def compute_mel(file):
wav = audio.load_wav(file)
mel = audio.melspectrogram(wav).astype(np.float32)
n_frames = mel.shape[1]
return (mel.T,n_frames)
def compute_spec(file):
wav = audio.load_wav(file) # np.ndarray [shape=(n,) or (2, n)] 2表示双声道
spectrogram = audio.spectrogram(wav).astype(np.float32) # np.ndarray [shape=(num_freq, num_frames), dtype=float32]
n_frames = spectrogram.shape[1]
return (spectrogram.T, n_frames) # 注意转置后[shape=( num_frames, num_freq), dtype=float32]
# # 获取信号的时频图
# def compute_fbank(file):
# x = np.linspace(0, 400 - 1, 400, dtype=np.int64)
# w = 0.54 - 0.46 * np.cos(2 * np.pi * (x) / (400 - 1)) # 汉明窗
# fs, wavsignal = wav.read(file) # fs是采样率
# # wav波形 加时间窗以及时移10ms
# time_window = 25 # 单位ms 假设采样率16000那么采样点就是400,对应下面p_end = p_start + 400
# preemphasis = 0.97
# wav_arr = np.array(wavsignal)
# range0_end = int(len(wavsignal) / fs * 1000 - time_window) // 10 + 1 # 计算循环终止的位置,也就是最终生成的窗数
# data_input = np.zeros((range0_end, 200), dtype=np.float) # 用于存放最终的频率特征数据
# data_line = np.zeros((1, 400), dtype=np.float)
# for i in range(0, range0_end):
# p_start = i * 160 # 窗移是每次160个点,也就是10ms
# p_end = p_start + 400 # 400就是n-fft,如果采样率是16000那么n-fft就等于窗长
# data_line = wav_arr[p_start:p_end]
# data_line = data_line * w # 加窗
# data_line = np.abs(fft(data_line))
# data_input[i] = data_line[0:200] # 设置为400除以2的值(即200)是取一半数据,因为是对称的
# data_input = np.log(data_input + 1)
# # data_input = data_input[::]
# # TODO:增加norm和预加重
# return data_input # [shape=( num_frames, num_freq), dtype=float32]
# word error rate------------------------------------
def GetEditDistance(str1, str2):
leven_cost = 0
s = difflib.SequenceMatcher(None, str1, str2)
for tag, i1, i2, j1, j2 in s.get_opcodes():
if tag == 'replace':
leven_cost += max(i2-i1, j2-j1)
elif tag == 'insert':
leven_cost += (j2-j1)
elif tag == 'delete':
leven_cost += (i2-i1)
return leven_cost
# 定义解码器------------------------------------
def decode_ctc(num_result, num2word):
result = num_result[:, :, :]
in_len = np.zeros((1), dtype = np.int32)
in_len[0] = result.shape[1]
r = K.ctc_decode(result, in_len, greedy = True, beam_width=10, top_paths=1)
r1 = K.get_value(r[0][0])
r1 = r1[0]
text = []
for i in r1:
text.append(num2word[i])
return r1, text
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
hparams.py | Python | from tensorflow.contrib.training.python.training.hparam import HParams
import os
# Default hyperparameters:
hparams = HParams(
# TODO: audio
# num_freq=2048, # 使用librosa默认值 n-fft 256 一般等于窗长(一个窗口有多少个采样点),也就是16000/1000*25ms
num_mels=40, # 通常设为 20-40
num_mfccs = 26,#一般至少39啊
# 如果使用快速傅里叶变换(fft)需要保证窗长是2的倍数,上面400会自动扩展为512
# max_num_frames = None, # max_num_frames
sample_rate =16000,
# frame_length_ms=25, # 25 50 使用librosa默认值
# frame_shift_ms=10, # 10 12.5 使用librosa默认值
preemphasis=0.97,
min_level_db=-100,
ref_level_db=20,
# griffin_lim_iters=60,
# power=1.5, # Power to raise magnitudes to prior to Griffin-Lim
# TODO: am
AM =True,
data_type='train', # train test dev
feature_type = 'mel', # spec, mel, mfcc
data_path= os.path.expanduser('~/corpus_zn/'), # wav文件顶层目录
thchs30=True, # 是否使用thchs30数据
aishell=True,
prime=False,
stcmd=False,
data_length=None, # 总共使用条语音数据来训练,None表示全部
shuffle=True,
wavnet_filters = 192,
# TODO: lm
LM = True,
num_heads = 8, # 多头注意力机制
max_seq_length = 100,
num_blocks = 6,
# vocab
input_vocab_size = None, # 数据中有多少不同发音,一般读取所有train.txt获得
label_vocab_size = None, # 数据中有多少不同字,一般读取所有train.txt获得
# embedding size
word_embedding_size=None,
transformer_dropout = 0.1,
transformer_depth = None,
embedding_dropout = 0.6,
l2_reg_penalty = 1e-6,
confidence_penalty_weight = 0.1,
hidden_units = 512,
# TODO: training
is_training = True, # 注意测试的时候或者inference的时候设置为False
wavenet_filters = 192, # 128
batch_size=64,#32,
adam_beta1=0.9,
adam_beta2=0.999,
initial_learning_rate=0.002,#0.004, # 0.001
steps_per_epoch = None, # 283600//16 (283600是总音频数,16为batch_size)
decay_learning_rate=False,
max_iters=100,
final_output_dim = 1292
)
def hparams_debug_string(hpa):
values = hpa.values()
hp = [' %s: %s' % (name, values[name]) for name in sorted(values)]
return 'Hyperparameters:\n' + '\n'.join(hp) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/ASR_DFCNN.py | Python | import tensorflow as tf
from .modules import Conv2dBlockWithMaxPool,post_net
from utils.mylogger import log
from keras import backend as K
from tensorflow.python.ops import ctc_ops as ctc
from .base_model import _learning_rate_decay,batch_wer,Base_Model
class ASR(Base_Model):
def __init__(self,hparams,name='ASR'):
super().__init__(hparams,name=name)
self._hparams = hparams
self.name = name
def build_graph(self):
'''
placeholders:
inputs : [batch_size, max_frames_in_current_batch, n-fft, 1]
labels : [batch_size, max_length_in_current_batch]
label_lengths : [batch_size]
input_lengths : [batch_size]
'''
with tf.variable_scope('NET') as scope:
with tf.variable_scope('NET_Input') as scope:
self.inputs = tf.placeholder(tf.float32,[None, None, self._hparams.num_mels, 1], 'spec_inputs')
is_training = self._hparams.is_training
# print(tf.shape(inputs))
block = Conv2dBlockWithMaxPool(num_conv=2)
# TODO: Net architecture
# block1 [batch_size, max_frames_in_current_batch/2, n-fft/2, 32]
block1 = block(inputs=self.inputs,kernal_size=3,channels=32,pool_size=2,is_training=is_training,scope='block1',dropout=0.05)
# block2 [batch_size, max_frames_in_current_batch/4, n-fft/4, 64]
block2 = block(block1, 3, 64, 2, is_training, scope='block2', dropout=0.1)
# block3 [batch_size, max_frames_in_current_batch/8, n-fft/8, 128]
block3 = block(block2, 3, 128, 2, is_training, scope='block3', dropout=0.15)
# block4 [batch_size, max_frames_in_current_batch/8, n-fft/8, 128]
block4 = block(block3, 3, 128, 1, is_training, scope='block4', dropout=0.2)
# block5 [batch_size, max_frames_in_current_batch/8, n-fft/8 = 50, 128]
block5 = block(block4, 3, 128, 1, is_training, scope='block5', dropout=None)
# post-net [batch_size, max_frames_in_current_batch/8, hparams.final_output_dim ]
y_pred = post_net(block5,is_training)
self.y_pred2 = tf.argmax(y_pred,2)
# TODO: Set attr and log info
self.pred_logits = tf.identity(y_pred,name='pred_logits')
log('Initialized ASR model. Dimensions: ')
log(' block1: ' + ''.join(str(block1.shape)))
log(' block2: ' + ''.join(str(block2.shape)))
log(' block3: ' + ''.join(str(block3.shape)))
log(' block4: ' + ''.join(str(block4.shape)))
log(' block5: ' + ''.join(str(block5.shape)))
log(' postnet out: ' + ''.join(str(y_pred.shape)))
def add_loss(self):
with tf.variable_scope('loss') as scope:
with tf.variable_scope('CTC_Input') as scope:
self.labels = tf.placeholder(tf.int32, [None, None], 'labels')
self.input_lengths = tf.placeholder(tf.int32, [None, 1], 'input_lengths') # 刚开始忘了写 1,表示batch_size*1的tensor
self.label_lengths = tf.placeholder(tf.int32, [None, 1], 'label_lengths')
# input_length实际上就是y_pred的中间那个维度,为什么不能直接取出来呢?可能构建图的时候不是定值(input的帧数那个维度是None,所以这里直接用的话也是没有值
# 但是如果decode的时候实际可以通过直接取出来做的,因为decode传入的input是个实值不是placeholer那种tensor了
# 注意input_length必须小于等于真实的label_length
self.ctc_loss = K.ctc_batch_cost(y_true=self.labels,y_pred=self.pred_logits
,input_length=self.input_lengths,label_length=self.label_lengths)
self.batch_loss = tf.reduce_mean(self.ctc_loss,name='batch_loss')
# 直接使用tf.nn.ctc_loss 需要传入的labels是sparsetensor,使用keras的ctc会帮助你将dense转成sparse
# self.ctc_loss = tf.nn.ctc_loss(labels=self.labels,inputs=self.pred_labels,
# sequence_length=self.label_lengths,time_major=False)
# return [batch_size] 其中值为概率 P(Y | X)
def add_optimizer(self, global_step):
'''Adds optimizer. Sets "gradients" and "optimize" fields. add_loss must have been called.
Args:
global_step: int32 scalar Tensor representing current global step in training
'''
with tf.variable_scope('optimizer') as scope:
hp = self._hparams
# TODO: Learning rate decay
if hp.decay_learning_rate:
self.learning_rate = _learning_rate_decay(hp.initial_learning_rate, global_step)
# self.learning_rate = tf.train.exponential_decay(hp.initial_learning_rate,global_step,hp.steps_per_epoch//4, 0.96, staircase=False)
else:
self.learning_rate = tf.convert_to_tensor(hp.initial_learning_rate)
# TODO: Set optimizer
optimizer = tf.train.AdamOptimizer(self.learning_rate,hp.adam_beta1,hp.adam_beta2)
# optimizer = tf.train.AdadeltaOptimizer()
# TODO: Compute gradients
gradients, variables = zip(*optimizer.compute_gradients(self.ctc_loss))
'''
optimizer.minimize()
This method simply combines calls `compute_gradients()` and
`apply_gradients()`. If you want to process the gradient before applying
them call `compute_gradients()` and `apply_gradients()` explicitly instead
of using this function.
'''
self.gradients = gradients
# TODO: Clip gradients
clipped_gradients, _ = tf.clip_by_global_norm(gradients, 1.0)
# TODO: Apply gradients
# Add dependency on UPDATE_OPS; otherwise batchnorm won't work correctly. See:
# https://github.com/tensorflow/tensorflow/issues/1122
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
self.optimize = optimizer.apply_gradients(zip(clipped_gradients, variables),
global_step=global_step)
def add_decoder(self):
'''Adds ctc_decoder to the model. Sets "decode" field. add_loss must have been called.'''
with tf.variable_scope('decode') as scope:
# input_length实际上可以直接取pred_labels的第一维?表示有多少帧
self.decoded, self.log_probabilities = K.ctc_decode(y_pred=self.pred_logits,input_length=tf.squeeze(self.input_lengths)) # input_length=不能从logits里直接获得
self.decoded2 = tf.squeeze(tf.identity(self.decoded,name='decoded_labels'))
self.WER = batch_wer(self.labels,self.decoded2,self.input_lengths,self.label_lengths)
# 如果没有这一句,那么tf.saved_model.utils.build_tensor_info(model.decoded)这里会报错
# AttributeError: 'list' object has no attribute 'dtype',因为decoded是一个tensor的list不是一个tensor
# 这里用identity转成tensor
# self.decoded, self.log_probabilities = tf.nn.ctc_beam_search_decoder(
# inputs=tf.transpose(self.pred_labels,perm=[1,0,2]),
# sequence_length=tf.reshape(tensor=self.label_lengths,shape=[-1]))
# 注意这里self.label_lengths按照ctc_batch_loss的要求在placeholder里设置成了[None,1]二维tensor
# 而在ctc_beam_search_decoder中需要reshape成[None]
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/ASR_transformer.py | Python | import tensorflow as tf
from keras import regularizers
from keras.layers import Softmax
# noinspection PyPep8Naming
from keras import backend as K
from utils.mylogger import log
from .base_model import Base_Model,_learning_rate_decay
from modulesLib.extras import ReusableEmbedding, TiedOutputEmbedding
from modulesLib.position import TransformerCoordinateEmbedding
from modulesLib.transformer import TransformerACT, TransformerBlock
class ASR_transformer(Base_Model):
"""
A model which is similar to the one described by OpenAI in paper
"Improving Language Understanding by Generative Pre-Training", except
that it relies L2 regularization of the word embedding matrix
(instead of the dropout), and uses Universal Transformer architecture.
"""
def __init__(self,hparams,name='ASR_transformer'):
super().__init__(hparams,name)
def build_graph(self):
hp = self._hparams
with tf.variable_scope('NET') as scope:
with tf.variable_scope('NET_input') as scopes:
self.word_ids = tf.placeholder(dtype=tf.int32,shape=[None,hp.max_seq_length,],name='input_word_id')
# TODO: init
l2_regularizer = (regularizers.l2(hp.l2_reg_penalty) if hp.l2_reg_penalty
else None)
embedding_layer = ReusableEmbedding(
hp.input_vocab_size, hp.word_embedding_size,
input_length=hp.max_seq_length,
name='bpe_embeddings',
# Regularization is based on paper "A Comparative Study on
# Regularization Strategies for Embedding-based Neural Networks"
# https://arxiv.org/pdf/1508.03721.pdf
embeddings_regularizer=l2_regularizer)
output_layer = TiedOutputEmbedding(
projection_regularizer=l2_regularizer,
projection_dropout=hp.embedding_dropout,
name='word_prediction_logits')
coordinate_embedding_layer = TransformerCoordinateEmbedding(
hp.transformer_depth,
name='coordinate_embedding')
transformer_act_layer = TransformerACT(name='adaptive_computation_time')
transformer_block = TransformerBlock(
name='transformer', num_heads=hp.num_heads,
residual_dropout=hp.transformer_dropout,
attention_dropout=hp.transformer_dropout,
use_masking=True, vanilla_wiring=False)
output_softmax_layer = Softmax(name='word_predictions')
# TODO: call
next_step_input, embedding_matrix = embedding_layer(self.word_ids)
act_output = next_step_input
for i in range(hp.transformer_depth):
next_step_input = coordinate_embedding_layer(next_step_input, step=i)
next_step_input = transformer_block(next_step_input)
next_step_input, act_output = transformer_act_layer(next_step_input)
transformer_act_layer.finalize()
next_step_input = act_output
word_predictions = output_softmax_layer(
output_layer([next_step_input, embedding_matrix]))
self.output_softmax = tf.identity(word_predictions,name='output_softmax')
def add_loss(self):
hp = self._hparams
with tf.variable_scope('loss') as scope:
# Penalty for confidence of the output distribution, as described in
# "Regularizing Neural Networks by Penalizing Confident
# Output Distributions" (https://arxiv.org/abs/1701.06548)
self.confidence_penalty = K.mean(
hp.confidence_penalty_weight *
K.sum(self.output_softmax * K.log(self.output_softmax), axis=-1))
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/ASR_transformer2.py | Python | from utils.mylogger import log
from .base_model import Base_Model,_learning_rate_decay
from .modules import normalize,embedding,multihead_attention,feedforward,label_smoothing
import tensorflow as tf
class ASR_transformer2(Base_Model):
def __init__(self,hparams,name='ASR_transformer2',is_training = True):
super().__init__(hparams,name)
self.is_training = is_training
def build_graph(self):
hp = self._hparams
with tf.variable_scope("NET") :
with tf.variable_scope("NET_input") :
# input placeholder
self.x = tf.placeholder(tf.int32, shape=(None, None))
self.y = tf.placeholder(tf.int32, shape=(None, None))
self.de_inp = tf.placeholder(tf.int32, shape=(None, None))
# Encoder
with tf.variable_scope("encoder"):
# embedding
self.en_emb = embedding(self.x, vocab_size=hp.input_vocab_size, num_units=hp.hidden_units,
scale=True, scope="enc_embed")
self.enc = self.en_emb + embedding(
tf.tile(tf.expand_dims(tf.range(tf.shape(self.x)[1]), 0), [tf.shape(self.x)[0], 1]),
vocab_size=hp.max_seq_length, num_units=hp.hidden_units, zero_pad=False, scale=False,
scope="enc_pe")
## Dropout
self.enc = tf.layers.dropout(self.enc,
rate=hp.embedding_ropout,
training=tf.convert_to_tensor(self.is_training))
## Blocks
for i in range(hp.num_blocks):
with tf.variable_scope("num_blocks_{}".format(i)):
### Multihead Attention
self.enc = multihead_attention(key_emb=self.en_emb,
que_emb=self.en_emb,
queries=self.enc,
keys=self.enc,
num_units=hp.hidden_units,
num_heads=hp.num_heads,
dropout_rate=hp.transformer_dropout,
is_training=self.is_training,
causality=False)
### Feed Forward
self.enc = feedforward(self.enc, num_units=[4 * hp.hidden_units, hp.hidden_units])
# Decoder
with tf.variable_scope("decoder"):
# embedding
self.de_emb = embedding(self.de_inp, vocab_size=hp.label_vocab_size, num_units=hp.hidden_units,
scale=True, scope="dec_embed")
self.dec = self.de_emb + embedding(
tf.tile(tf.expand_dims(tf.range(tf.shape(self.de_inp)[1]), 0), [tf.shape(self.de_inp)[0], 1]),
vocab_size=hp.max_length, num_units=hp.hidden_units, zero_pad=False, scale=False,
scope="dec_pe")
## Dropout
self.dec = tf.layers.dropout(self.dec,
rate=hp.embedding_dropout,
training=tf.convert_to_tensor(self.is_training))
## Multihead Attention ( self-attention)
for i in range(hp.num_blocks):
with tf.variable_scope("num_blocks_{}".format(i)):
### Multihead Attention
self.dec = multihead_attention(key_emb=self.de_emb,
que_emb=self.de_emb,
queries=self.dec,
keys=self.dec,
num_units=hp.hidden_units,
num_heads=hp.num_heads,
dropout_rate=hp.transformer_dropout,
is_training=self.is_training,
causality=True,
scope='self_attention')
## Multihead Attention ( vanilla attention)
for i in range(hp.num_blocks):
with tf.variable_scope("num_blocks_{}".format(i)):
### Multihead Attention
self.dec = multihead_attention(key_emb=self.en_emb,
que_emb=self.de_emb,
queries=self.dec,
keys=self.enc,
num_units=hp.hidden_units,
num_heads=hp.num_heads,
dropout_rate=hp.transformer_dropout,
is_training=self.is_training,
causality=True,
scope='vanilla_attention')
### Feed Forward
self.outputs = feedforward(self.dec, num_units=[4 * hp.hidden_units, hp.hidden_units])
# Final linear projection
self.logits = tf.layers.dense(self.outputs, hp.label_vocab_size)
self.preds = tf.to_int32(tf.argmax(self.logits, axis=-1))
self.istarget = tf.to_float(tf.not_equal(self.y, 0))
self.acc = tf.reduce_sum(tf.to_float(tf.equal(self.preds, self.y)) * self.istarget) / (
tf.reduce_sum(self.istarget))
def add_loss(self):
hp = self._hparams
with tf.variable_scope('loss'):
# Loss
self.y_smoothed = label_smoothing(tf.one_hot(self.y, depth=hp.label_vocab_size))
self.loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits, labels=self.y_smoothed)
self.mean_loss = tf.reduce_sum(self.loss * self.istarget) / (tf.reduce_sum(self.istarget))
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/ASR_transformer_encoder.py | Python | from utils.mylogger import log
from .base_model import Base_Model,_learning_rate_decay
from .modules import normalize,embedding,multihead_attention,feedforward,label_smoothing
import tensorflow as tf
class ASR_transformer_encoder(Base_Model):
def __init__(self,hparams,name='ASR_transformer_encoder',is_training = True):
super().__init__(hparams,name)
self.is_training = is_training
def build_graph(self):
hp = self._hparams
with tf.variable_scope("NET") :
with tf.variable_scope("NET_input") :
# input placeholder
self.x = tf.placeholder(tf.int32, shape=(None, None))
self.y = tf.placeholder(tf.int32, shape=(None, None))
# Encoder
with tf.variable_scope("encoder"):
# embedding
self.en_emb = embedding(self.x, vocab_size=hp.input_vocab_size, num_units=hp.hidden_units,
scale=True, scope="enc_embed")
self.enc = self.en_emb + embedding(
tf.tile(tf.expand_dims(tf.range(tf.shape(self.x)[1]), 0), [tf.shape(self.x)[0], 1]),
vocab_size=hp.max_seq_length, num_units=hp.hidden_units, zero_pad=False, scale=False,
scope="enc_pe")
## Dropout
self.enc = tf.layers.dropout(self.enc,
rate=hp.embedding_dropout,
training=tf.convert_to_tensor(self.is_training))
## Blocks
for i in range(hp.num_blocks):
with tf.variable_scope("num_blocks_{}".format(i)):
### Multihead Attention
self.enc = multihead_attention(key_emb=self.en_emb,
que_emb=self.en_emb,
queries=self.enc,
keys=self.enc,
num_units=hp.hidden_units,
num_heads=hp.num_heads,
dropout_rate=hp.transformer_dropout,
is_training=self.is_training,
causality=False)
### Feed Forward
self.outputs = feedforward(self.enc, num_units=[4 * hp.hidden_units, hp.hidden_units])
# Final linear projection
self.logits = tf.layers.dense(self.outputs, hp.label_vocab_size)
self.preds = tf.to_int32(tf.argmax(self.logits, axis=-1))
self.istarget = tf.to_float(tf.not_equal(self.y, 0))
self.acc = tf.reduce_sum(tf.to_float(tf.equal(self.preds, self.y)) * self.istarget) / (
tf.reduce_sum(self.istarget))
def add_loss(self):
hp = self._hparams
with tf.variable_scope('loss'):
# Loss
self.y_smoothed = label_smoothing(tf.one_hot(self.y, depth=hp.label_vocab_size))
self.loss = tf.nn.softmax_cross_entropy_with_logits_v2(logits=self.logits, labels=self.y_smoothed)
self.mean_loss = tf.reduce_sum(self.loss * self.istarget) / (tf.reduce_sum(self.istarget))
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/ASR_wavnet.py | Python | import tensorflow as tf
from .modules import casual_layer,post,res_block,dilated_stack
from utils.mylogger import log
from keras import backend as K
from .base_model import _learning_rate_decay,batch_wer
from tensorflow.python.ops import ctc_ops as ctc
class ASR_wavnet(object):
def __init__(self,hparams,name=None):
super().__init__()
self._hparams = hparams
self.name = name
def build_graph(self):
'''
placeholders:
inputs : [batch_size, max_frames_in_current_batch, n-mfcc]
labels : [batch_size, max_length_in_current_batch]
label_lengths : [batch_size]
input_lengths : [batch_size]
'''
with tf.variable_scope('NET') as scope:
with tf.variable_scope('NET_Input') as scope:
self.inputs = tf.placeholder(tf.float32,[None, None, self._hparams.num_mfccs], 'mfcc_inputs')
is_training = self._hparams.is_training
dim = self._hparams.wavnet_filters
output_size = self._hparams.final_output_dim
# TODO: Net architecture
# casual [batch_size, max_frames_in_current_batch, dim]
casual = casual_layer(inputs=self.inputs,filters=dim,
is_training=is_training)
# skip_tensor [batch_size, max_frames_in_current_batch,dim]
skip_tensor = dilated_stack(inputs=casual,num_blocks=3,
is_training=is_training,dim=dim)
# logits [batch_size, max_frames_in_current_batch, dim]
# pred [batch_size, max_frames_in_current_batch, output_size]
logits, pred = post(inputs=skip_tensor,dim=dim,
is_training=is_training,output_size=output_size)
# TODO: Set attr and log info
self.pred_logits = tf.identity(logits,name='pred_logits')
self.pred_softmax = tf.identity(pred,name = 'pred_softmax')
self.pred_labels = tf.identity(tf.argmax(pred,2),name='pred_labels')
log('Initialized ASR_wavnet model. Dimensions: ')
log(' casual: ' + ''.join(str(casual.shape)))
log(' skip_tensor: ' + ''.join(str(skip_tensor.shape)))
log(' logits: ' + ''.join(str(logits.shape)))
log(' pred: ' + ''.join(str(pred.shape)))
def add_loss(self):
with tf.variable_scope('loss') as scope:
with tf.variable_scope('CTC_Input') as scope:
self.labels = tf.placeholder(tf.int32, [None, None], 'labels')
self.input_lengths = tf.placeholder(tf.int32, [None, 1], 'input_lengths') # 刚开始忘了写 1,表示batch_size*1的tensor
self.label_lengths = tf.placeholder(tf.int32, [None, 1], 'label_lengths')
self.ctc_loss = K.ctc_batch_cost(y_true=self.labels,y_pred=self.pred_softmax
,input_length=self.input_lengths,label_length=self.label_lengths)
self.batch_loss = tf.reduce_mean(self.ctc_loss,name='batch_loss')
def add_optimizer(self, global_step):
'''Adds optimizer. Sets "gradients" and "optimize" fields. add_loss must have been called.
Args:
global_step: int32 scalar Tensor representing current global step in training
'''
with tf.variable_scope('optimizer') as scope:
hp = self._hparams
# TODO: Learning rate decay
if hp.decay_learning_rate:
self.learning_rate = _learning_rate_decay(hp.initial_learning_rate, global_step)
# self.learning_rate = tf.train.exponential_decay(hp.initial_learning_rate,global_step,hp.steps_per_epoch//4, 0.96, staircase=False)
else:
self.learning_rate = tf.convert_to_tensor(hp.initial_learning_rate)
# TODO: Set optimizer
optimizer = tf.train.AdamOptimizer(self.learning_rate, hp.adam_beta1, hp.adam_beta2)
# TODO: Compute gradients
gradients, variables = zip(*optimizer.compute_gradients(self.ctc_loss))
self.gradients = gradients
# TODO: Clip gradients
clipped_gradients, _ = tf.clip_by_global_norm(gradients, 1.0)
# TODO: Apply gradients
# Add dependency on UPDATE_OPS; otherwise batchnorm won't work correctly. See:
# https://github.com/tensorflow/tensorflow/issues/1122
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
self.optimize = optimizer.apply_gradients(zip(clipped_gradients, variables),
global_step=global_step)
def add_decoder(self):
'''Adds ctc_decoder to the model. Sets "decode" field. add_loss must have been called.'''
with tf.variable_scope('decode') as scope:
self.decoded, self.log_probabilities = K.ctc_decode(y_pred=self.pred_softmax,input_length=tf.squeeze(self.input_lengths,squeeze_dims=-1)) # input_length=不能从logits里直接获得
self.decoded1 = tf.convert_to_tensor(self.decoded,dtype=tf.int32,name='decoded_labels')
self.decoded2 = tf.squeeze(tf.identity(self.decoded),squeeze_dims=0)
self.WER = batch_wer(self.labels,self.decoded2,self.input_lengths,self.label_lengths) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/__init__.py | Python | from .ASR_DFCNN import ASR
from .ASR_wavnet import ASR_wavnet
from .ASR_transformer2 import ASR_transformer2
from .ASR_transformer_encoder import ASR_transformer_encoder
def create_model(name, hparams,is_training =True):
if name == 'ASR':
return ASR(hparams,name=name)
elif name == 'ASR_wavnet':
return ASR_wavnet(hparams,name=name)
elif name == 'ASR_transformer2':
return ASR_transformer2(hparams,name=name,is_training =is_training)
elif name == 'ASR_transformer_encoder':
return ASR_transformer_encoder(hparams,name=name,is_training =is_training)
else:
raise Exception('Unknown model: ' + name) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/base_model.py | Python | import tensorflow as tf
from keras import backend as K
class Base_Model(object):
def __init__(self,hparams,name=None):
super().__init__()
self._hparams = hparams
self.name = name
def build_graph(self):
raise NotImplementedError()
def add_loss(self):
raise NotImplementedError()
def add_optimizer(self, global_step,loss):
'''Adds optimizer. Sets "gradients" and "optimize" fields. add_loss must have been called.
Args:
global_step: int32 scalar Tensor representing current global step in training
'''
with tf.variable_scope('optimizer') as scope:
hp = self._hparams
# TODO: Learning rate decay
if hp.decay_learning_rate:
self.learning_rate = _learning_rate_decay(hp.initial_learning_rate, global_step)
# self.learning_rate = tf.train.exponential_decay(hp.initial_learning_rate,global_step,hp.steps_per_epoch//4, 0.96, staircase=False)
else:
self.learning_rate = tf.convert_to_tensor(hp.initial_learning_rate)
# TODO: Set optimizer
optimizer = tf.train.AdamOptimizer(self.learning_rate, hp.adam_beta1, hp.adam_beta2)
# TODO: Compute gradients
gradients, variables = zip(*optimizer.compute_gradients(loss))
self.gradients = gradients
# TODO: Clip gradients
clipped_gradients, _ = tf.clip_by_global_norm(gradients, 1.0)
# TODO: Apply gradients
# Add dependency on UPDATE_OPS; otherwise batchnorm won't work correctly. See:
# https://github.com/tensorflow/tensorflow/issues/1122
with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
self.optimize = optimizer.apply_gradients(zip(clipped_gradients, variables),
global_step=global_step)
def _learning_rate_decay(init_lr, global_step):
# Noam scheme from tensor2tensor:
warmup_steps = 4000.0
step = tf.cast(global_step + 1, dtype=tf.float32)
return init_lr * warmup_steps**0.5 * tf.minimum(step * warmup_steps**-1.5, step**-0.5)
def batch_wer(y_true, y_pred, input_length, label_length):
"""Runs CTC loss algorithm on each batch element.
# Arguments
y_true: tensor `(samples, max_string_length)`
containing the truth labels.
y_pred: tensor `(samples, time_steps, num_categories)` (samples, max_string_length)
containing the prediction, or output of the softmax.
input_length: tensor `(samples, 1)` containing the sequence length for
each batch item in `y_pred`.
label_length: tensor `(samples, 1)` containing the sequence length for
each batch item in `y_true`.
# Returns
"""
label_length = tf.to_int32(tf.squeeze(label_length, axis=-1))
input_length = tf.to_int32(tf.squeeze(input_length, axis=-1))
sparse_labels = tf.to_int32(K.ctc_label_dense_to_sparse(y_true, label_length))
sparse_pred = tf.to_int32(K.ctc_label_dense_to_sparse(y_pred, input_length))
WER = tf.reduce_mean(tf.edit_distance(sparse_pred,sparse_labels,normalize=True))
return WER | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modules.py | Python | import tensorflow as tf
from six.moves import xrange
from hparams import hparams
################################ DFCnn ###################################################
class Conv2dBlockWithMaxPool(object):
"""Conv2d Block
The output is max_pooled along time.
"""
def __init__(self,num_conv,activation = tf.nn.relu):
self.__num_conv = num_conv
self.__activation = activation
@property
def num_conv(self):
return self.__num_conv
@property
def activation(self):
return self.__activation
def __call__(self,inputs,kernal_size,channels,pool_size,is_training,activation=None,scope=None,dropout=None):
"""
Args:
inputs: with shape -> (batch_size, n-frames, n-fft)
"""
if activation is not None:
self.__activation = activation
with tf.variable_scope(scope or type(self).__name__):
for index in xrange(1,self.__num_conv+1):
with tf.variable_scope('inner_conv_%d' % index):
conv_k = tf.layers.conv2d(inputs=inputs,filters=channels,kernel_size=kernal_size,
padding='same',activation=self.activation,kernel_initializer='he_normal')
norm_k = tf.layers.batch_normalization(inputs=conv_k,training=is_training)
inputs = tf.identity(input=norm_k)
maxpool_output = tf.layers.max_pooling2d(
inputs=norm_k,
pool_size=pool_size,
strides=pool_size,
padding='valid',
name='max_pool'
)
if dropout is not None:
maxpool_output = tf.layers.dropout(inputs=maxpool_output,rate=dropout,training=is_training,name='dropout')
return maxpool_output
def post_net(inputs,is_training):
with tf.variable_scope('post_net'):
# 如果直接从inputs.shape[0]取出来batch-size会报错:(但是后面两个维度是固定的,可以直接取出)
# TypeError: Failed to convert object of type <class 'list'> to Tensor. Contents: [Dimension(None), -1, Dimension(4096)]. Consider casting elements to a supported type.
# 因为此时还没初始化,所以第零维度是None,现在只能把超参传进去
x = tf.reshape(tensor=inputs, shape=[hparams.batch_size, -1 ,inputs.shape[-1]*inputs.shape[-2]]) # [batch_size, ?, 5, 128] --> [batch_size, ?, 5*128]
x = tf.layers.dropout(x,0.3,training=is_training)
x = tf.layers.dense(inputs=x,units=128,activation=tf.nn.relu,use_bias=True,kernel_initializer="he_normal") # [batch_size, ?, 128]
x = tf.layers.dropout(x, 0.3, training=is_training)
x = tf.layers.dense(inputs=x, units=hparams.final_output_dim, activation=tf.nn.relu, use_bias=True, kernel_initializer="he_normal") # [batch_size, ?, final_output_dim]
# 这里?是指T_out(number of steps in the output time series) 和 n-frames(也就是T_in)不一定相同
# y_pred = x
y_pred = tf.nn.softmax(logits=x,axis=-1) # tf.nn.ctc_loss说输入值不要softmax,但是keras的ctc应该要激活??
return y_pred
################################ WavNet ################################################
initializer = tf.contrib.layers.xavier_initializer()
def casual_layer(inputs,filters,is_training):
with tf.variable_scope('casual_layer'):
x = tf.layers.conv1d(inputs,filters=filters,kernel_size=1,
padding='same',name='casual_conv')
x = tf.layers.batch_normalization(x,-1,training=is_training,name='casual_conv_bn')
x = tf.keras.layers.Activation("tanh")(x)
return x
def res_block(inputs,size,rate,block,dim,is_training):
with tf.variable_scope('block_%d_%d'%(block, rate)):
conv_filter = tf.layers.conv1d(inputs,filters=dim,kernel_size=size,padding='same',
dilation_rate=rate,name='conv_filter')
conv_filter = tf.layers.batch_normalization(conv_filter,training=is_training,name='conv_filter_bn',axis=-1)
conv_filter = tf.keras.layers.Activation("tanh")(conv_filter)
# keras.layers.Conv1D()
conv_gate = tf.layers.conv1d(inputs,filters=dim,kernel_size=size,padding='same',
dilation_rate=rate,name='conv_gate')
conv_gate = tf.layers.batch_normalization(conv_gate, training=is_training, name='conv_gate_bn')
conv_gate = tf.keras.layers.Activation('sigmoid')(conv_gate)
out = tf.multiply(conv_filter,conv_gate,name='out')
conv_out = tf.layers.conv1d(out,filters=dim,kernel_size=1,padding='same',
name='conv_out')
conv_out = tf.layers.batch_normalization(conv_out, training=is_training, name='conv_out_bn')
conv_out = tf.keras.layers.Activation('tanh')(conv_out)
residual = tf.add(inputs,conv_out,name='residual_out')
return residual,conv_out
def dilated_stack(inputs,num_blocks,is_training,dim):
with tf.variable_scope('dilated_stack'):
skip = []
res = tf.identity(inputs,name='res_input')
for i in range(num_blocks):
for r in [1,2,4,8,16]:
res, s = res_block(res,size=7,rate=r,block=i,is_training=is_training,dim=dim)
skip.append(s)
ret = tf.keras.layers.Add()([s for s in skip])
return ret
def post(inputs,dim,is_training,output_size):
with tf.variable_scope('post_process'):
logits = tf.layers.conv1d(inputs,filters=dim,kernel_size=1,
padding='same',name='logits')
logits = tf.layers.batch_normalization(logits,training=is_training,name='logits_bn')
logits = tf.keras.layers.Activation('tanh')(logits)
y_pred = tf.layers.conv1d(logits,filters=output_size,kernel_size=1,kernel_regularizer=tf.keras.regularizers.l2(0.2),
padding='same',activation='softmax',name='y_pred')
return logits, y_pred
############################## Transformer ############################################
def normalize(inputs,
epsilon = 1e-8,
scope="ln",
reuse=None):
'''Applies layer normalization.
Args:
inputs: A tensor with 2 or more dimensions, where the first dimension has
`batch_size`.
epsilon: A floating number. A very small number for preventing ZeroDivision Error.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A tensor with the same shape and data dtype as `inputs`.
'''
with tf.variable_scope(scope, reuse=reuse):
inputs_shape = inputs.get_shape()
params_shape = inputs_shape[-1:]
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
beta= tf.Variable(tf.zeros(params_shape))
gamma = tf.Variable(tf.ones(params_shape))
normalized = (inputs - mean) / ( (variance + epsilon) ** (.5) )
outputs = gamma * normalized + beta
return outputs
# TODO: 使用自己训练的emb而不是原文的pos emb
def embedding(inputs,
vocab_size,
num_units,
zero_pad=True,
scale=True,
scope="embedding",
reuse=None):
'''Embeds a given tensor.
Args:
inputs: A `Tensor` with type `int32` or `int64` containing the ids
to be looked up in `lookup table`.
vocab_size: An int. Vocabulary size.
num_units: An int. Number of embedding hidden units.
zero_pad: A boolean. If True, all the values of the fist row (id 0)
should be constant zeros.
scale: A boolean. If True. the outputs is multiplied by sqrt num_units.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A `Tensor` with one more rank than inputs's. The last dimensionality
should be `num_units`.
For example,
```
import tensorflow as tf
inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))
outputs = embedding(inputs, 6, 2, zero_pad=True)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print sess.run(outputs)
>>
[[[ 0. 0. ]
[ 0.09754146 0.67385566]
[ 0.37864095 -0.35689294]]
[[-1.01329422 -1.09939694]
[ 0.7521342 0.38203377]
[-0.04973143 -0.06210355]]]
```
```
import tensorflow as tf
inputs = tf.to_int32(tf.reshape(tf.range(2*3), (2, 3)))
outputs = embedding(inputs, 6, 2, zero_pad=False)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
print sess.run(outputs)
>>
[[[-0.19172323 -0.39159766]
[-0.43212751 -0.66207761]
[ 1.03452027 -0.26704335]]
[[-0.11634696 -0.35983452]
[ 0.50208133 0.53509563]
[ 1.22204471 -0.96587461]]]
```
'''
with tf.variable_scope(scope, reuse=reuse):
lookup_table = tf.get_variable('lookup_table',
dtype=tf.float32,
shape=[vocab_size, num_units],
initializer=tf.contrib.layers.xavier_initializer())
if zero_pad:
lookup_table = tf.concat((tf.zeros(shape=[1, num_units]),
lookup_table[1:, :]), 0)
outputs = tf.nn.embedding_lookup(lookup_table, inputs)
if scale:
outputs = outputs * (num_units ** 0.5)
return outputs
def multihead_attention(key_emb,
que_emb,
queries,
keys,
num_units=None,
num_heads=8,
dropout_rate=0,
is_training=True,
causality=False,
scope="multihead_attention",
reuse=None):
'''Applies multihead attention.
Args:
queries: A 3d tensor with shape of [N, T_q, C_q].
keys: A 3d tensor with shape of [N, T_k, C_k].
num_units: A scalar. Attention size.
dropout_rate: A floating point number.
is_training: Boolean. Controller of mechanism for dropout.
causality: Boolean. If true, units that reference the future are masked.
num_heads: An int. Number of heads.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns
A 3d tensor with shape of (N, T_q, C)
'''
with tf.variable_scope(scope, reuse=reuse):
# Set the fall back option for num_units
if num_units is None:
num_units = queries.get_shape().as_list[-1]
# Linear projections
Q = tf.layers.dense(queries, num_units, activation=tf.nn.relu) # (N, T_q, C)
K = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)
V = tf.layers.dense(keys, num_units, activation=tf.nn.relu) # (N, T_k, C)
# Split and concat
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), axis=0) # (h*N, T_q, C/h)
K_ = tf.concat(tf.split(K, num_heads, axis=2), axis=0) # (h*N, T_k, C/h)
V_ = tf.concat(tf.split(V, num_heads, axis=2), axis=0) # (h*N, T_k, C/h)
# Multiplication
outputs = tf.matmul(Q_, tf.transpose(K_, [0, 2, 1])) # (h*N, T_q, T_k)
# Scale
outputs = outputs / (K_.get_shape().as_list()[-1] ** 0.5)
# Key Masking
key_masks = tf.sign(tf.abs(tf.reduce_sum(key_emb, axis=-1))) # (N, T_k)
key_masks = tf.tile(key_masks, [num_heads, 1]) # (h*N, T_k)
key_masks = tf.tile(tf.expand_dims(key_masks, 1), [1, tf.shape(queries)[1], 1]) # (h*N, T_q, T_k)
paddings = tf.ones_like(outputs) * (-2 ** 32 + 1)
outputs = tf.where(tf.equal(key_masks, 0), paddings, outputs) # (h*N, T_q, T_k)
# Causality = Future blinding
if causality:
diag_vals = tf.ones_like(outputs[0, :, :]) # (T_q, T_k)
tril = tf.linalg.LinearOperatorLowerTriangular(diag_vals).to_dense() # (T_q, T_k)
masks = tf.tile(tf.expand_dims(tril, 0), [tf.shape(outputs)[0], 1, 1]) # (h*N, T_q, T_k)
paddings = tf.ones_like(masks) * (-2 ** 32 + 1)
outputs = tf.where(tf.equal(masks, 0), paddings, outputs) # (h*N, T_q, T_k)
# Activation
outputs = tf.nn.softmax(outputs) # (h*N, T_q, T_k)
# Query Masking
query_masks = tf.sign(tf.abs(tf.reduce_sum(que_emb, axis=-1))) # (N, T_q)
query_masks = tf.tile(query_masks, [num_heads, 1]) # (h*N, T_q)
query_masks = tf.tile(tf.expand_dims(query_masks, -1), [1, 1, tf.shape(keys)[1]]) # (h*N, T_q, T_k)
outputs *= query_masks # broadcasting. (N, T_q, C)
# Dropouts
outputs = tf.layers.dropout(outputs, rate=dropout_rate, training=tf.convert_to_tensor(is_training))
# Weighted sum
outputs = tf.matmul(outputs, V_) # ( h*N, T_q, C/h)
# Restore shape
outputs = tf.concat(tf.split(outputs, num_heads, axis=0), axis=2) # (N, T_q, C)
# Residual connection
outputs += queries
# Normalize
outputs = normalize(outputs) # (N, T_q, C)
return outputs
def feedforward(inputs,
num_units=[2048, 512],
scope="multihead_attention",
reuse=None):
'''Point-wise feed forward net.
Args:
inputs: A 3d tensor with shape of [N, T, C].
num_units: A list of two integers.
scope: Optional scope for `variable_scope`.
reuse: Boolean, whether to reuse the weights of a previous layer
by the same name.
Returns:
A 3d tensor with the same shape and dtype as inputs
'''
with tf.variable_scope(scope, reuse=reuse):
# Inner layer
params = {"inputs": inputs, "filters": num_units[0], "kernel_size": 1,
"activation": tf.nn.relu, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Readout layer
params = {"inputs": outputs, "filters": num_units[1], "kernel_size": 1,
"activation": None, "use_bias": True}
outputs = tf.layers.conv1d(**params)
# Residual connection
outputs += inputs
# Normalize
outputs = normalize(outputs)
return outputs
# TODO: label_smoothing 是对ground truth做的不是对logits做的
def label_smoothing(inputs, epsilon=0.1):
'''Applies label smoothing. See https://arxiv.org/abs/1512.00567.
Args:
inputs: A 3d tensor with shape of [N, T, V], where V is the number of vocabulary.
epsilon: Smoothing rate.
For example,
```
import tensorflow as tf
inputs = tf.convert_to_tensor([[[0, 0, 1],
[0, 1, 0],
[1, 0, 0]],
[[1, 0, 0],
[1, 0, 0],
[0, 1, 0]]], tf.float32)
outputs = label_smoothing(inputs)
with tf.Session() as sess:
print(sess.run([outputs]))
>>
[array([[[ 0.03333334, 0.03333334, 0.93333334],
[ 0.03333334, 0.93333334, 0.03333334],
[ 0.93333334, 0.03333334, 0.03333334]],
[[ 0.93333334, 0.03333334, 0.03333334],
[ 0.93333334, 0.03333334, 0.03333334],
[ 0.03333334, 0.93333334, 0.03333334]]], dtype=float32)]
```
'''
K = inputs.get_shape().as_list()[-1] # number of channels
return ((1 - epsilon) * inputs) + (epsilon / K)
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modulesLib/attention.py | Python | import numpy as np
# noinspection PyPep8Naming
from keras import backend as K
from keras.engine import Layer
from keras.utils import get_custom_objects
class _BaseMultiHeadAttention(Layer):
"""
Base class for two types of Multi-head attention layers:
Self-attention and its more general form used in decoders (the one which
takes values and keys from the encoder).
"""
def __init__(self, num_heads: int, use_masking: bool,
dropout: float = 0.0,
compression_window_size: int = None,
**kwargs):
"""
:param num_heads: number of attention heads
:param use_masking: when True, forbids the attention to see the further
elements in the sequence (particularly important in language
modelling).
:param dropout: dropout that should be applied to the attention
(after the softmax).
:param compression_window_size: an integer value >= 1 controlling
how much we should compress the attention. For more details,
read about memory-compressed self-attention in
"Generating Wikipedia by summarizing long sequences"
(https://arxiv.org/pdf/1801.10198.pdf).
:param kwargs: any extra arguments typical for a Keras layer,
such as name, etc.
"""
self.num_heads = num_heads
self.use_masking = use_masking
self.dropout = dropout
if (compression_window_size is not None
and compression_window_size <= 0):
assert ValueError(
"Too small compression window ({%d})" % compression_window_size)
self.compression_window_size = compression_window_size
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['num_heads'] = self.num_heads
config['use_masking'] = self.use_masking
config['dropout'] = self.dropout
config['compression_window_size'] = self.compression_window_size
return config
# noinspection PyAttributeOutsideInit
def build_output_params(self, d_model):
self.output_weights = self.add_weight(
name='output_weights',
shape=(d_model, d_model),
initializer='glorot_uniform',
trainable=True)
if self.compression_window_size is not None:
self.k_conv_kernel = self.add_weight(
name='k_conv_kernel',
shape=(self.compression_window_size,
d_model // self.num_heads,
d_model // self.num_heads),
initializer='glorot_uniform',
trainable=True)
self.k_conv_bias = self.add_weight(
name='k_conv_bias',
shape=(d_model // self.num_heads,),
initializer='zeros',
trainable=True)
self.v_conv_kernel = self.add_weight(
name='v_conv_kernel',
shape=(self.compression_window_size,
d_model // self.num_heads,
d_model // self.num_heads),
initializer='glorot_uniform',
trainable=True)
self.v_conv_bias = self.add_weight(
name='v_conv_bias',
shape=(d_model // self.num_heads,),
initializer='zeros',
trainable=True)
def validate_model_dimensionality(self, d_model: int):
if d_model % self.num_heads != 0:
raise ValueError(
'The size of the last dimension of the input '
'( %d ) must be evenly divisible by the number'
'of the attention heads {%d}' % (d_model,self.num_heads))
def attention(self, pre_q, pre_v, pre_k, out_seq_len: int, d_model: int,
training=None):
"""
Calculates the output of the attention once the affine transformations
of the inputs are done. Here's the shapes of the arguments:
:param pre_q: (batch_size, q_seq_len, num_heads, d_model // num_heads)
:param pre_v: (batch_size, v_seq_len, num_heads, d_model // num_heads)
:param pre_k: (batch_size, k_seq_len, num_heads, d_model // num_heads)
:param out_seq_len: the length of the output sequence
:param d_model: dimensionality of the model (by the paper)
:param training: Passed by Keras. Should not be defined manually.
Optional scalar tensor indicating if we're in training
or inference phase.
"""
# shaping Q and V into (batch_size, num_heads, seq_len, d_model//heads)
q = K.permute_dimensions(pre_q, [0, 2, 1, 3])
v = K.permute_dimensions(pre_v, [0, 2, 1, 3])
if self.compression_window_size is None:
k_transposed = K.permute_dimensions(pre_k, [0, 2, 3, 1])
else:
# Memory-compressed attention described in paper
# "Generating Wikipedia by Summarizing Long Sequences"
# (https://arxiv.org/pdf/1801.10198.pdf)
# It compresses keys and values using 1D-convolution which reduces
# the size of Q * K_transposed from roughly seq_len^2
# to convoluted_seq_len^2. If we use strided convolution with
# window size = 3 and stride = 3, memory requirements of such
# memory-compressed attention will be 9 times smaller than
# that of the original version.
if self.use_masking:
raise NotImplementedError(
"Masked memory-compressed attention has not "
"been implemented yet")
k = K.permute_dimensions(pre_k, [0, 2, 1, 3])
k, v = [
K.reshape(
# Step 3: Return the result to its original dimensions
# (batch_size, num_heads, seq_len, d_model//heads)
K.bias_add(
# Step 3: ... and add bias
K.conv1d(
# Step 2: we "compress" K and V using strided conv
K.reshape(
# Step 1: we reshape K and V to
# (batch + num_heads, seq_len, d_model//heads)
item,
(-1,
K.int_shape(item)[-2],
d_model // self.num_heads)),
kernel,
strides=self.compression_window_size,
padding='valid', data_format='channels_last'),
bias,
data_format='channels_last'),
# new shape
K.concatenate([
K.shape(item)[:2],
[-1, d_model // self.num_heads]]))
for item, kernel, bias in (
(k, self.k_conv_kernel, self.k_conv_bias),
(v, self.v_conv_kernel, self.v_conv_bias))]
k_transposed = K.permute_dimensions(k, [0, 1, 3, 2])
# shaping K into (batch_size, num_heads, d_model//heads, seq_len)
# for further matrix multiplication
sqrt_d = K.constant(np.sqrt(d_model // self.num_heads),
dtype=K.floatx())
q_shape = K.int_shape(q)
k_t_shape = K.int_shape(k_transposed)
v_shape = K.int_shape(v)
# before performing batch_dot all tensors are being converted to 3D
# shape (batch_size * num_heads, rows, cols) to make sure batch_dot
# performs identically on all backends
attention_heads = K.reshape(
K.batch_dot(
self.apply_dropout_if_needed(
K.softmax(
self.mask_attention_if_needed(
K.batch_dot(
K.reshape(q, (-1,) + q_shape[-2:]),
K.reshape(k_transposed,
(-1,) + k_t_shape[-2:]))
/ sqrt_d)),
training=training),
K.reshape(v, (-1,) + v_shape[-2:])),
(-1, self.num_heads, q_shape[-2], v_shape[-1]))
attention_heads_merged = K.reshape(
K.permute_dimensions(attention_heads, [0, 2, 1, 3]),
(-1, d_model))
attention_out = K.reshape(
K.dot(attention_heads_merged, self.output_weights),
(-1, out_seq_len, d_model))
return attention_out
def apply_dropout_if_needed(self, attention_softmax, training=None):
if 0.0 < self.dropout < 1.0:
def dropped_softmax():
return K.dropout(attention_softmax, self.dropout)
return K.in_train_phase(dropped_softmax, attention_softmax,
training=training)
return attention_softmax
def mask_attention_if_needed(self, dot_product):
"""
Makes sure that (when enabled) each position
(of a decoder's self-attention) cannot attend to subsequent positions.
This is achieved by assigning -inf (or some large negative number)
to all invalid connections. Later softmax will turn them into zeros.
We need this to guarantee that decoder's predictions are based
on what has happened before the position, not after.
The method does nothing if masking is turned off.
:param dot_product: scaled dot-product of Q and K after reshaping them
to 3D tensors (batch * num_heads, rows, cols)
"""
if not self.use_masking:
return dot_product
last_dims = K.int_shape(dot_product)[-2:]
low_triangle_ones = (
np.tril(np.ones(last_dims))
# to ensure proper broadcasting
.reshape((1,) + last_dims))
inverse_low_triangle = 1 - low_triangle_ones
close_to_negative_inf = -1e9
result = (
K.constant(low_triangle_ones, dtype=K.floatx()) * dot_product +
K.constant(close_to_negative_inf * inverse_low_triangle))
return result
class MultiHeadAttention(_BaseMultiHeadAttention):
"""
Multi-head attention which can use two inputs:
First: from the encoder - it's used to project the keys and the values
Second: from the decoder - used to project the queries.
"""
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
if not (isinstance(input_shape, list) and len(input_shape) == 2):
raise ValueError(
'You must call this layer passing a list of two tensors'
'(for keys/values and queries)')
values_dim, query_dim = input_shape[0][-1], input_shape[1][-1]
if query_dim != values_dim:
raise ValueError(
'Both keys/value and query inputs must be '
'of the same dimensionality, instead of '
'{%d } and {%d}.' %(values_dim,query_dim))
d_model = query_dim
self.validate_model_dimensionality(d_model)
# These weights are concatenated matrices W_k and W_v which
# are, in turn, concatenated W matrices of keys, and values
# for each of the heads. So, essentially it's a concatenation of
# W_k1, W_k2,..., W_kh, W_v1, W_v2,..., W_vh
# for all h heads.
self.kv_weights = self.add_weight(
name='kv_weights', shape=(d_model, d_model * 2),
initializer='glorot_uniform', trainable=True)
self.q_weights = self.add_weight(
name='q_weights', shape=(d_model, d_model),
initializer='glorot_uniform', trainable=True)
self.build_output_params(d_model)
return super().build(input_shape)
def call(self, inputs, **kwargs):
if not (isinstance(inputs, list) and len(inputs) == 2):
raise ValueError(
'You can call this layer only with a list of two tensors '
'(for keys/values and queries)')
key_values_input, query_input = inputs
_, value_seq_len, d_model = K.int_shape(key_values_input)
query_seq_len = K.int_shape(inputs[1])[-2]
# The first thing we need to do is to perform affine transformations
# of the inputs to get the Queries, the Keys and the Values.
kv = K.dot(K.reshape(key_values_input, [-1, d_model]), self.kv_weights)
# splitting the keys, the values and the queries before further
# processing
pre_k, pre_v = [
K.reshape(
# K.slice(kv, (0, i * d_model), (-1, d_model)),
kv[:, i * d_model: (i + 1) * d_model],
(-1, value_seq_len,
self.num_heads, d_model // self.num_heads))
for i in range(2)]
pre_q = K.reshape(
K.dot(K.reshape(query_input, [-1, d_model]), self.q_weights),
(-1, query_seq_len, self.num_heads, d_model // self.num_heads))
return self.attention(pre_q, pre_v, pre_k, query_seq_len, d_model,
training=kwargs.get('training'))
class MultiHeadSelfAttention(_BaseMultiHeadAttention):
"""
Multi-head self-attention for both encoders and decoders.
Uses only one input and has implementation which is better suited for
such use case that more general MultiHeadAttention class.
"""
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
if not isinstance(input_shape, tuple):
raise ValueError('Invalid input')
d_model = input_shape[-1]
self.validate_model_dimensionality(d_model)
# These weights are concatenated matrices W_q, W_k and W_v which
# are, in turn, concatenated W matrices of keys, queries and values
# for each of the heads. So, essentially it's a concatenation of
# W_q1, W_q2,..., W_qh, W_k1, W_k2,..., W_kh, W_v1, W_v2,..., W_vh
# for all h heads.
self.qkv_weights = self.add_weight(
name='qkv_weights',
shape=(d_model, d_model * 3), # * 3 for q, k and v
initializer='glorot_uniform',
trainable=True)
self.build_output_params(d_model)
return super().build(input_shape)
def call(self, inputs, **kwargs):
if not K.is_tensor(inputs):
raise ValueError(
'The layer can be called only with one tensor as an argument')
_, seq_len, d_model = K.int_shape(inputs)
# The first thing we need to do is to perform affine transformations
# of the inputs to get the Queries, the Keys and the Values.
qkv = K.dot(K.reshape(inputs, [-1, d_model]), self.qkv_weights)
# splitting the keys, the values and the queries before further
# processing
pre_q, pre_k, pre_v = [
K.reshape(
# K.slice(qkv, (0, i * d_model), (-1, d_model)),
qkv[:, i * d_model:(i + 1) * d_model],
(-1, seq_len, self.num_heads, d_model // self.num_heads))
for i in range(3)]
attention_out = self.attention(pre_q, pre_v, pre_k, seq_len, d_model,
training=kwargs.get('training'))
return attention_out
def compute_output_shape(self, input_shape):
return input_shape
get_custom_objects().update({
'MultiHeadSelfAttention': MultiHeadSelfAttention,
'MultiHeadAttention': MultiHeadAttention,
})
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modulesLib/bert.py | Python | """
BERT stands for Bidirectional Encoder Representations from Transformers.
It's a way of pre-training Transformer to model a language, described in
paper [BERT: Pre-training of Deep Bidirectional Transformers for
Language Understanding](https://arxiv.org/abs/1810.04805). A quote from it:
> BERT is designed to pre-train deep bidirectional representations
> by jointly conditioning on both left and right context in all layers.
> As a result, the pre-trained BERT representations can be fine-tuned
> with just one additional output layer to create state-of-the art
> models for a wide range of tasks, such as question answering
> and language inference, without substantial task-specific architecture
> modifications.
"""
import random
from itertools import islice, chain
from typing import List, Callable
import numpy as np
# noinspection PyPep8Naming
from keras import backend as K
from keras.utils import get_custom_objects
class BatchGeneratorForBERT:
"""
This class generates batches for a BERT-based language model
in an abstract way, by using an external function sampling
sequences of token IDs of a given length.
"""
reserved_positions = 3
def __init__(self, sampler: Callable[[int], List[int]],
dataset_size: int,
sep_token_id: int,
cls_token_id: int,
mask_token_id: int,
first_normal_token_id: int,
last_normal_token_id: int,
sequence_length: int,
batch_size: int,
sentence_min_span: float = 0.25):
"""
:param sampler: A callable object responsible for uniformly sampling
pieces of the dataset (already turned into token IDs).
It should take one int argument - the sample length, and return
a list of token IDs of the requested size.
:param dataset_size: How big the whole dataset is, measured in number
of token IDs.
:param sep_token_id: ID of a token used as a separator between
the sentences (called "[SEP]" in the paper).
:param cls_token_id: ID of a token marking the node/position
responsible for classification (always the first node).
The token is called "[CLS]" in the original paper.
:param mask_token_id: ID of a token masking the original words
of the sentence, which the network should learn to "restore" using
the context.
:param first_normal_token_id: ID of the first token representing
a normal word/token, not a specialized token, like "[SEP]".
:param last_normal_token_id: ID of the last token representing
a normal word, not a specialized token.
:param sequence_length: a sequence length that can be accepted
by the model being trained / validate.
:param batch_size: how many samples each batch should include.
:param sentence_min_span: A floating number ranging from 0 to 1,
indicating the percentage of words (of the `sequence_length`)
a shortest sentence should occupy. For example,
if the value is 0.25, each sentence will vary in length from 25%
to 75% of the whole `sequence_length` (minus 3 reserved positions
for [CLS] and [SEP] tokens).
"""
self.sampler = sampler
self.steps_per_epoch = (
# We sample the dataset randomly. So we can make only a crude
# estimation of how many steps it should take to cover most of it.
dataset_size // (sequence_length * batch_size))
self.batch_size = batch_size
self.sequence_length = sequence_length
self.sep_token_id = sep_token_id
self.cls_token_id = cls_token_id
self.mask_token_id = mask_token_id
self.first_token_id = first_normal_token_id
self.last_token_id = last_normal_token_id
assert 0.0 < sentence_min_span <= 1.0
self.sentence_min_length = max(
int(sentence_min_span *
(self.sequence_length - self.reserved_positions)),
1)
self.sentence_max_length = (
self.sequence_length
- self.reserved_positions
- self.sentence_min_length)
def generate_batches(self):
"""
Keras-compatible generator of batches for BERT (can be used with
`keras.models.Model.fit_generator`).
Generates tuples of (inputs, targets).
`inputs` is a list of two values:
1. masked_sequence: an integer tensor shaped as
(batch_size, sequence_length), containing token ids of
the input sequence, with some words masked by the [MASK] token.
2. segment id: an integer tensor shaped as
(batch_size, sequence_length),
and containing 0 or 1 depending on which segment (A or B)
each position is related to.
`targets` is also a list of two values:
1. combined_label: an integer tensor of a shape
(batch_size, sequence_length, 2), containing both
- the original token ids
- and the mask (0s and 1s, indicating places where
a word has been replaced).
both stacked along the last dimension.
So combined_label[:, :, 0] would slice only the token ids,
and combined_label[:, :, 1] would slice only the mask.
2. has_next: a float32 tensor (batch_size, 1) containing
1s for all samples where "sentence B" is directly following
the "sentence A", and 0s otherwise.
"""
samples = self.generate_samples()
while True:
next_bunch_of_samples = islice(samples, self.batch_size)
has_next, mask, sequence, segment, masked_sequence = zip(
*list(next_bunch_of_samples))
combined_label = np.stack([sequence, mask], axis=-1)
yield (
[np.array(masked_sequence), np.array(segment)],
[combined_label,
np.expand_dims(np.array(has_next, dtype=np.float32), axis=-1)]
)
def generate_samples(self):
"""
Generates samples, one by one, for later concatenation into batches
by `generate_batches()`.
"""
while True:
# Sentence A has length between 25% and 75% of the whole sequence
a_length = random.randint(
self.sentence_min_length,
self.sentence_max_length)
b_length = (
self.sequence_length - self.reserved_positions - a_length)
# Sampling sentences A and B,
# making sure they follow each other 50% of the time
has_next = random.random() < 0.5
if has_next:
# sentence B is a continuation of A
full_sample = self.sampler(a_length + b_length)
sentence_a = full_sample[:a_length]
sentence_b = full_sample[a_length:]
else:
# sentence B is not a continuation of A
# note that in theory the same or overlapping sentence
# can be selected as B, but it's highly improbable
# and shouldn't affect the performance
sentence_a = self.sampler(a_length)
sentence_b = self.sampler(b_length)
assert len(sentence_a) == a_length
assert len(sentence_b) == b_length
sequence = (
[self.cls_token_id] +
sentence_a + [self.sep_token_id] +
sentence_b + [self.sep_token_id])
masked_sequence = sequence.copy()
output_mask = np.zeros((len(sequence),), dtype=int)
segment_id = np.full((len(sequence),), 1, dtype=int)
segment_id[:a_length + 2] = 0
for word_pos in chain(
range(1, a_length + 1),
range(a_length + 2, a_length + 2 + b_length)):
if random.random() < 0.15:
dice = random.random()
if dice < 0.8:
masked_sequence[word_pos] = self.mask_token_id
elif dice < 0.9:
masked_sequence[word_pos] = random.randint(
self.first_token_id, self.last_token_id)
# else: 10% of the time we just leave the word as is
output_mask[word_pos] = 1
yield (int(has_next), output_mask, sequence,
segment_id, masked_sequence)
def masked_perplexity(y_true, y_pred):
"""
Masked version of popular metric for evaluating performance of
language modelling architectures. It assumes that y_pred has shape
(batch_size, sequence_length, 2), containing both
- the original token ids
- and the mask (0s and 1s, indicating places where
a word has been replaced).
both stacked along the last dimension.
Masked perplexity ignores all but masked words.
More info: http://cs224d.stanford.edu/lecture_notes/LectureNotes4.pdf
"""
y_true_value = y_true[:, :, 0]
mask = y_true[:, :, 1]
cross_entropy = K.sparse_categorical_crossentropy(y_true_value, y_pred)
batch_perplexities = K.exp(
K.sum(mask * cross_entropy, axis=-1) / (K.sum(mask, axis=-1) + 1e-6))
return K.mean(batch_perplexities)
class MaskedPenalizedSparseCategoricalCrossentropy:
"""
Masked cross-entropy (see `masked_perplexity` for more details)
loss function with penalized confidence.
Combines two loss functions: cross-entropy and negative entropy
(weighted by `penalty_weight` parameter), following paper
"Regularizing Neural Networks by Penalizing Confident Output Distributions"
(https://arxiv.org/abs/1701.06548)
how to use:
>>> model.compile(
>>> optimizer,
>>> loss=MaskedPenalizedSparseCategoricalCrossentropy(0.1))
"""
def __init__(self, penalty_weight: float):
self.penalty_weight = penalty_weight
def __call__(self, y_true, y_pred):
y_true_val = y_true[:, :, 0]
mask = y_true[:, :, 1]
# masked per-sample means of each loss
num_items_masked = K.sum(mask, axis=-1) + 1e-6
masked_cross_entropy = (
K.sum(mask * K.sparse_categorical_crossentropy(y_true_val, y_pred),
axis=-1)
/ num_items_masked)
masked_entropy = (
K.sum(mask * -K.sum(y_pred * K.log(y_pred), axis=-1), axis=-1)
/ num_items_masked)
return masked_cross_entropy - self.penalty_weight * masked_entropy
def get_config(self):
return {
'penalty_weight': self.penalty_weight
}
get_custom_objects().update({
'MaskedPenalizedSparseCategoricalCrossentropy':
MaskedPenalizedSparseCategoricalCrossentropy,
'masked_perplexity': masked_perplexity,
})
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modulesLib/extras.py | Python | """
Tools that are not necessary for the Transformer by itself, but might be
useful in building models with it.
"""
import math
from keras import activations, regularizers
# noinspection PyPep8Naming
from keras import backend as K
from keras.engine import Layer
from keras.layers import Embedding
from keras.utils import get_custom_objects
class ReusableEmbedding(Embedding):
"""
A "reusable" form of the Embedding layer, which returns its
full embedding matrix as one of the outputs.
This is necessary to guarantee correct work of Keras when the matrix
is being re-used again in TiedOutputEmbedding layer.
"""
def call(self, inputs, **kwargs):
result = super().call(inputs, **kwargs)
return [result, self.embeddings]
def compute_output_shape(self, input_shape):
return [super().compute_output_shape(input_shape),
K.int_shape(self.embeddings)]
def compute_mask(self, inputs, mask=None):
return [super().compute_mask(inputs, mask), None]
class TiedOutputEmbedding(Layer):
"""
Allows to reuse the same word embedding matrix both for the input and
the output layers of the network.
This is called Weight Tying and is proven to improve performance
of neural network language models, as well as decrease their number
of parameters (eliminating the need for a separate huge matrix
of output weights).
The layers is supposed to be called with two inputs, like
TiedOutputEmbedding()([main_input, embedding_matrix])
where the `main_input` is the output of the previous layer (like LSTM)
and the `embedding_matrix` coming from the `ReusableEmbedding` layer.
https://arxiv.org/abs/1608.05859
https://arxiv.org/abs/1611.01462
https://blog.openai.com/language-unsupervised/
"""
def __init__(self, activation=None,
add_biases=False, projection_regularizer=None,
projection_dropout: float = 0.0,
scaled_attention=False,
**kwargs):
self.activation = activations.get(activation)
self.add_biases = add_biases
self.projection_regularizer = regularizers.get(projection_regularizer)
self.projection_dropout = projection_dropout
self.scaled_attention = scaled_attention
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
return dict(
config,
activation=activations.serialize(self.activation),
add_biases=self.add_biases,
projection_regularizer=regularizers.serialize(
self.projection_regularizer),
projection_dropout=self.projection_dropout,
scaled_attention=self.scaled_attention)
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
main_input_shape, embedding_matrix_shape = input_shape
emb_input_dim, emb_output_dim = embedding_matrix_shape
assert len(main_input_shape) == 3
self.projection = self.add_weight(
name='kernel',
shape=(main_input_shape[-1], emb_output_dim),
initializer='glorot_uniform',
regularizer=self.projection_regularizer,
trainable=True)
if self.add_biases:
self.biases = self.add_weight(
name='biases',
shape=(emb_output_dim,),
initializer='zeros',
trainable=True)
return super().build(input_shape)
def call(self, inputs, **kwargs):
main_input, embedding_matrix = inputs
input_shape_tensor = K.shape(main_input)
last_input_dim = K.int_shape(main_input)[-1]
emb_input_dim, emb_output_dim = K.int_shape(embedding_matrix)
projected = K.dot(K.reshape(main_input, (-1, last_input_dim)),
self.projection)
if self.add_biases:
projected = K.bias_add(projected, self.biases,
data_format='channels_last')
if 0 < self.projection_dropout < 1:
projected = K.in_train_phase(
lambda: K.dropout(projected, self.projection_dropout),
projected,
training=kwargs.get('training'))
attention = K.dot(projected, K.transpose(embedding_matrix))
if self.scaled_attention:
# scaled dot-product attention, described in
# "Attention is all you need" (https://arxiv.org/abs/1706.03762)
sqrt_d = K.constant(math.sqrt(emb_output_dim), dtype=K.floatx())
attention = attention / sqrt_d
result = K.reshape(
self.activation(attention),
(input_shape_tensor[0],
input_shape_tensor[1],
emb_input_dim))
return result
def compute_output_shape(self, input_shape):
main_input_shape, embedding_matrix_shape = input_shape
emb_input_dim, emb_output_dim = embedding_matrix_shape
return main_input_shape[0], main_input_shape[1], emb_input_dim
get_custom_objects().update({
'ReusableEmbedding': ReusableEmbedding,
'TiedOutputEmbedding': TiedOutputEmbedding,
})
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modulesLib/position.py | Python | import numpy as np
# noinspection PyPep8Naming
from keras import backend as K
from keras.engine import Layer
from keras.utils import get_custom_objects
def positional_signal(d_model: int, length: int,
min_timescale: float = 1.0, max_timescale: float = 1e4):
"""
Helper function, constructing basic positional encoding.
The code is partially based on implementation from Tensor2Tensor library
https://github.com/tensorflow/tensor2tensor/blob/master/tensor2tensor/layers/common_attention.py
pos: seq中的位置
i: 某个单词的向量的第i个维度
PE(pos;2i) = sin(pos/10000^(2i/dmodel)) = sin(pos * 10000^(-2i/dmodel)) = sin(pos * exp^(-i * 4 / (dmodel/2)) )
PE(pos;2i+1) = cos(pos/10000^(2i/dmodel)) = cos(pos * 10000^(-2i/dmodel))
"""
if d_model % 2 != 0:
raise ValueError(
"The hidden dimension (d_model) of the model must be divisible by 2."
+"Currently it is { %d }",d_model)
position = K.arange(0, length, dtype=K.floatx()) # [seq_len , ]
num_timescales = d_model // 2
log_timescale_increment = K.constant(
(np.log(float(max_timescale) / float(min_timescale)) / (num_timescales - 1)),
dtype=K.floatx())
inv_timescales = (
min_timescale *
K.exp(K.arange(num_timescales, dtype=K.floatx()) * -log_timescale_increment)) # exp^(-i * 4 / (dmodel/2))
scaled_time = K.expand_dims(position, 1) * K.expand_dims(inv_timescales, 0) # [seq_len, hidden//2]
# 能直接在dmdel维度上拼接吗??不应该隔行插入??
signal = K.concatenate([K.sin(scaled_time), K.cos(scaled_time)], axis=1) # [seq_len, hidden]
return K.expand_dims(signal, axis=0) # [1, seq_len, hidden]
class AddPositionalEncoding(Layer):
"""
Injects positional encoding signal described in section 3.5 of the original
paper "Attention is all you need". Also a base class for more complex
coordinate encoding described in "Universal Transformers".
"""
def __init__(self, min_timescale: float = 1.0,
max_timescale: float = 1.0e4, **kwargs):
self.min_timescale = min_timescale
self.max_timescale = max_timescale
self.signal = None
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['min_timescale'] = self.min_timescale
config['max_timescale'] = self.max_timescale
return config
def build(self, input_shape):
_, length, hidden_size = input_shape
self.signal = positional_signal(
hidden_size, length, self.min_timescale, self.max_timescale)
return super().build(input_shape)
def call(self, inputs, **kwargs):
return inputs + self.signal
class AddCoordinateEncoding(AddPositionalEncoding):
"""
Implements coordinate encoding described in section 2.1
of "Universal Transformers" (https://arxiv.org/abs/1807.03819).
In other words, injects two signals at once: current position in
the sequence, and current step (vertically) in the transformer model.
"""
def build(self, input_shape):
super().build(input_shape)
_, length, hidden_size = input_shape
def call(self, inputs, step=None, **kwargs):
if step is None:
raise ValueError("Please, provide current Transformer's step"
"using 'step' keyword argument.")
pos_encoded_added = super().call(inputs, **kwargs)
step_signal = K.expand_dims(self.signal[:, step, :], axis=1)
return pos_encoded_added + step_signal
class TransformerCoordinateEmbedding(Layer):
"""
Represents trainable positional embeddings for the Transformer model:
1. word position embeddings - one for each position in the sequence.
2. depth embeddings - one for each block of the model
Calling the layer with the Transformer's input will return a new input
with those embeddings added.
"""
def __init__(self, max_transformer_depth: int, **kwargs):
self.max_depth = max_transformer_depth
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['max_transformer_depth'] = self.max_depth
return config
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
sequence_length, d_model = input_shape[-2:]
self.word_position_embeddings = self.add_weight(
shape=(sequence_length, d_model),
initializer='uniform',
name='word_position_embeddings',
trainable=True)
self.depth_embeddings = self.add_weight(
shape=(self.max_depth, d_model),
initializer='uniform',
name='depth_position_embeddings',
trainable=True)
super().build(input_shape)
def call(self, inputs, **kwargs):
depth = kwargs.get('step')
if depth is None:
raise ValueError("Please, provide current Transformer's step"
"using 'step' keyword argument.")
result = inputs + self.word_position_embeddings
if depth is not None:
result = result + self.depth_embeddings[depth]
return result
get_custom_objects().update({
'TransformerCoordinateEmbedding': TransformerCoordinateEmbedding,
'AddCoordinateEncoding': AddCoordinateEncoding,
'AddPositionalEncoding': AddCoordinateEncoding,
}) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
models/modulesLib/transformer.py | Python | """
Contains implementation of the Transformer model described in papers
"Attention is all you need" (https://arxiv.org/abs/1706.03762) and
"Universal Transformer" (https://arxiv.org/abs/1807.03819)
"""
import math
from typing import Union, Callable, Optional
from keras.layers import Layer, Add, activations, Dropout
from keras import initializers
# noinspection PyPep8Naming
from keras import backend as K
from keras.utils import get_custom_objects
from .attention import MultiHeadSelfAttention
def gelu(x):
"""
GELU activation, described in paper "Gaussian Error Linear Units (GELUs)"
https://arxiv.org/pdf/1606.08415.pdf
"""
c = math.sqrt(2 / math.pi)
return 0.5 * x * (1 + K.tanh(c * (x + 0.044715 * K.pow(x, 3))))
class LayerNormalization(Layer):
"""
Implementation of Layer Normalization (https://arxiv.org/abs/1607.06450).
"Unlike batch normalization, layer normalization performs exactly
the same computation at training and test times."
"""
def __init__(self, axis=-1, **kwargs):
self.axis = axis
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['axis'] = self.axis
return config
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
dim = input_shape[-1]
self.gain = self.add_weight(
name='gain',
shape=(dim,),
initializer='ones',
trainable=True)
self.bias = self.add_weight(
name='bias',
shape=(dim,),
initializer='zeros',
trainable=True)
return super().build(input_shape)
def call(self, inputs, **kwargs):
mean = K.mean(inputs, axis=self.axis, keepdims=True)
variance = K.mean(
K.square(inputs - mean), axis=self.axis, keepdims=True)
epsilon = K.constant(1e-5, dtype=K.floatx())
normalized_inputs = (inputs - mean) / K.sqrt(variance + epsilon)
result = self.gain * normalized_inputs + self.bias
return result
class TransformerTransition(Layer):
"""
Transformer transition function. The same function is used both
in classical in Universal Transformers. Except that in Universal
Transformer it is also shared between time steps.
"""
def __init__(self, activation: Union[str, Callable],
size_multiplier: int = 4, **kwargs):
"""
:param activation: activation function. Must be a string or a callable.
:param size_multiplier: How big the hidden dimension should be.
Most of the implementation use transition functions having 4 times
more hidden units than the model itself.
:param kwargs: Keras-specific layer arguments.
"""
self.activation = activations.get(activation)
self.size_multiplier = size_multiplier
super().__init__(**kwargs)
def get_config(self):
config = super().get_config()
config['activation'] = activations.serialize(self.activation)
config['size_multiplier'] = self.size_multiplier
return config
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
d_model = input_shape[-1]
self.weights1 = self.add_weight(
name='weights1',
shape=(d_model, self.size_multiplier * d_model),
initializer='glorot_uniform',
trainable=True)
self.biases1 = self.add_weight(
name='biases1',
shape=(self.size_multiplier * d_model,),
initializer='zeros',
trainable=True)
self.weights2 = self.add_weight(
name='weights2',
shape=(self.size_multiplier * d_model, d_model),
initializer='glorot_uniform',
trainable=True)
self.biases2 = self.add_weight(
name='biases2',
shape=(d_model,),
initializer='zeros',
trainable=True)
return super().build(input_shape)
def call(self, inputs, **kwargs):
input_shape = K.int_shape(inputs)
d_model = input_shape[-1]
step1 = self.activation(
K.bias_add(
K.dot(K.reshape(inputs, (-1, d_model)),
self.weights1),
self.biases1,
data_format='channels_last'))
step2 = K.bias_add(
K.dot(step1, self.weights2),
self.biases2,
data_format='channels_last')
result = K.reshape(step2, (-1,) + input_shape[-2:])
return result
class TransformerBlock:
"""
A pseudo-layer combining together all nuts and bolts to assemble
a complete section of both the Transformer and the Universal Transformer
models, following description from the "Universal Transformers" paper.
Each such block is, essentially:
- Multi-head self-attention (masked or unmasked, with attention dropout,
but without input dropout)
- Residual connection,
- Dropout
- Layer normalization
- Transition function
- Residual connection
- Dropout
- Layer normalization
Also check TransformerACT class if you need support for ACT (Adaptive
Computation Time).
IMPORTANT: The older Transformer 2017 model ("Attention is all you need")
uses slightly different order of operations. A quote from the paper:
"We apply dropout [33] to the output of each sub-layer,
before it is added to the sub-layer input and normalized"
while the Universal Transformer paper puts dropout one step *after*
the sub-layers's output was added to its input (Figure 4 in the paper).
In this code the order from the Universal Transformer is used, as arguably
more reasonable. You can use classical Transformer's (2017) way of
connecting the pieces by passing vanilla_wiring=True to the constructor.
"""
def __init__(self, name: str, num_heads: int,
residual_dropout: float = 0, attention_dropout: float = 0,
activation: Optional[Union[str, Callable]] = 'gelu',
compression_window_size: int = None,
use_masking: bool = True,
vanilla_wiring=False):
self.attention_layer = MultiHeadSelfAttention(
num_heads, use_masking=use_masking, dropout=attention_dropout,
compression_window_size=compression_window_size,
name='{%s}_self_attention' % name)
self.norm1_layer = LayerNormalization(name='{%s}_normalization1'% name)
self.dropout_layer = (
Dropout(residual_dropout, name='{%s}_dropout'% name)
if residual_dropout > 0
else lambda x: x)
self.norm2_layer = LayerNormalization(name='{%s}_normalization2'% name)
self.transition_layer = TransformerTransition(
name='{%s}_transition'% name, activation=activation)
self.addition_layer = Add(name='{%s}_add'% name)
self.vanilla_wiring = vanilla_wiring
def __call__(self, _input):
output = self.attention_layer(_input)
post_residual1 = (
self.addition_layer([_input, self.dropout_layer(output)])
if self.vanilla_wiring
else self.dropout_layer(self.addition_layer([_input, output])))
norm1_output = self.norm1_layer(post_residual1)
output = self.transition_layer(norm1_output)
post_residual2 = (
self.addition_layer([norm1_output, self.dropout_layer(output)])
if self.vanilla_wiring
else self.dropout_layer(
self.addition_layer([norm1_output, output])))
output = self.norm2_layer(post_residual2)
return output
class TransformerACT(Layer):
"""
Implements Adaptive Computation Time (ACT) for the Transformer model
https://arxiv.org/abs/1603.08983
How to use:
transformer_depth = 8
block = TransformerBlock('Transformer', num_heads=8)
act_layer = TransformerACT()
next_input = input # (batch_size, sequence_length, input_size)
for i in range(transformer_depth):
next_input = block(next_input, step=i)
next_input, act_weighted_output = act_layer(next_input)
act_layer.finalize() # adds loss
result = act_weighted_output
"""
def __init__(self, halt_epsilon=0.01, time_penalty=0.01, **kwargs):
"""
:param halt_epsilon: a small constant that allows computation to halt
after a single update (sigmoid never reaches exactly 1.0)
:param time_penalty: parameter that weights the relative cost
of computation versus error. The larger it is, the less
computational steps the network will try to make and vice versa.
The default value of 0.01 works well for Transformer.
:param kwargs: Any standard parameters for a layer in Keras (like name)
"""
self.halt_epsilon = halt_epsilon
self.time_penalty = time_penalty
self.ponder_cost = None
self.weighted_output = None
self.zeros_like_input = None
self.zeros_like_halting = None
self.ones_like_halting = None
self.halt_budget = None
self.remainder = None
self.active_steps = None
super().__init__(**kwargs)
def get_config(self):
return dict(
super().get_config(),
halt_epsilon=self.halt_epsilon,
time_penalty=self.time_penalty)
# noinspection PyAttributeOutsideInit
def build(self, input_shape):
assert len(input_shape) == 3
_, sequence_length, d_model = input_shape
self.halting_kernel = self.add_weight(
name='halting_kernel',
shape=(d_model, 1),
initializer='glorot_uniform',
trainable=True)
self.halting_biases = self.add_weight(
name='halting_biases',
shape=(1,),
initializer=initializers.Constant(0.1),
trainable=True)
self.time_penalty_t = K.constant(self.time_penalty, dtype=K.floatx())
return super().build(input_shape)
def initialize_control_tensors(self, halting):
"""
Initializes constants and some step-tracking variables
during the first call of the layer (since for the Universal Transformer
all the following calls are supposed to be with inputs of identical
shapes).
"""
self.zeros_like_halting = K.zeros_like(
halting, name='zeros_like_halting')
self.ones_like_halting = K.ones_like(
halting, name='ones_like_halting')
self.remainder = self.ones_like_halting
self.active_steps = self.zeros_like_halting
self.halt_budget = self.ones_like_halting - self.halt_epsilon
def call(self, inputs, **kwargs):
input_shape = K.int_shape(inputs)
sequence_length, d_model = input_shape[-2:]
# output of the "sigmoid halting unit" (not the probability yet)
halting = K.sigmoid(
K.reshape(
K.bias_add(
K.dot(K.reshape(inputs, [-1, d_model]),
self.halting_kernel),
self.halting_biases,
data_format='channels_last'),
[-1, sequence_length]))
if self.zeros_like_halting is None:
self.initialize_control_tensors(halting)
# useful flags
step_is_active = K.greater(self.halt_budget, 0)
no_further_steps = K.less_equal(self.halt_budget - halting, 0)
# halting probability is equal to
# a. halting output if this isn't the last step (we have some budget)
# b. to remainder if it is,
# c. and zero for the steps that shouldn't be executed at all
# (out of budget for them)
halting_prob = K.switch(
step_is_active,
K.switch(
no_further_steps,
self.remainder,
halting),
self.zeros_like_halting)
self.active_steps += K.switch(
step_is_active,
self.ones_like_halting,
self.zeros_like_halting)
# We don't know which step is the last, so we keep updating
# expression for the loss with each call of the layer
self.ponder_cost = (
self.time_penalty_t * K.mean(self.remainder + self.active_steps))
# Updating "the remaining probability" and the halt budget
self.remainder = K.switch(
no_further_steps,
self.remainder,
self.remainder - halting)
self.halt_budget -= halting # OK to become negative
# If none of the inputs are active at this step, then instead
# of zeroing them out by multiplying to all-zeroes halting_prob,
# we can simply use a constant tensor of zeroes, which means that
# we won't even calculate the output of those steps, saving
# some real computational time.
if self.zeros_like_input is None:
self.zeros_like_input = K.zeros_like(
inputs, name='zeros_like_input')
# just because K.any(step_is_active) doesn't work in PlaidML
any_step_is_active = K.greater(
K.sum(K.cast(step_is_active, 'int32')), 0)
step_weighted_output = K.switch(
any_step_is_active,
K.expand_dims(halting_prob, -1) * inputs,
self.zeros_like_input)
if self.weighted_output is None:
self.weighted_output = step_weighted_output
else:
self.weighted_output += step_weighted_output
return [inputs, self.weighted_output]
def compute_output_shape(self, input_shape):
return [input_shape, input_shape]
def finalize(self):
self.add_loss(self.ponder_cost)
get_custom_objects().update({
'LayerNormalization': LayerNormalization,
'TransformerTransition': TransformerTransition,
'TransformerACT': TransformerACT,
'gelu': gelu,
})
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
test.py | Python | import tensorflow as tf
import argparse
import os,math,copy,shutil
from utils.mylogger import *
from utils import ValueWindow , plot
from hparams import hparams as hp
from hparams import hparams_debug_string
from datafeeder import DataFeeder,DataFeeder_wavnet
from models import create_model
import datetime,time
import traceback,random
import numpy as np
from keras import backend as K
def add_stats(model):
with tf.variable_scope('test_stats') as scope:
tf.summary.histogram('pred_labels', model.decoded2)
tf.summary.histogram('labels', model.labels)
tf.summary.scalar('batch_loss', model.batch_loss)
tf.summary.scalar('batch_wer', model.WER)
return tf.summary.merge_all()
def test():
parser = argparse.ArgumentParser()
# TODO: add arguments
parser.add_argument('--log_dir', default=os.path.expanduser('~/my_asr2/logdir/logging'))
parser.add_argument('--serving_dir', default=os.path.expanduser('~/my_asr2/logdir/serving_am/'))
parser.add_argument('--data_dir', default=os.path.expanduser('~/corpus_zn'))
parser.add_argument('--model', default='ASR_wavnet')
# parser.add_argument('--epochs', type=int, help='Max epochs to run.', default=100)
parser.add_argument('--restore_step', type=int, help='Global step to restore from checkpoint.', default=2100)
parser.add_argument('--serving', type=bool, help='', default=False)
# parser.add_argument('--validation_interval', type=int, help='一个epoch验证5次,每次200步共3200条数据', default=7090) # 35450//5
parser.add_argument('--summary_interval', type=int, default=1, help='Steps between running summary ops.')
# parser.add_argument('--checkpoint_interval', type=int, default=100, help='Steps between writing checkpoints.')
parser.add_argument('--hparams', default='',
help='Hyperparameter overrides as a comma-separated list of name=value pairs')
args = parser.parse_args()
run_name = args.model
logdir = os.path.join(args.log_dir, 'logs-%s' % run_name)
init(os.path.join(logdir, 'test.log'), run_name)
hp.parse(args.hparams)
# TODO:parse ckpt,arguments,hparams
checkpoint_path = os.path.join(logdir, 'model.ckpt')
input_path = args.data_dir
log('Checkpoint path: %s' % checkpoint_path)
log('Loading training data from : %s ' % input_path)
log('Using model : %s' % args.model)
# TODO:set up datafeeder
with tf.variable_scope('datafeeder') as scope:
hp.data_type = 'test'
hp.feature_type = 'mfcc'
hp.data_length = None
hp.initial_learning_rate = 0.0005
hp.batch_size = 1
hp.aishell = False
hp.prime = False
hp.stcmd = False
hp.AM = True
hp.LM = False
hp.shuffle = False
hp.is_training = False # TODO: 在infer的时候一定要设置为False否则bn会扰乱所有的值!
feeder = DataFeeder_wavnet(args=hp)
log('num_wavs:' + str(len(feeder.wav_lst)))
feeder.am_vocab = np.load('logdir/am_pinyin_dict.npy').tolist()
hp.input_vocab_size = len(feeder.am_vocab)
hp.final_output_dim = len(feeder.am_vocab)
hp.steps_per_epoch = len(feeder.wav_lst) // hp.batch_size
log('steps_per_epoch:' + str(hp.steps_per_epoch))
log('pinyin_vocab_size:' + str(hp.input_vocab_size))
# TODO: set up model
with tf.variable_scope('model') as scope:
model = create_model(args.model, hp)
model.build_graph()
model.add_loss()
model.add_decoder()
# model.add_optimizer(global_step=global_step)
# TODO: summary
stats = add_stats(model)
# TODO:Set up saver and Bookkeeping
time_window = ValueWindow(100)
loss_window = ValueWindow(100)
wer_window = ValueWindow(100)
saver = tf.train.Saver(max_to_keep=20)
# TODO: test
with tf.Session(graph=tf.get_default_graph()) as sess:
log(hparams_debug_string(hp))
try:
# TODO: Set writer and initializer
summary_writer = tf.summary.FileWriter(logdir + '/test', sess.graph)
sess.run(tf.global_variables_initializer())
# TODO: Restore
if args.restore_step:
# Restore from a checkpoint if the user requested it.
restore_path = '%s-%d' % (checkpoint_path, args.restore_step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
else:
log('Starting new training run ')
# TODO: epochs steps batch
step = 0
batch_data = feeder.get_am_batch()
for j in range(hp.steps_per_epoch):
input_batch = next(batch_data)
feed_dict = {model.inputs: input_batch['the_inputs'],
model.labels: input_batch['the_labels'],
model.input_lengths: input_batch['input_length'],
model.label_lengths: input_batch['label_length']}
# TODO: Run one step
start_time = time.time()
array_loss, batch_loss,wer,label,final_pred_label = sess.run([ model.ctc_loss,model.batch_loss,model.WER,model.labels, model.decoded1],
feed_dict=feed_dict)
time_window.append(time.time() - start_time)
step = step+1
# TODO: Append loss
loss_window.append(batch_loss)
wer_window.append(wer)
message = 'Step %-7d [%.03f sec/step, loss=%.05f, avg_loss=%.05f, wer=%.05f, avg_wer=%.05f]' % (
step, time_window.average, batch_loss, loss_window.average, wer,wer_window.average)
log(message)
# TODO: show pred and write summary
log('label.shape :' + str(label.shape)) # (batch_size , label_length)
log('final_pred_label.shape:' + str(
np.asarray(final_pred_label).shape))
log('label : ' + str(label[0]))
log('final_pred_label: ' + str(np.asarray(final_pred_label)[0][0]))
log('Writing summary at step: %d' % step)
summary_writer.add_summary(sess.run(stats, feed_dict=feed_dict), step)
# TODO: Check loss
if math.isnan(batch_loss):
log('Loss exploded to %.05f at step %d!' % (batch_loss, step))
raise Exception('Loss Exploded')
log('serving step: ' + str(step))
# TODO: Set up serving builder and signature map
serve_dir = args.serving_dir + '0001'
if os.path.exists(serve_dir):
shutil.rmtree(serve_dir)
log('delete exists dirs:' + serve_dir)
builder = tf.saved_model.builder.SavedModelBuilder(export_dir=serve_dir)
input_spec = tf.saved_model.utils.build_tensor_info(model.inputs)
input_len = tf.saved_model.utils.build_tensor_info(model.input_lengths)
output_labels = tf.saved_model.utils.build_tensor_info(model.decoded1)
output_logits = tf.saved_model.utils.build_tensor_info(model.pred_softmax)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'mfcc': input_spec, 'len': input_len},
outputs={'label': output_labels, 'logits': output_logits},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
)
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
main_op=tf.tables_initializer(),
strip_default_attrs=False
)
builder.save()
log('Done store serving-model')
except Exception as e:
log('Exiting due to exception: %s' % e)
traceback.print_exc()
if __name__=='__main__':
test() | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
test/serving_client.py | Python | # import tensorflow as tf # import 严重影响速度
import numpy as np
# Communication to TensorFlow server via gRPC
from grpc.beta import implementations
from tensorflow.contrib.util import make_tensor_proto
# TensorFlow serving stuff to send messages
from tensorflow_serving.apis import predict_pb2
from tensorflow_serving.apis import prediction_service_pb2_grpc
import librosa
import audio
from keras import backend as K
########################################################### AM
server = '219.223.173.213:9001'
model_name = 'ASR_am'
def decode_ctc(num_result, num2word):
result = num_result[:, :, :]
in_len = np.zeros((1), dtype = np.int32)
in_len[0] = result.shape[1]
r = K.ctc_decode(result, in_len, greedy = True, beam_width=10, top_paths=1)
r1 = K.get_value(r[0][0])
r1 = r1[0]
text = []
for i in r1:
text.append(num2word[i])
return r1, text
def compute_mfcc2(file):
wav = audio.load_wav(file)
# mfcc = p.mfcc(wav,numcep=hp.num_mfccs) # n_frames*n_mfcc
mfcc = librosa.feature.mfcc(wav,sr=16000,n_mfcc=26) # n_mfcc * n_frames
n_frames = mfcc.shape[1]
return (mfcc.T,n_frames)
file = 'D32_995.wav'
file1 = 'A12_49.wav'
file2 = 'BAC009S0002W0122.wav'
mfcc,length = compute_mfcc2(file1)
mfcc = np.expand_dims(mfcc,0)
print(mfcc.shape)
length = np.asarray(length).reshape(1,1)
print(length.shape,length) # 313
host, port = server.split(':')
channel = implementations.insecure_channel(host, int(port))
# stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel._channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = model_name
request.model_spec.signature_name = 'predict_AudioSpec2Pinyin'
request.inputs['mfcc'].CopyFrom(make_tensor_proto(mfcc, shape=mfcc.shape, dtype='float')) # 1是batch_size
request.inputs['len'].CopyFrom(make_tensor_proto(length, shape=length.shape, dtype='int32'))
print('begin')
result = stub.Predict(request, 60.0)
am_vocab = np.load('am_pinyin_dict.npy').tolist()
pred_logits = np.array(result.outputs['logits'].float_val).reshape(1,-1,len(am_vocab)).astype(np.float32)
labels = np.asarray(result.outputs['label'].int_val)
print('label.shape:',labels.shape)
print('label :',labels)
print('pred_logits:',pred_logits)
r1 , pinyin = decode_ctc(pred_logits,am_vocab)
print('pinyin1 :',pinyin)
pinyin = [am_vocab[i] for i in labels]
print('pinyin2 :',pinyin)
##################################################### LM
server = '219.223.173.213:9002'
model_name = 'ASR_lm'
lm_pinyin_vocab = np.load('lm_pinyin_dict.npy').tolist()
lm_hanzi_vocab = np.load('lm_hanzi_dict.npy').tolist()
# pinyin = ['jin1','tian1','tian1','qi4','zhen1','hao3']
pinyin = [lm_pinyin_vocab.index(i) for i in pinyin]
pinyin = np.asarray(pinyin).reshape(1,-1)
print(pinyin.shape,pinyin )
host, port = server.split(':')
channel = implementations.insecure_channel(host, int(port))
# stub = prediction_service_pb2.beta_create_PredictionService_stub(channel)
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel._channel)
request = predict_pb2.PredictRequest()
request.model_spec.name = model_name
request.model_spec.signature_name = 'predict_Pinyin2Hanzi'
request.inputs['pinyin'].CopyFrom(
make_tensor_proto(pinyin, shape=pinyin.shape, dtype='int32')) # 1是batch_size
print('begin')
result = stub.Predict(request, 60.0)
pred_label = np.array(result.outputs['hanzi'].int_val)
print(pred_label.shape,pred_label)
hanzi = [lm_hanzi_vocab[i] for i in pred_label]
print(hanzi) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
test/test_am.py | Python | from hparams import hparams as hp
from models import create_model
from utils.mylogger import *
import os
import tensorflow as tf
import audio,librosa
import numpy as np
from tensorflow.python import pywrap_tensorflow
def compute_mfcc2(file):
wav = audio.load_wav(file)
# mfcc = p.mfcc(wav,numcep=hp.num_mfccs) # n_frames*n_mfcc
mfcc = librosa.feature.mfcc(wav,sr=16000,n_mfcc=26) # n_mfcc * n_frames
n_frames = mfcc.shape[1]
return (mfcc.T,n_frames)
file = 'D32_995.wav'
file1 = 'A2_34.wav'
file2 = 'BAC009S0002W0122.wav'
mfcc,length = compute_mfcc2(file1)
mfcc = np.expand_dims(mfcc,0)
print(mfcc.shape)
length = np.asarray(length).reshape(1,1)
print(length.shape,length) # 313
logdir = 'logging/logs-ASR_wavnet/'
checkpoint_path = os.path.join(logdir,'model.ckpt')
log('Checkpoint path: %s' % checkpoint_path)
step = 2100
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
saver = tf.train.import_meta_graph('logging/logs-ASR_wavnet/model.ckpt-'+str(step)+'.meta')
restore_path = '%s-%d' % (checkpoint_path,step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
graph = tf.get_default_graph() # 或者sess.graph
inputs = graph.get_tensor_by_name('model/NET/NET_Input/mfcc_inputs:0') # 名字一定陶写全否则报错tensor not exist
inputs_len = graph.get_tensor_by_name('model/loss/CTC_Input/input_lengths:0')
feed_dict = {inputs:mfcc,inputs_len:length}
decoded_labels = graph.get_tensor_by_name('model/decode/decoded_labels:0')
pred = sess.run(decoded_labels,feed_dict)
print(pred) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
train_ASR_DFCNN.py | Python | import tensorflow as tf
import argparse
import os,math,copy
from utils.mylogger import *
from utils import ValueWindow , plot
from hparams import hparams as hp
from hparams import hparams_debug_string
from datafeeder import DataFeeder,GetEditDistance
from models import create_model
import datetime,time
import traceback,random
import numpy as np
from keras import backend as K
def add_stats(model):
with tf.variable_scope('stats') as scope:
tf.summary.histogram('pred_logits', model.pred_logits)
tf.summary.histogram('labels', model.labels)
tf.summary.scalar('batch_loss', model.batch_loss)
tf.summary.scalar('learning_rate', model.learning_rate)
gradient_norms = [tf.norm(grad) for grad in model.gradients]
tf.summary.histogram('gradient_norm', gradient_norms)
tf.summary.scalar('max_gradient_norm', tf.reduce_max(gradient_norms))
return tf.summary.merge_all()
def add_dev_stats(model):
with tf.variable_scope('dev_stats') as scope:
tf.summary.scalar('dev_batch_loss',model.batch_loss)
return tf.summary.merge_all()
def time_string():
return datetime.now().strftime('%Y-%m-%d %H:%M')
def train(logdir,args):
# TODO:parse ckpt,arguments,hparams
checkpoint_path = os.path.join(logdir,'model.ckpt')
input_path = args.data_dir
log('Checkpoint path: %s' % checkpoint_path)
log('Loading training data from : %s ' % input_path)
log('Using model : %s' %args.model)
# TODO:set up datafeeder
with tf.variable_scope('datafeeder') as scope:
hp.data_length=None
hp.decay_learning_rate= False
hp.initial_learning_rate = 0.0005
feeder = DataFeeder(args=hp)
log('num_wavs:'+str(len(feeder.wav_lst))) # 283600
hp.input_vocab_size = len(feeder.pny_vocab)
hp.final_output_dim = len(feeder.pny_vocab)
hp.steps_per_epoch = len(feeder.wav_lst)//hp.batch_size
log('steps_per_epoch:' + str(hp.steps_per_epoch)) # 17725
log('pinyin_vocab_size:'+str(hp.input_vocab_size)) # 1292
hp.label_vocab_size = len(feeder.han_vocab)
log('label_vocab_size :' + str(hp.label_vocab_size)) # 6291
# TODO:set up model
global_step = tf.Variable(initial_value=0,name='global_step',trainable=False)
valid_step = 0
# valid_global_step = tf.Variable(initial_value=0,name='valid_global_step',trainable=False)
with tf.variable_scope('model') as scope:
model = create_model(args.model,hp)
model.build_graph()
model.add_loss()
model.add_decoder()
model.add_optimizer(global_step=global_step)
# TODO: summary
stats = add_stats(model=model)
valid_stats = add_dev_stats(model)
# TODO:Set up saver and Bookkeeping
time_window = ValueWindow(100)
loss_window = ValueWindow(100)
# wer_window = ValueWindow(100)
valid_time_window = ValueWindow(100)
valid_loss_window = ValueWindow(100)
valid_wer_window = ValueWindow(100)
saver = tf.train.Saver(max_to_keep=20)
first_serving = True
# TODO: train
with tf.Session() as sess:
log(hparams_debug_string(hp))
try:
# TODO: Set writer and initializer
summary_writer = tf.summary.FileWriter(logdir, sess.graph)
sess.run(tf.global_variables_initializer())
# TODO: Restore
if args.restore_step:
# Restore from a checkpoint if the user requested it.
restore_path = '%s-%d' % (checkpoint_path, args.restore_step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
else:
log('Starting new training run ')
step = 0
# TODO: epochs steps batch
for i in range(args.epochs):
batch_data = feeder.get_am_batch()
log('Traning epoch '+ str(i)+':')
for j in range(hp.steps_per_epoch):
input_batch = next(batch_data)
feed_dict = {model.inputs:input_batch['the_inputs'],
model.labels:input_batch['the_labels'],
model.input_lengths:input_batch['input_length'],
model.label_lengths:input_batch['label_length']}
# TODO: Run one step
start_time = time.time()
total_step, array_loss, batch_loss,opt = sess.run([global_step, model.ctc_loss,
model.batch_loss,model.optimize],feed_dict=feed_dict)
time_window.append(time.time() - start_time)
step = total_step
# TODO: Append loss
# loss = np.sum(array_loss).item()/hp.batch_size
loss_window.append(batch_loss)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Step %-7d [%.03f sec/step, loss=%.05f, avg_loss=%.05f, lr=%.07f]' % (
step, time_window.average, batch_loss, loss_window.average,K.get_value(model.learning_rate))
log(message)
# ctcloss返回值是[batch_size,1]形式的所以刚开始没sum报错only size-1 arrays can be converted to Python scalars
# TODO: Check loss
if math.isnan(batch_loss):
log('Loss exploded to %.05f at step %d!' % (batch_loss, step))
raise Exception('Loss Exploded')
# TODO: Check sumamry
if step % args.summary_interval == 0:
log('Writing summary at step: %d' % step)
summary_writer.add_summary(sess.run(stats,feed_dict=feed_dict), step)
# TODO: Check checkpoint
if step % args.checkpoint_interval == 0:
log('Saving checkpoint to: %s-%d' % (checkpoint_path, step))
saver.save(sess, checkpoint_path, global_step=step)
log('test acc...')
# eval_start_time = time.time()
# with tf.name_scope('eval') as scope:
# with open(os.path.expanduser('~/my_asr2/datasets/resource/preprocessedData/dev-meta.txt'), encoding='utf-8') as f:
# metadata = [line.strip().split('|') for line in f]
# random.shuffle(metadata)
# eval_loss = []
# batch_size = args.hp.batch_size
# batchs = len(metadata)//batch_size
# for i in range(batchs):
# batch = metadata[i*batch_size : i*batch_size+batch_size]
# batch = list(map(eval_get_example,batch))
# batch = eval_prepare_batch(batch)
# feed_dict = {'labels':batch[0],}
# label,final_pred_label ,log_probabilities = sess.run([
# model.labels[0], model.decoded[0], model.log_probabilities[0]])
# # 刚开始没有加[]会报错 https://github.com/tensorflow/tensorflow/issues/11840
# print('label: ' ,label)
# print('final_pred_label: ', final_pred_label[0])
# log('eval time: %.03f, avg_eval_loss: %.05f' % (time.time()-eval_start_time,np.mean(eval_loss)))
label,final_pred_label ,log_probabilities,y_pred2 = sess.run([
model.labels, model.decoded, model.log_probabilities,model.y_pred2],feed_dict=feed_dict)
# 刚开始没有加[]会报错 https://github.com/tensorflow/tensorflow/issues/11840
log('label.shape :'+str(label.shape)) # (batch_size , label_length)
log('final_pred_label.shape:'+str(np.asarray(final_pred_label).shape)) # (1, batch_size, decode_length<=label_length)
log('y_pred2.shape : '+str( y_pred2.shape))
log('label: ' +str(label[0]))
log('y_pred2 : '+str(y_pred2[0]))
log('final_pred_label: '+str(np.asarray(final_pred_label)[0][0]))
# 刚开始打不出来,因为使用的tf.nn.ctc_beam_decoder,这个返回的是sparse tensor所以打不出来
# 后来用keras封装的decoder自动将sparse转成dense tensor才能打出来
# waveform = audio.inv_spectrogram(spectrogram.T)
# audio.save_wav(waveform, os.path.join(logdir, 'step-%d-audio.wav' % step))
# plot.plot_alignment(alignment, os.path.join(logdir, 'step-%d-align.png' % step),
# info='%s, %s, %s, step=%d, loss=%.5f' % (
# args.model, commit, time_string(), step, loss))
# log('Input: %s' % sequence_to_text(input_seq))
# TODO: Check stop
if step % hp.steps_per_epoch ==0:
# TODO: Set up serving builder and signature map
serve_dir = args.serving_dir + '_' + str(total_step//hp.steps_per_epoch -1)
if os.path.exists(serve_dir):
os.removedirs(serve_dir)
builder = tf.saved_model.builder.SavedModelBuilder(export_dir=serve_dir)
input_spec = tf.saved_model.utils.build_tensor_info(model.inputs)
input_len = tf.saved_model.utils.build_tensor_info(model.input_lengths)
output_labels = tf.saved_model.utils.build_tensor_info(model.decoded2)
output_logits = tf.saved_model.utils.build_tensor_info(model.pred_logits)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'spec': input_spec, 'len': input_len},
outputs={'label': output_labels, 'logits': output_logits},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
)
if first_serving:
first_serving = False
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING, 'ASR'],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
main_op=tf.tables_initializer(),
strip_default_attrs=True
)
else:
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING, 'ASR'],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
strip_default_attrs=True
)
builder.save()
log('Done store serving-model')
# TODO: Validation
if step % hp.steps_per_epoch == 0 :
log('validation...')
valid_start = time.time()
# TODO: validation
valid_hp = copy.deepcopy(hp)
valid_hp.data_type = 'dev'
valid_hp.thchs30 = True
valid_hp.aishell = True
valid_hp.prime = False
valid_hp.stcmd = False
valid_hp.shuffle = True
valid_hp.data_length = None
valid_feeder = DataFeeder(args=valid_hp)
valid_batch_data = valid_feeder.get_am_batch()
log('valid_num_wavs:' + str(len(valid_feeder.wav_lst))) # 15219
valid_hp.input_vocab_size = len(valid_feeder.pny_vocab)
valid_hp.final_output_dim = len(valid_feeder.pny_vocab)
valid_hp.steps_per_epoch = len(valid_feeder.wav_lst) // valid_hp.batch_size
log('valid_steps_per_epoch:' + str(valid_hp.steps_per_epoch)) # 951
log('valid_pinyin_vocab_size:' + str(valid_hp.input_vocab_size)) # 1124
valid_hp.label_vocab_size = len(valid_feeder.han_vocab)
log('valid_label_vocab_size :' + str(valid_hp.label_vocab_size)) # 3327
words_num = 0
word_error_num = 0
# dev 只跑一个epoch就行
with tf.variable_scope('validation') as scope:
for k in range(len(valid_feeder.wav_lst) // valid_hp.batch_size):
valid_input_batch = next(valid_batch_data)
valid_feed_dict = {model.inputs: valid_input_batch['the_inputs'],
model.labels: valid_input_batch['the_labels'],
model.input_lengths: valid_input_batch['input_length'],
model.label_lengths: valid_input_batch['label_length']}
# TODO: Run one step
valid_start_time = time.time()
valid_batch_loss ,valid_WER = sess.run([model.batch_loss,model.WER], feed_dict=valid_feed_dict)
valid_time_window.append(time.time() - valid_start_time)
valid_loss_window.append(valid_batch_loss)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Valid-Step %-7d [%.03f sec/step, valid_loss=%.05f, avg_loss=%.05f, WER=%.05f, avg_WER=%.05f]' % (
valid_step, valid_time_window.average, valid_batch_loss, valid_loss_window.average,valid_WER,valid_wer_window.average)
log(message)
summary_writer.add_summary(sess.run(valid_stats,feed_dict=valid_feed_dict), valid_step)
valid_step += 1
log('Done Validation!Total Time Cost(sec):' + str(time.time()-valid_start))
except Exception as e:
log('Exiting due to exception: %s' % e)
traceback.print_exc()
def main():
parser = argparse.ArgumentParser()
# TODO: add arguments
parser.add_argument('--log_dir', default=os.path.expanduser('~/my_asr2/logdir/logging'))
parser.add_argument('--serving_dir', default=os.path.expanduser('~/my_asr2/logdir/serving_asr_0331'))
parser.add_argument('--data_dir', default=os.path.expanduser('~/corpus_zn'))
parser.add_argument('--model', default='ASR')
parser.add_argument('--epochs', type=int, help='Max epochs to run.', default=100)
parser.add_argument('--restore_step', type=int, help='Global step to restore from checkpoint.',default=16800)
# parser.add_argument('--serving_interval', type=int, help='', default=35450)
# parser.add_argument('--validation_interval', type=int, help='一个epoch验证5次,每次200步共3200条数据', default=30000) # 35450//5
parser.add_argument('--summary_interval', type=int, default=10,help='Steps between running summary ops.')
parser.add_argument('--checkpoint_interval', type=int, default=100, help='Steps between writing checkpoints.')
parser.add_argument('--hparams', default='',
help='Hyperparameter overrides as a comma-separated list of name=value pairs')
args = parser.parse_args()
run_name = args.model
log_dir = os.path.join(args.log_dir, 'logs-%s' % run_name)
os.makedirs(log_dir, exist_ok=True)
# TODO: launch init and train
init(os.path.join(log_dir, 'train.log'), run_name)
hp.parse(args.hparams)
train(log_dir, args)
if __name__ == '__main__':
main() | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
train_ASR_transformer_encoder.py | Python | import tensorflow as tf
import argparse
import os,math,copy
from utils.mylogger import *
from utils import ValueWindow , plot
from hparams import hparams as hp
from hparams import hparams_debug_string
from datafeeder import DataFeeder,DataFeeder_wavnet,DataFeeder_transformer
from models import create_model
import datetime,time
import traceback,random
import numpy as np
from keras import backend as K
def add_stats(model):
with tf.variable_scope('stats') as scope:
tf.summary.scalar('acc', model.acc)
tf.summary.scalar('mean_loss', model.mean_loss)
# tf.summary.scalar('learning_rate', model.learning_rate)
# gradient_norms = [tf.norm(grad) for grad in model.gradients]
# tf.summary.histogram('gradient_norm', gradient_norms)
# tf.summary.scalar('max_gradient_norm', tf.reduce_max(gradient_norms))
return tf.summary.merge_all()
def add_dev_stats(model):
with tf.variable_scope('dev_stats') as scope:
tf.summary.scalar('dev_mean_loss',model.mean_loss)
tf.summary.scalar('dev_acc', model.acc)
return tf.summary.merge_all()
def time_string():
return datetime.now().strftime('%Y-%m-%d %H:%M')
def train(logdir,args):
# TODO:parse ckpt,arguments,hparams
checkpoint_path = os.path.join(logdir,'model.ckpt')
input_path = args.data_dir
log('Checkpoint path: %s' % checkpoint_path)
log('Loading training data from : %s ' % input_path)
log('Using model : %s' %args.model)
# TODO:set up datafeeder
with tf.variable_scope('datafeeder') as scope:
hp.data_length = None
hp.initial_learning_rate = 0.0001
hp.batch_size = 256
hp.prime = True
hp.stcmd = True
feeder = DataFeeder(args=hp)
log('num_sentences:'+str(len(feeder.wav_lst))) # 283600
hp.input_vocab_size = len(feeder.pny_vocab)
hp.final_output_dim = len(feeder.pny_vocab)
hp.steps_per_epoch = len(feeder.wav_lst)//hp.batch_size
log('steps_per_epoch:' + str(hp.steps_per_epoch)) # 17725
log('pinyin_vocab_size:'+str(hp.input_vocab_size)) # 1292
hp.label_vocab_size = len(feeder.han_vocab)
log('label_vocab_size :' + str(hp.label_vocab_size)) # 6291
# TODO:set up model
global_step = tf.Variable(initial_value=0,name='global_step',trainable=False)
valid_step = 0
# valid_global_step = tf.Variable(initial_value=0,name='valid_global_step',trainable=False)
with tf.variable_scope('model') as scope:
model = create_model(args.model,hp)
model.build_graph()
model.add_loss()
model.add_optimizer(global_step=global_step,loss=model.mean_loss)
# TODO: summary
stats = add_stats(model=model)
valid_stats = add_dev_stats(model)
# TODO:Set up saver and Bookkeeping
time_window = ValueWindow(100)
loss_window = ValueWindow(100)
acc_window = ValueWindow(100)
valid_time_window = ValueWindow(100)
valid_loss_window = ValueWindow(100)
valid_acc_window = ValueWindow(100)
saver = tf.train.Saver(max_to_keep=20)
first_serving = True
# TODO: train
with tf.Session() as sess:
log(hparams_debug_string(hp))
try:
# TODO: Set writer and initializer
summary_writer = tf.summary.FileWriter(logdir + '/train', sess.graph)
summary_writer_dev = tf.summary.FileWriter(logdir + '/dev')
sess.run(tf.global_variables_initializer())
# TODO: Restore
if args.restore_step:
# Restore from a checkpoint if the user requested it.
restore_path = '%s-%d' % (checkpoint_path, args.restore_step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
else:
log('Starting new training run ')
step = 0
# TODO: epochs steps batch
for i in range(args.epochs):
batch_data = feeder.get_lm_batch()
log('Traning epoch '+ str(i)+':')
for j in range(hp.steps_per_epoch):
input_batch, label_batch = next(batch_data)
feed_dict = {
model.x:input_batch,
model.y:label_batch,
}
# TODO: Run one step ~~~
start_time = time.time()
total_step,batch_loss,batch_acc,opt = sess.run([global_step, model.mean_loss,model.acc,model.optimize],feed_dict=feed_dict)
time_window.append(time.time() - start_time)
step = total_step
# TODO: Append loss
loss_window.append(batch_loss)
acc_window.append(batch_acc)
message = 'Step %-7d [%.03f sec/step, loss=%.05f, avg_loss=%.05f,acc=%.05f, avg_acc=%.05f, lr=%.07f]' % (
step, time_window.average, batch_loss, loss_window.average,batch_acc,acc_window.average,K.get_value(model.learning_rate))
log(message)
# TODO: Check loss
if math.isnan(batch_loss):
log('Loss exploded to %.05f at step %d!' % (batch_loss, step))
raise Exception('Loss Exploded')
# TODO: Check sumamry
if step % args.summary_interval == 0:
log('Writing summary at step: %d' % step)
summary_writer.add_summary(sess.run(stats,feed_dict=feed_dict), step)
# TODO: Check checkpoint
if step % args.checkpoint_interval == 0:
log('Saving checkpoint to: %s-%d' % (checkpoint_path, step))
saver.save(sess, checkpoint_path, global_step=step)
log('test acc...')
label,final_pred_label = sess.run([
model.y, model.preds],feed_dict=feed_dict)
log('label.shape :'+str(label.shape)) # (batch_size , label_length)
log('final_pred_label.shape:'+str(np.asarray(final_pred_label).shape)) # (1, batch_size, decode_length<=label_length)
log('label : '+str(label[0]))
log('final_pred_label: '+str( np.asarray(final_pred_label)[0]))
# TODO: serving
if args.serving :#and total_step // hp.steps_per_epoch > 5:
np.save('logdir/lm_pinyin_dict.npy',feeder.pny_vocab)
np.save('logdir/lm_hanzi_dict.npy',feeder.han_vocab)
print(total_step, 'hhhhhhhh')
# TODO: Set up serving builder and signature map
serve_dir = args.serving_dir + '0001'
if os.path.exists(serve_dir):
os.removedirs(serve_dir)
builder = tf.saved_model.builder.SavedModelBuilder(export_dir=serve_dir)
input = tf.saved_model.utils.build_tensor_info(model.x)
output_labels = tf.saved_model.utils.build_tensor_info(model.preds)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'pinyin': input},
outputs={'hanzi': output_labels},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
)
if first_serving:
first_serving = False
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict_Pinyin2Hanzi':
prediction_signature,
},
main_op=tf.tables_initializer(),
strip_default_attrs=True
)
builder.save()
log('Done store serving-model')
raise Exception('Done store serving-model')
# TODO: Validation
# if total_step % hp.steps_per_epoch == 0 and i >= 10:
if total_step % hp.steps_per_epoch == 0:
log('validation...')
valid_start = time.time()
# TODO: validation
valid_hp = copy.deepcopy(hp)
print('feature_type: ',hp.feature_type)
valid_hp.data_type = 'dev'
valid_hp.thchs30 = True
valid_hp.aishell = True
valid_hp.prime = True
valid_hp.stcmd = True
valid_hp.shuffle = True
valid_hp.data_length = None
valid_feeder = DataFeeder(args=valid_hp)
valid_feeder.pny_vocab = feeder.pny_vocab
valid_feeder.han_vocab = feeder.han_vocab
# valid_feeder.am_vocab = feeder.am_vocab
valid_batch_data = valid_feeder.get_lm_batch()
log('valid_num_sentences:' + str(len(valid_feeder.wav_lst))) # 15219
valid_hp.input_vocab_size = len(valid_feeder.pny_vocab)
valid_hp.final_output_dim = len(valid_feeder.pny_vocab)
valid_hp.steps_per_epoch = len(valid_feeder.wav_lst) // valid_hp.batch_size
log('valid_steps_per_epoch:' + str(valid_hp.steps_per_epoch)) # 951
log('valid_pinyin_vocab_size:' + str(valid_hp.input_vocab_size)) # 1124
valid_hp.label_vocab_size = len(valid_feeder.han_vocab)
log('valid_label_vocab_size :' + str(valid_hp.label_vocab_size)) # 3327
# dev 只跑一个epoch就行
with tf.variable_scope('validation') as scope:
for k in range(len(valid_feeder.wav_lst) // valid_hp.batch_size):
valid_input_batch,valid_label_batch = next(valid_batch_data)
valid_feed_dict = {
model.x: valid_input_batch,
model.y: valid_label_batch,
}
# TODO: Run one step
valid_start_time = time.time()
valid_batch_loss,valid_batch_acc = sess.run([model.mean_loss,model.acc], feed_dict=valid_feed_dict)
valid_time_window.append(time.time() - valid_start_time)
valid_loss_window.append(valid_batch_loss)
valid_acc_window.append(valid_batch_acc)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Valid-Step %-7d [%.03f sec/step, valid_loss=%.05f, avg_loss=%.05f, valid_acc=%.05f, avg_acc=%.05f]' % (
valid_step, valid_time_window.average, valid_batch_loss, valid_loss_window.average,valid_batch_acc,valid_acc_window.average)
log(message)
summary_writer_dev.add_summary(sess.run(valid_stats,feed_dict=valid_feed_dict), valid_step)
valid_step += 1
log('Done Validation!Total Time Cost(sec):' + str(time.time()-valid_start))
except Exception as e:
log('Exiting due to exception: %s' % e)
traceback.print_exc()
def main():
parser = argparse.ArgumentParser()
# TODO: add arguments
parser.add_argument('--log_dir', default=os.path.expanduser('~/my_asr2/logdir/logging'))
parser.add_argument('--serving_dir', default=os.path.expanduser('~/my_asr2/logdir/serving_lm/'))
parser.add_argument('--data_dir', default=os.path.expanduser('~/corpus_zn'))
parser.add_argument('--model', default='ASR_transformer_encoder')
parser.add_argument('--epochs', type=int, help='Max epochs to run.', default=10)
parser.add_argument('--restore_step', type=int, help='Global step to restore from checkpoint.',default=93000)
parser.add_argument('--serving', type=bool, help='', default=True)
# parser.add_argument('--validation_interval', type=int, help='一个epoch验证5次,每次200步共3200条数据', default=7090) # 35450//5
parser.add_argument('--summary_interval', type=int, default=10,help='Steps between running summary ops.')
parser.add_argument('--checkpoint_interval', type=int, default=100, help='Steps between writing checkpoints.')
parser.add_argument('--hparams', default='',
help='Hyperparameter overrides as a comma-separated list of name=value pairs')
args = parser.parse_args()
run_name = args.model
log_dir = os.path.join(args.log_dir, 'logs-%s' % run_name)
os.makedirs(log_dir, exist_ok=True)
# TODO: launch init and train
init(os.path.join(log_dir, 'train.log'), run_name)
hp.parse(args.hparams)
train(log_dir, args)
if __name__ == '__main__':
main() | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
train_ASR_wavnet.py | Python | import tensorflow as tf
import argparse
import os,math,copy,shutil
from utils.mylogger import *
from utils import ValueWindow , plot
from hparams import hparams as hp
from hparams import hparams_debug_string
from datafeeder import DataFeeder,DataFeeder_wavnet
from models import create_model
import datetime,time
import traceback,random
import numpy as np
from keras import backend as K
def add_stats(model):
with tf.variable_scope('stats') as scope:
tf.summary.histogram('pred_labels', model.decoded2)
tf.summary.histogram('labels', model.labels)
tf.summary.scalar('batch_loss', model.batch_loss)
# tf.summary.scalar('learning_rate', model.learning_rate)
# gradient_norms = [tf.norm(grad) for grad in model.gradients]
# tf.summary.histogram('gradient_norm', gradient_norms)
# tf.summary.scalar('max_gradient_norm', tf.reduce_max(gradient_norms))
return tf.summary.merge_all()
def add_dev_stats(model):
with tf.variable_scope('dev_stats') as scope:
summary_op = tf.summary.scalar('dev_batch_loss',model.batch_loss)
return summary_op
# 不能return tf.summary.merge_all()这样会把dev的loss混进train的loss,因为他们记录的都是model.batch_loss
# 可以考虑用tf.summary.merge()整合指定的op
def time_string():
return datetime.now().strftime('%Y-%m-%d %H:%M')
def train(logdir,args):
# TODO:parse ckpt,arguments,hparams
checkpoint_path = os.path.join(logdir,'model.ckpt')
input_path = args.data_dir
log('Checkpoint path: %s' % checkpoint_path)
log('Loading training data from : %s ' % input_path)
log('Using model : %s' %args.model)
# TODO:set up datafeeder
with tf.variable_scope('datafeeder') as scope:
hp.feature_type = 'mfcc'
hp.data_length = None
hp.initial_learning_rate = 0.0005
hp.batch_size = 128
hp.aishell = False
hp.prime = False
hp.stcmd = False
hp.AM = True
hp.LM = False
hp.shuffle = True
feeder = DataFeeder_wavnet(args=hp)
log('num_wavs:'+str(len(feeder.wav_lst)))
hp.input_vocab_size = len(feeder.am_vocab)
hp.final_output_dim = len(feeder.am_vocab)
hp.steps_per_epoch = len(feeder.wav_lst)//hp.batch_size
log('steps_per_epoch:' + str(hp.steps_per_epoch))
log('pinyin_vocab_size:'+str(hp.input_vocab_size))
# TODO:set up model
global_step = tf.Variable(initial_value=0,name='global_step',trainable=False)
valid_step = 0
with tf.variable_scope('model') as scope:
model = create_model(args.model,hp)
model.build_graph()
model.add_loss()
model.add_decoder()
model.add_optimizer(global_step=global_step)
# TODO: summary
stats = add_stats(model=model)
valid_stats = add_dev_stats(model)
# TODO:Set up saver and Bookkeeping
time_window = ValueWindow(100)
loss_window = ValueWindow(100)
# wer_window = ValueWindow(100)
valid_time_window = ValueWindow(100)
valid_loss_window = ValueWindow(100)
valid_wer_window = ValueWindow(100)
saver = tf.train.Saver(max_to_keep=20)
first_serving = True
# TODO: train
with tf.Session(graph=tf.get_default_graph()) as sess:
log(hparams_debug_string(hp))
try:
# TODO: Set writer and initializer
summary_writer = tf.summary.FileWriter(logdir+'/train', sess.graph)
summary_writer_dev = tf.summary.FileWriter(logdir+'/dev')
sess.run(tf.global_variables_initializer())
# TODO: Restore
if args.restore_step:
# Restore from a checkpoint if the user requested it.
restore_path = '%s-%d' % (checkpoint_path, args.restore_step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
else:
log('Starting new training run ')
step = 0
# TODO: epochs steps batch
for i in range(args.epochs):
batch_data = feeder.get_am_batch()
log('Traning epoch '+ str(i)+':')
for j in range(hp.steps_per_epoch):
input_batch = next(batch_data)
feed_dict = {model.inputs:input_batch['the_inputs'],
model.labels:input_batch['the_labels'],
model.input_lengths:input_batch['input_length'],
model.label_lengths:input_batch['label_length']}
# TODO: Run one step
start_time = time.time()
total_step, array_loss, batch_loss,opt = sess.run([global_step, model.ctc_loss,
model.batch_loss,model.optimize],feed_dict=feed_dict)
time_window.append(time.time() - start_time)
step = total_step
# TODO: Append loss
loss_window.append(batch_loss)
message = 'Step %-7d [%.03f sec/step, loss=%.05f, avg_loss=%.05f, lr=%.07f]' % (
step, time_window.average, batch_loss, loss_window.average,K.get_value(model.learning_rate))
log(message)
# TODO: Check loss
if math.isnan(batch_loss):
log('Loss exploded to %.05f at step %d!' % (batch_loss, step))
raise Exception('Loss Exploded')
# TODO: Check sumamry
if step % args.summary_interval == 0:
log('Writing summary at step: %d' % step)
summary_writer.add_summary(sess.run(stats,feed_dict=feed_dict), step)
# TODO: Check checkpoint
if step % args.checkpoint_interval == 0:
log('Saving checkpoint to: %s-%d' % (checkpoint_path, step))
saver.save(sess, checkpoint_path, global_step=step)
log('test acc...')
label,final_pred_label ,log_probabilities,y_pred2 = sess.run([
model.labels, model.decoded1, model.log_probabilities,model.pred_labels],feed_dict=feed_dict)
log('label.shape :'+str(label.shape)) # (batch_size , label_length)
log('final_pred_label.shape:'+str(np.asarray(final_pred_label).shape)) # (1, batch_size, decode_length<=label_length)
log('res_pred.shape : '+str(y_pred2.shape))
log('label : '+str(label[0]))
log('final_pred_label: '+str( np.asarray(final_pred_label)[0][0]))
log('res_pred : '+str( y_pred2[0]))
# TODO: serving
if args.serving :#and total_step // hp.steps_per_epoch > 5:
np.save('logdir/am_dict.npy',feeder.am_vocab)
print(total_step, 'hhhhhhhh')
# TODO: Set up serving builder and signature map
serve_dir = args.serving_dir + '0002'
if os.path.exists(serve_dir):
shutil.rmtree(serve_dir)
log('delete exists dirs:'+ serve_dir)
builder = tf.saved_model.builder.SavedModelBuilder(export_dir=serve_dir)
input_spec = tf.saved_model.utils.build_tensor_info(model.inputs)
input_len = tf.saved_model.utils.build_tensor_info(model.input_lengths)
output_labels = tf.saved_model.utils.build_tensor_info(model.decoded1)
output_logits = tf.saved_model.utils.build_tensor_info(model.pred_softmax)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'mfcc': input_spec, 'len': input_len},
outputs={'label': output_labels, 'logits': output_logits},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
)
if first_serving:
first_serving = False
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
main_op=tf.tables_initializer(),
strip_default_attrs=False
)
builder.save()
log('Done store serving-model')
raise Exception('Done store serving-model')
# TODO: Validation
# if total_step % hp.steps_per_epoch == 0 and i >= 10:
if total_step % hp.steps_per_epoch == 0 :
log('validation...')
valid_start = time.time()
# TODO: validation
valid_hp = copy.deepcopy(hp)
print('feature_type: ',hp.feature_type)
valid_hp.data_type = 'dev'
valid_hp.thchs30 = True
valid_hp.aishell = False
valid_hp.prime = False
valid_hp.stcmd = False
valid_hp.shuffle = False
valid_hp.data_length = None
valid_hp.batch_size = 2
valid_feeder = DataFeeder_wavnet(args=valid_hp)
valid_feeder.am_vocab = feeder.am_vocab
valid_batch_data = valid_feeder.get_am_batch()
log('valid_num_wavs:' + str(len(valid_feeder.wav_lst))) # 15219
valid_hp.input_vocab_size = len(valid_feeder.am_vocab)
valid_hp.final_output_dim = len(valid_feeder.am_vocab)
valid_hp.steps_per_epoch = len(valid_feeder.wav_lst) // valid_hp.batch_size
log('valid_steps_per_epoch:' + str(valid_hp.steps_per_epoch)) # 951
log('valid_pinyin_vocab_size:' + str(valid_hp.input_vocab_size)) # 1124
# valid_hp.label_vocab_size = len(valid_feeder.han_vocab)
# log('valid_label_vocab_size :' + str(valid_hp.label_vocab_size)) # 3327
# dev 只跑一个epoch就行
with tf.variable_scope('validation') as scope:
for k in range(len(valid_feeder.wav_lst) // valid_hp.batch_size):
valid_input_batch = next(valid_batch_data)
valid_feed_dict = {model.inputs: valid_input_batch['the_inputs'],
model.labels: valid_input_batch['the_labels'],
model.input_lengths: valid_input_batch['input_length'],
model.label_lengths: valid_input_batch['label_length']}
# TODO: Run one step
valid_start_time = time.time()
valid_labels,valid_batch_loss,valid_WER,valid_preds = sess.run([model.labels,model.batch_loss,model.WER,model.decoded1], feed_dict=valid_feed_dict)
valid_time_window.append(time.time() - valid_start_time)
valid_loss_window.append(valid_batch_loss)
valid_wer_window.append(valid_WER)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Valid-Step %-7d [%.03f sec/step, valid_loss=%.05f, avg_loss=%.05f, WER=%.05f, avg_WER=%.05f]' % (
valid_step, valid_time_window.average, valid_batch_loss, valid_loss_window.average,valid_WER,valid_wer_window.average)
log(message)
log('label.shape :' + str(valid_labels.shape)) # (batch_size , label_length)
log('final_pred_label.shape:' + str(
np.asarray(valid_preds).shape)) # (1, batch_size, decode_length<=label_length)
log('label : ' + str(valid_labels))
log('final_pred_label: ' + str(np.asarray(valid_preds)[0]))
summary_writer_dev.add_summary(sess.run(valid_stats,feed_dict=valid_feed_dict), valid_step)
valid_step += 1
log('Done Validation!Total Time Cost(sec):' + str(time.time()-valid_start))
except Exception as e:
log('Exiting due to exception: %s' % e)
traceback.print_exc()
def main():
parser = argparse.ArgumentParser()
# TODO: add arguments
parser.add_argument('--log_dir', default=os.path.expanduser('~/my_asr2/logdir/logging'))
parser.add_argument('--serving_dir', default=os.path.expanduser('~/my_asr2/logdir/serving_am/'))
parser.add_argument('--data_dir', default=os.path.expanduser('~/corpus_zn'))
parser.add_argument('--model', default='ASR_wavnet')
parser.add_argument('--epochs', type=int, help='Max epochs to run.', default=100)
parser.add_argument('--restore_step', type=int, help='Global step to restore from checkpoint.',default=2100)
parser.add_argument('--serving', type=bool, help='', default=False)
# parser.add_argument('--validation_interval', type=int, help='一个epoch验证5次,每次200步共3200条数据', default=7090) # 35450//5
parser.add_argument('--summary_interval', type=int, default=10,help='Steps between running summary ops.')
parser.add_argument('--checkpoint_interval', type=int, default=100, help='Steps between writing checkpoints.')
parser.add_argument('--hparams', default='',
help='Hyperparameter overrides as a comma-separated list of name=value pairs')
args = parser.parse_args()
run_name = args.model
log_dir = os.path.join(args.log_dir, 'logs-%s' % run_name)
os.makedirs(log_dir, exist_ok=True)
# TODO: launch init and train
init(os.path.join(log_dir, 'train.log'), run_name)
hp.parse(args.hparams)
train(log_dir, args)
if __name__ == '__main__':
main() | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
train_win.py | Python | import tensorflow as tf
import argparse
import os,math
from utils.mylogger import *
from utils import ValueWindow , plot
from hparams import hparams as hp
from datafeeder import DataFeeder,GetEditDistance
from models import create_model
import datetime,time
import traceback,random
import numpy as np
from keras import backend as K
def add_stats(model):
with tf.variable_scope('stats') as scope:
tf.summary.histogram('pred_logits', model.pred_logits)
tf.summary.histogram('labels', model.labels)
tf.summary.scalar('batch_loss', model.batch_loss)
tf.summary.scalar('learning_rate', model.learning_rate)
gradient_norms = [tf.norm(grad) for grad in model.gradients]
tf.summary.histogram('gradient_norm', gradient_norms)
tf.summary.scalar('max_gradient_norm', tf.reduce_max(gradient_norms))
return tf.summary.merge_all()
def add_dev_stats(model):
with tf.variable_scope('dev_stats') as scope:
tf.summary.scalar('dev_batch_loss',model.batch_loss)
return tf.summary.merge_all()
def time_string():
return datetime.now().strftime('%Y-%m-%d %H:%M')
def train(logdir,args):
# TODO:parse ckpt,arguments,hparams
checkpoint_path = os.path.join(logdir,'model.ckpt')
input_path = args.data_dir
log('Checkpoint path: %s' % checkpoint_path)
log('Loading training data from : %s ' % input_path)
log('Using model : %s' %args.model)
# TODO:set up datafeeder
with tf.variable_scope('datafeeder') as scope:
hp.aishell=True
hp.prime=False
hp.stcmd=False
hp.data_path = 'D:/pycharm_proj/corpus_zn/'
hp.initial_learning_rate = 0.001
hp.decay_learning_rate=False
hp.data_length = 512
hp.batch_size = 64
feeder = DataFeeder(args=hp)
log('num_wavs:'+str(len(feeder.wav_lst))) # 283600
hp.input_vocab_size = len(feeder.pny_vocab)
hp.final_output_dim = len(feeder.pny_vocab)
hp.steps_per_epoch = len(feeder.wav_lst)//hp.batch_size
log('steps_per_epoch:' + str(hp.steps_per_epoch)) # 17725
log('pinyin_vocab_size:'+str(hp.input_vocab_size)) # 1292
hp.label_vocab_size = len(feeder.han_vocab)
log('label_vocab_size :' + str(hp.label_vocab_size)) # 6291
# TODO:set up model
global_step = tf.Variable(initial_value=0,name='global_step',trainable=False)
valid_step = 0
# valid_global_step = tf.Variable(initial_value=0,name='valid_global_step',trainable=False)
with tf.variable_scope('model') as scope:
model = create_model(args.model,hp)
model.build_graph()
model.add_loss()
model.add_decoder()
model.add_optimizer(global_step=global_step)
# TODO: summary
stats = add_stats(model=model)
valid_stats = add_dev_stats(model)
# TODO:Set up saver and Bookkeeping
time_window = ValueWindow(100)
loss_window = ValueWindow(100)
wer_window = ValueWindow(100)
valid_time_window = ValueWindow(100)
valid_loss_window = ValueWindow(100)
valid_wer_window = ValueWindow(100)
saver = tf.train.Saver(max_to_keep=20)
first_serving = True
# TODO: train
with tf.Session() as sess:
try:
# TODO: Set writer and initializer
summary_writer = tf.summary.FileWriter(logdir, sess.graph)
sess.run(tf.global_variables_initializer())
# TODO: Restore
if args.restore_step:
# Restore from a checkpoint if the user requested it.
restore_path = '%s-%d' % (checkpoint_path, args.restore_step)
saver.restore(sess, restore_path)
log('Resuming from checkpoint: %s ' % restore_path)
else:
log('Starting new training run ')
step = 0
# TODO: epochs steps batch
for i in range(args.epochs):
batch_data = feeder.get_am_batch()
log('Traning epoch '+ str(i)+':')
for j in range(hp.steps_per_epoch):
input_batch = next(batch_data)
feed_dict = {model.inputs:input_batch['the_inputs'],
model.labels:input_batch['the_labels'],
model.input_lengths:input_batch['input_length'],
model.label_lengths:input_batch['label_length']}
# TODO: Run one step
start_time = time.time()
total_step, array_loss, batch_loss,opt = sess.run([global_step, model.ctc_loss,
model.batch_loss,model.optimize],feed_dict=feed_dict)
time_window.append(time.time() - start_time)
step = total_step
# TODO: Append loss
# loss = np.sum(array_loss).item()/hp.batch_size
loss_window.append(batch_loss)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Step %-7d [%.03f sec/step, loss=%.05f, avg_loss=%.05f, lr=%.07f]' % (
step, time_window.average, batch_loss, loss_window.average,K.get_value(model.learning_rate))
log(message)
# ctcloss返回值是[batch_size,1]形式的所以刚开始没sum报错only size-1 arrays can be converted to Python scalars
# TODO: Check loss
if math.isnan(batch_loss):
log('Loss exploded to %.05f at step %d!' % (batch_loss, step))
raise Exception('Loss Exploded')
# TODO: Check sumamry
if step % args.summary_interval == 0:
log('Writing summary at step: %d' % step)
summary_writer.add_summary(sess.run(stats,feed_dict=feed_dict), step)
# TODO: Check checkpoint
if step % args.checkpoint_interval == 0:
log('Saving checkpoint to: %s-%d' % (checkpoint_path, step))
saver.save(sess, checkpoint_path, global_step=step)
log('test acc...')
# eval_start_time = time.time()
# with tf.name_scope('eval') as scope:
# with open(os.path.expanduser('~/my_asr2/datasets/resource/preprocessedData/dev-meta.txt'), encoding='utf-8') as f:
# metadata = [line.strip().split('|') for line in f]
# random.shuffle(metadata)
# eval_loss = []
# batch_size = args.hp.batch_size
# batchs = len(metadata)//batch_size
# for i in range(batchs):
# batch = metadata[i*batch_size : i*batch_size+batch_size]
# batch = list(map(eval_get_example,batch))
# batch = eval_prepare_batch(batch)
# feed_dict = {'labels':batch[0],}
# label,final_pred_label ,log_probabilities = sess.run([
# model.labels[0], model.decoded[0], model.log_probabilities[0]])
# # 刚开始没有加[]会报错 https://github.com/tensorflow/tensorflow/issues/11840
# print('label: ' ,label)
# print('final_pred_label: ', final_pred_label[0])
# log('eval time: %.03f, avg_eval_loss: %.05f' % (time.time()-eval_start_time,np.mean(eval_loss)))
label,final_pred_label ,log_probabilities,y_pred2 = sess.run([
model.labels, model.decoded, model.log_probabilities,model.y_pred2],feed_dict=feed_dict)
# 刚开始没有加[]会报错 https://github.com/tensorflow/tensorflow/issues/11840
print('label.shape :',label.shape) # (batch_size , label_length)
print('final_pred_label.shape:',np.asarray(final_pred_label).shape) # (1, batch_size, decode_length<=label_length)
print('y_pred2.shape : ', y_pred2.shape)
print('label: ' ,label[0])
print('y_pred2 : ', y_pred2[0])
print('final_pred_label: ', np.asarray(final_pred_label)[0][0])
# 刚开始打不出来,因为使用的tf.nn.ctc_beam_decoder,这个返回的是sparse tensor所以打不出来
# 后来用keras封装的decoder自动将sparse转成dense tensor才能打出来
# waveform = audio.inv_spectrogram(spectrogram.T)
# audio.save_wav(waveform, os.path.join(logdir, 'step-%d-audio.wav' % step))
# plot.plot_alignment(alignment, os.path.join(logdir, 'step-%d-align.png' % step),
# info='%s, %s, %s, step=%d, loss=%.5f' % (
# args.model, commit, time_string(), step, loss))
# log('Input: %s' % sequence_to_text(input_seq))
# TODO: Check stop
if step % hp.steps_per_epoch ==0:
# TODO: Set up serving builder and signature map
serve_dir = args.serving_dir + '_' + str(total_step//hp.steps_per_epoch -1)
if os.path.exists(serve_dir):
os.removedirs(serve_dir)
builder = tf.saved_model.builder.SavedModelBuilder(export_dir=serve_dir)
input_spec = tf.saved_model.utils.build_tensor_info(model.inputs)
input_len = tf.saved_model.utils.build_tensor_info(model.input_lengths)
output_labels = tf.saved_model.utils.build_tensor_info(model.decoded2)
output_logits = tf.saved_model.utils.build_tensor_info(model.pred_logits)
prediction_signature = (
tf.saved_model.signature_def_utils.build_signature_def(
inputs={'spec': input_spec, 'len': input_len},
outputs={'label': output_labels, 'logits': output_logits},
method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)
)
if first_serving:
first_serving = False
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING, 'ASR'],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
main_op=tf.tables_initializer(),
strip_default_attrs=True
)
else:
builder.add_meta_graph_and_variables(
sess=sess, tags=[tf.saved_model.tag_constants.SERVING, 'ASR'],
signature_def_map={
'predict_AudioSpec2Pinyin':
prediction_signature,
},
strip_default_attrs=True
)
builder.save()
log('Done store serving-model')
# TODO: Validation
if step % hp.steps_per_epoch ==0 and i >= 10:
log('validation...')
valid_start = time.time()
# TODO: validation
valid_hp = hp
valid_hp.data_type = 'dev'
valid_hp.thchs30 = True
valid_hp.aishell = False
valid_hp.prime = False
valid_hp.stcmd = False
valid_hp.shuffle = True
valid_hp.data_length = None
valid_feeder = DataFeeder(args=valid_hp)
valid_batch_data = valid_feeder.get_am_batch()
log('valid_num_wavs:' + str(len(valid_feeder.wav_lst))) # 15219
valid_hp.input_vocab_size = len(valid_feeder.pny_vocab)
valid_hp.final_output_dim = len(valid_feeder.pny_vocab)
valid_hp.steps_per_epoch = len(valid_feeder.wav_lst) // valid_hp.batch_size
log('valid_steps_per_epoch:' + str(valid_hp.steps_per_epoch)) # 951
log('valid_pinyin_vocab_size:' + str(valid_hp.input_vocab_size)) # 1124
valid_hp.label_vocab_size = len(valid_feeder.han_vocab)
log('valid_label_vocab_size :' + str(valid_hp.label_vocab_size)) # 3327
words_num = 0
word_error_num = 0
# dev 只跑一个epoch就行
with tf.variable_scope('validation') as scope:
for k in range(len(valid_feeder.wav_lst) // valid_hp.batch_size):
valid_input_batch = next(valid_batch_data)
valid_feed_dict = {model.inputs: valid_input_batch['the_inputs'],
model.labels: valid_input_batch['the_labels'],
model.input_lengths: valid_input_batch['input_length'],
model.label_lengths: valid_input_batch['label_length']}
# TODO: Run one step
valid_start_time = time.time()
valid_batch_loss,valid_WER = sess.run([model.batch_loss,model.WER], feed_dict=valid_feed_dict)
valid_time_window.append(time.time() - valid_start_time)
valid_loss_window.append(valid_batch_loss)
valid_wer_window.append(valid_WER)
# print('loss',loss,'batch_loss',batch_loss)
message = 'Valid-Step %-7d [%.03f sec/step, valid_loss=%.05f, avg_loss=%.05f, WER=%.05f, avg_WER=%.05f, lr=%.07f]' % (
valid_step, valid_time_window.average, valid_batch_loss, valid_loss_window.average,valid_WER,valid_wer_window.average,K.get_value(model.learning_rate))
log(message)
summary_writer.add_summary(sess.run(valid_stats,feed_dict=valid_feed_dict), valid_step)
valid_step += 1
log('Done Validation!Total Time Cost(sec):' + str(time.time()-valid_start))
except Exception as e:
log('Exiting due to exception: %s' % e)
traceback.print_exc()
def main():
parser = argparse.ArgumentParser()
# TODO: add arguments
parser.add_argument('--log_dir', default='D:/pycharm_proj/temp_log_from_server/win-logs/')
parser.add_argument('--serving_dir', default='D:/pycharm_proj/temp_log_from_server/win-logs/serving/')
parser.add_argument('--data_dir', default='D:\pycharm_proj/corpus_zn/')
parser.add_argument('--model', default='ASR')
parser.add_argument('--epochs', type=int, help='Max epochs to run.', default=100)
parser.add_argument('--restore_step', type=int, help='Global step to restore from checkpoint.',default=None)
# parser.add_argument('--serving_interval', type=int, help='', default=10000)
# parser.add_argument('--validation_interval', type=int, help='一个epoch验证5次,每次200步共3200条数据', default=10000) # 35450//5
parser.add_argument('--summary_interval', type=int, default=10,help='Steps between running summary ops.')
parser.add_argument('--checkpoint_interval', type=int, default=100, help='Steps between writing checkpoints.')
parser.add_argument('--hparams', default='',
help='Hyperparameter overrides as a comma-separated list of name=value pairs')
args = parser.parse_args()
run_name = args.model
log_dir = os.path.join(args.log_dir, 'logs-%s' % run_name)
os.makedirs(log_dir, exist_ok=True)
# TODO: launch init and train
init(os.path.join(log_dir, 'train.log'), run_name)
hp.parse(args.hparams)
train(log_dir, args)
if __name__ == '__main__':
main() | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
utils/__init__.py | Python | from .mylogger import *
from .valuewindow import *
class ValueWindow():
def __init__(self, window_size=100):
self._window_size = window_size
self._values = []
def append(self, x):
self._values = self._values[-(self._window_size - 1):] + [x]
@property
def sum(self):
return sum(self._values)
@property
def count(self):
return len(self._values)
@property
def average(self):
return self.sum / max(1, self.count)
def reset(self):
self._values = [] | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
utils/audio.py | Python | import librosa
import librosa.filters
import numpy as np
import tensorflow as tf
import scipy
from hparams import hparams
def load_wav(path):
return librosa.core.load(path, sr=hparams.sample_rate)[0]
def save_wav(wav, path):
wav *= 32767 / max(0.01, np.max(np.abs(wav)))
scipy.io.wavfile.write(path, hparams.sample_rate, wav.astype(np.int16))
def preemphasis(x):
return scipy.signal.lfilter([1, -hparams.preemphasis], [1], x)
def inv_preemphasis(x):
return scipy.signal.lfilter([1], [1, -hparams.preemphasis], x)
def spectrogram(y):
D = _stft(preemphasis(y)) # [n-fft, n-frames]
S = _amp_to_db(np.abs(D)) - hparams.ref_level_db
return _normalize(S)
# def inv_spectrogram(spectrogram):
# '''Converts spectrogram to waveform using librosa'''
# S = _db_to_amp(_denormalize(spectrogram) + hparams.ref_level_db) # Convert back to linear
# return inv_preemphasis(_griffin_lim(S ** hparams.power)) # Reconstruct phase
# def inv_spectrogram_tensorflow(spectrogram):
# '''Builds computational graph to convert spectrogram to waveform using TensorFlow.
#
# Unlike inv_spectrogram, this does NOT invert the preemphasis. The caller should call
# inv_preemphasis on the output after running the graph.
# '''
# S = _db_to_amp_tensorflow(_denormalize_tensorflow(spectrogram) + hparams.ref_level_db)
# return _griffin_lim_tensorflow(tf.pow(S, hparams.power))
def melspectrogram(y):
D = _stft(preemphasis(y)) # [n-fft, n-frames]
S = _amp_to_db(_linear_to_mel(np.abs(D))) - hparams.ref_level_db # [n-mel, n-frames]
return _normalize(S)
def mfcc(y):
D = _stft(preemphasis(y)) # [n-fft, n-frames]
S = _amp_to_db(_linear_to_mel(np.abs(D))) - hparams.ref_level_db # [n-mel, n-frames]
MFCCs = np.dot(librosa.filters.dct(hparams.num_mfcc, S.shape[0]), S)
return MFCCs
def find_endpoint(wav, threshold_db=-40, min_silence_sec=0.8):
window_length = int(hparams.sample_rate * min_silence_sec)
hop_length = int(window_length / 4)
threshold = _db_to_amp(threshold_db)
for x in range(hop_length, len(wav) - window_length, hop_length):
if np.max(wav[x:x+window_length]) < threshold:
return x + hop_length
return len(wav)
# def _griffin_lim(S):
# '''librosa implementation of Griffin-Lim
# Based on https://github.com/librosa/librosa/issues/434
# '''
# angles = np.exp(2j * np.pi * np.random.rand(*S.shape))
# S_complex = np.abs(S).astype(np.complex)
# y = _istft(S_complex * angles)
# for i in range(hparams.griffin_lim_iters):
# angles = np.exp(1j * np.angle(_stft(y)))
# y = _istft(S_complex * angles)
# return y
# def _griffin_lim_tensorflow(S):
# '''TensorFlow implementation of Griffin-Lim
# Based on https://github.com/Kyubyong/tensorflow-exercises/blob/master/Audio_Processing.ipynb
# '''
# with tf.variable_scope('griffinlim'):
# # TensorFlow's stft and istft operate on a batch of spectrograms; create batch of size 1
# S = tf.expand_dims(S, 0)
# S_complex = tf.identity(tf.cast(S, dtype=tf.complex64))
# y = _istft_tensorflow(S_complex)
# for i in range(hparams.griffin_lim_iters):
# est = _stft_tensorflow(y)
# angles = est / tf.cast(tf.maximum(1e-8, tf.abs(est)), tf.complex64)
# y = _istft_tensorflow(S_complex * angles)
# return tf.squeeze(y, 0)
def _stft(y):
n_fft, hop_length, win_length = _stft_parameters()
return librosa.stft(y=y, n_fft=n_fft, hop_length=hop_length, win_length=win_length)
def _istft(y):
_, hop_length, win_length = _stft_parameters()
return librosa.istft(y, hop_length=hop_length, win_length=win_length)
def _stft_tensorflow(signals):
n_fft, hop_length, win_length = _stft_parameters()
return tf.contrib.signal.stft(signals, win_length, hop_length, n_fft, pad_end=False)
def _istft_tensorflow(stfts):
n_fft, hop_length, win_length = _stft_parameters()
return tf.contrib.signal.inverse_stft(stfts, win_length, hop_length, n_fft)
def _stft_parameters():
n_fft = (hparams.num_freq - 1) * 2
hop_length = int(hparams.frame_shift_ms / 1000 * hparams.sample_rate)
win_length = int(hparams.frame_length_ms / 1000 * hparams.sample_rate)
return n_fft, hop_length, win_length
# Conversions:
_mel_basis = None
def _linear_to_mel(spectrogram):
global _mel_basis
if _mel_basis is None:
_mel_basis = _build_mel_basis()
return np.dot(_mel_basis, spectrogram)
def _build_mel_basis():
n_fft = (hparams.num_freq - 1) * 2
return librosa.filters.mel(hparams.sample_rate, n_fft, n_mels=hparams.num_mels)
def _amp_to_db(x):
return 20 * np.log10(np.maximum(1e-5, x))
def _db_to_amp(x):
return np.power(10.0, x * 0.05)
def _db_to_amp_tensorflow(x):
return tf.pow(tf.ones(tf.shape(x)) * 10.0, x * 0.05)
def _normalize(S):
return np.clip((S - hparams.min_level_db) / -hparams.min_level_db, 0, 1)
def _denormalize(S):
return (np.clip(S, 0, 1) * -hparams.min_level_db) + hparams.min_level_db
def _denormalize_tensorflow(S):
return (tf.clip_by_value(S, 0, 1) * -hparams.min_level_db) + hparams.min_level_db
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
utils/mylogger.py | Python | import logging
from termcolor import colored
from datetime import datetime
import sys
# set up logger
class _MyFormatter(logging.Formatter):
def format(self, record):
date = colored('[%(asctime)s @%(filename)s:%(lineno)d]', 'green')
msg = '%(message)s'
if record.levelno == logging.WARNING:
fmt = date + ' ' + colored('WRN', 'red', attrs=['blink']) + ' ' + msg
elif record.levelno == logging.ERROR or record.levelno == logging.CRITICAL:
fmt = date + ' ' + colored('ERR', 'red', attrs=['blink', 'underline']) + ' ' + msg
else:
fmt = date + ' ' + msg
if hasattr(self, '_style'):
# Python3 compatibility
self._style._fmt = fmt
self._fmt = fmt
return super(_MyFormatter, self).format(record)
logger = logging.getLogger('SXC')
logger.propagate = False
logger.setLevel(logging.INFO)
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(_MyFormatter(datefmt='%m%d %H:%M:%S'))
logger.addHandler(handler)
# set up log_file
_file = None
_run_name = None
_slack_url = None
_format = '%Y-%m-%d %H:%M:%S.%f'
def init(filename,run_name,slack_url=None):
global _file, _run_name, _slack_url
_close_logfile()
_file = open(filename, 'a', encoding="utf-8")
_file.write('\n-----------------------------------------------------------------\n')
_file.write('Starting new training run\n')
_file.write('-----------------------------------------------------------------\n')
_run_name = run_name
_slack_url = slack_url
def _close_logfile():
global _file
if _file is not None:
_file.close()
_file = None
def log(msg):
logger.info(msg)
if _file is not None:
_file.write('[%s] %s\n' % (datetime.now().strftime(_format)[:-3], msg)) | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
utils/plot.py | Python | import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.ticker import MultipleLocator
def plot_alignment(true_labels, pred_labels, info=None):
lens = len(true_labels)
matrix = np.zeros(shape=[lens,lens],dtype=np.int32)
for j in range(lens):
for i in range(lens):
matrix[i][j] = 1 if pred_labels[i]==true_labels[j] else 0
# plot
plt.switch_backend('agg')
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(matrix)
fig.colorbar(cax)
ax.xaxis.set_major_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(1))
for i in range(matrix.shape[0]):
ax.text(i, i, str('%.2f' % (matrix[i, i] * 100)), va='center', ha='center')
ax.set_xticklabels([''] + true_labels, rotation=90)
ax.set_yticklabels([''] + true_labels)
plt.savefig('temp.png', format='png') | xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
utils/valuewindow.py | Python | class ValueWindow():
def __init__(self, window_size=100):
self._window_size = window_size
self._values = []
def append(self, x):
self._values = self._values[-(self._window_size - 1):] + [x]
@property
def sum(self):
return sum(self._values)
@property
def count(self):
return len(self._values)
@property
def average(self):
return self.sum / max(1, self.count)
def reset(self):
self._values = []
| xingchensong/ASR-Wavnet | 5 | some ASR-system implementations (via tensorflow 1.x) | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
setup.py | Python | import setuptools
PACKAGE_NAME = "cosyvoice_ttsfrd"
# Include package data in `setup.cfg` or `setup.py`
# setuptools will automatically handle configurations in pyproject.toml, but we need to ensure files are included
setuptools.setup(
# Ensure specific files in the bundled_files directory are included, but exclude resource.zip
package_data={
f'{PACKAGE_NAME}': [
'bundled_files/*.whl',
# 'bundled_files/resource.zip' is excluded, will be downloaded from the network
],
}
)
| xingchensong/CosyVoice-ttsfrd | 25 | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) | |
src/cosyvoice_ttsfrd/__init__.py | Python | __version__ = "0.4.2"
import pathlib
# Provide a helper function or variable to assist users in finding the decompressed resources.
# The resources are expected to be in the parent directory of this file.
PACKAGE_ROOT = pathlib.Path(__file__).parent
RESOURCE_PATH = PACKAGE_ROOT / "resource" # Assume the decompressed directory is called resource
print(f"CosyVoice-ttsfrd initialized. Resources are expected in: {RESOURCE_PATH}")
def get_resource_path():
"""Returns the path to the unzipped resource directory."""
return str(RESOURCE_PATH)
| xingchensong/CosyVoice-ttsfrd | 25 | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) | |
src/cosyvoice_ttsfrd/post_install.py | Python | import sys
import subprocess
import pathlib
import zipfile
import urllib.request
import hashlib
def download_with_progress(url, dest_path, expected_sha256=None):
print(f"Downloading {url}...")
def progress_hook(block_num, block_size, total_size):
if total_size > 0:
percent = min(100, (block_num * block_size * 100) // total_size)
print(f"\rDownload progress: {percent}%", end="", flush=True)
urllib.request.urlretrieve(url, dest_path, progress_hook)
print()
if expected_sha256:
print("Verifying file integrity...")
with open(dest_path, 'rb') as f:
file_hash = hashlib.sha256(f.read()).hexdigest()
if file_hash != expected_sha256:
raise ValueError(f"File integrity check failed. Expected: {expected_sha256}, Got: {file_hash}")
print("File integrity verified.")
def main():
"""
This function will be called when the user runs the `cosyvoice-post-install` command.
"""
try:
# Find the directory where this file is located, to locate the entire package
wrapper_package_dir = pathlib.Path(__file__).parent
bundled_files_dir = wrapper_package_dir / "bundled_files"
resource_dir = wrapper_package_dir / "resource"
print("--- Running Post-Installation Setup for CosyVoice-TTSFRD ---")
# 1. Download and unzip resource.zip from GitHub releases
resource_zip_path = wrapper_package_dir / "resource.zip"
if not resource_dir.exists() or not any(resource_dir.iterdir()):
print("Downloading resources from GitHub releases...")
# GitHub release URL
resource_url = "https://github.com/xingchensong/CosyVoice-ttsfrd/releases/download/v0.4.3/resource.zip"
expected_sha256 = "dcb3970fd4f52d036f245493360d97d0da1014f917deb4b9d83a3ded97483113"
try:
download_with_progress(resource_url, resource_zip_path, expected_sha256)
print(f"Unzipping resources to {wrapper_package_dir}...")
with zipfile.ZipFile(resource_zip_path, 'r') as zip_ref:
zip_ref.extractall(wrapper_package_dir)
resource_zip_path.unlink()
print("Resources downloaded and extracted successfully.")
except Exception as e:
print(f"Failed to download resources: {e}")
print("Please check your internet connection and try again.")
sys.exit(1)
else:
print("Resources already exist, skipping download.")
# 2. Install the dependency and main program whl files
pip_command = [sys.executable, "-m", "pip"]
whl_files = [
"ttsfrd_dependency-0.1-py3-none-any.whl",
"ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl"
]
for whl in whl_files:
whl_path = bundled_files_dir / whl
if whl_path.exists():
print(f"Installing {whl} from bundled file...")
subprocess.check_call(pip_command + ["install", str(whl_path)])
else:
print(f"ERROR: Could not find {whl_path}")
sys.exit(1)
print("\n--- ✅ Post-Installation Finished Successfully! ---")
print("You can now use the 'ttsfrd' module in your Python scripts.")
except Exception as e:
print("\n--- ❌ An error occurred during post-installation ---")
print(e)
sys.exit(1)
if __name__ == "__main__":
main()
| xingchensong/CosyVoice-ttsfrd | 25 | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) | |
flashcosyvoice/cli.py | Python | # Copyright (c) 2025 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Example Usage: see README.md
"""
import argparse
import json
import os
import random
import sys
import time
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime
import numpy as np
import onnxruntime
import s3tokenizer
import torch
import torch.distributed as dist
import torchaudio
import torchaudio.compliance.kaldi as kaldi
from torch.utils.data import DataLoader, Dataset, DistributedSampler
from tqdm import tqdm
from flashcosyvoice.config import Config, CosyVoice2LLMConfig, SamplingParams
from flashcosyvoice.cosyvoice2 import CosyVoice2
from flashcosyvoice.utils.audio import mel_spectrogram
def set_all_random_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
def save_file_async(
wav, prompt_speech_tokens, generated_speech_tokens,
info, timing_stats
):
"""Save audio asynchronously."""
try:
os.makedirs(os.path.dirname(info['wav']), exist_ok=True)
if wav is not None:
wav = wav.cpu()
# Support saving both wav and flac
if info['wav'].endswith('.flac'):
torchaudio.save(info['wav'], wav, 24000, format="flac")
else: # By default, we save it as wav
torchaudio.save(info['wav'], wav, 24000)
duration = wav.shape[-1] / 24000.0
rtf = ((timing_stats['dataloader_time'] + timing_stats['model_inference_time']) / timing_stats['batch_size']) / duration
timing_stats['rtf'] = rtf
else:
duration = 0.0
info['timing_stats'] = timing_stats
info['prompt_speech_tokens'] = prompt_speech_tokens
info['generated_speech_tokens'] = generated_speech_tokens
# Support saving both wav and flac, and make it more flexible
json_path = os.path.splitext(info['wav'])[0] + '.json'
with open(json_path, "w") as f:
json.dump(info, f, ensure_ascii=False, indent=4)
return duration
except Exception as e:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [ERROR] - Error saving audio {info.get('key', 'unknown')}: {e}")
return 0.0
class AudioDataset(Dataset):
def __init__(self, text_norm, text_tokenizer, data_list, model_config: Config):
self.datas = []
self.text_norm = text_norm
self.model_config = model_config
"""Example data_list:
```
{"key": "uttid_1", "prompt_text": "你好,我是小明。", "text": "你好,我是小红。", "prompt_wav": "/mnt/data/audio/00000000.wav", "wav": "/mnt/data/audio_synthetic/uttid_1.wav"}
{"key": "uttid_2", "prompt_text": "你好,我是小红。", "text": "你好,我是小明。", "prompt_wav": "/mnt/data/audio/00000001.wav", "wav": "/mnt/data/audio_synthetic/uttid_2.wav"}
```
Note:
- `key` is the key of this sample.
- `prompt_text` is the text used for prompt.
- `text` is the text used for generating real audio.
- `prompt_wav` is the audio used for prompt.
- `wav` is the path to the generated audio to be saved (we highly recommend to pre-define the save path before running the script).
"""
missing = 0
with open(data_list, 'r', encoding='utf-8') as f:
lines = f.readlines()
total_lines = len(lines)
if torch.distributed.get_node_local_rank() == 0:
iterator = tqdm(lines, desc='Loading data')
else:
iterator = lines
for line in iterator:
data = json.loads(line.strip())
valid = True
for k in ['key', 'prompt_text', 'text', 'prompt_wav']:
if k not in data:
valid = False
break
if data[k] is None:
valid = False
break
if not os.path.exists(data['prompt_wav']):
valid = False
if valid:
self.datas.append(data)
else:
missing += 1
if torch.distributed.get_node_local_rank() == 0:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f'[{timestamp}] - [INFO] - Loaded {total_lines} lines, found {missing} missing lines, total valid lines == {len(self.datas)}.')
self.text_tokenizer = text_tokenizer
option = onnxruntime.SessionOptions()
option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
option.intra_op_num_threads = 1
self.spk_model = onnxruntime.InferenceSession(f"{self.model_config.model}/campplus.onnx", sess_options=option,
providers=["CPUExecutionProvider"])
def __len__(self):
return len(self.datas)
def __getitem__(self, idx):
data = self.datas[idx]
try:
# 1. feature for s3tokenizer
audio = s3tokenizer.load_audio(data['prompt_wav'], sr=16000) # [T]
log_mel = s3tokenizer.log_mel_spectrogram(audio) # [num_mels, T]
# 2. feature for speaker embedding
spk_feat = kaldi.fbank(audio.unsqueeze(0), num_mel_bins=80, dither=0, sample_frequency=16000)
spk_feat = spk_feat - spk_feat.mean(dim=0, keepdim=True)
spk_emb = self.spk_model.run(
None, {self.spk_model.get_inputs()[0].name: spk_feat.unsqueeze(dim=0).cpu().numpy()}
)[0].flatten().tolist()
# 3. feature for flow
audio, sample_rate = torchaudio.load(data['prompt_wav'], backend='soundfile')
audio = audio.mean(dim=0, keepdim=True) # [1, T]
if sample_rate != 24000:
audio = torchaudio.transforms.Resample(orig_freq=sample_rate, new_freq=24000)(audio)
mel = mel_spectrogram(audio).transpose(1, 2).squeeze(0) # [T, num_mels]
mel_len = mel.shape[0]
# 4. feature for llm
if self.text_norm is not None:
prompt_texts = [i["text"] for i in json.loads(self.text_norm.do_voicegen_frd(data['prompt_text'].strip()))["sentences"]]
prompt_text = ''.join(prompt_texts)
texts = [i["text"] for i in json.loads(self.text_norm.do_voicegen_frd(data['text'].strip()))["sentences"]]
text = ''.join(texts)
else:
prompt_text = data['prompt_text']
text = data['text']
prompt_text_ids = self.text_tokenizer.encode(prompt_text)
prompt_text_ids = [i + self.model_config.hf_config.speech_vocab_size + 2 for i in prompt_text_ids]
text_ids = self.text_tokenizer.encode(text)
text_ids = [i + self.model_config.hf_config.speech_vocab_size + 2 for i in text_ids]
item = {
"prompt_text_tokens": prompt_text_ids, "text_tokens": text_ids,
"spk_emb": spk_emb, "mel": mel, "mel_len": mel_len, "log_mel": log_mel, "info": data,
"min_tokens": len(text_ids) * self.model_config.min_token_text_ratio,
"max_tokens": len(text_ids) * self.model_config.max_token_text_ratio,
}
except Exception as e:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [WARNING] - Error processing data item {data.get('key', idx)}: {e}")
return None
return item
def collate_fn(batch):
prompt_mels_for_llm = [item["log_mel"] for item in batch if item is not None]
prompt_mels_for_llm, prompt_mels_lens_for_llm = s3tokenizer.padding(prompt_mels_for_llm) # [B, num_mels=128, T]
prompt_text_tokens_for_llm = [item["prompt_text_tokens"] for item in batch if item is not None]
text_tokens_for_llm = [item["text_tokens"] for item in batch if item is not None]
prompt_mels_for_flow = [item["mel"] for item in batch if item is not None]
prompt_mels_for_flow = torch.nn.utils.rnn.pad_sequence(prompt_mels_for_flow, batch_first=True, padding_value=0) # [B, T', num_mels=80]
prompt_mels_lens_for_flow = [item["mel_len"] for item in batch if item is not None]
prompt_mels_lens_for_flow = torch.tensor(prompt_mels_lens_for_flow)
spk_emb_for_flow = [item["spk_emb"] for item in batch if item is not None]
spk_emb_for_flow = torch.tensor(spk_emb_for_flow)
sampling_params = [SamplingParams(min_tokens=item["min_tokens"], max_tokens=item["max_tokens"], use_ras=True) for item in batch if item is not None]
infos = [item["info"] for item in batch if item is not None]
return {
"prompt_mels_for_llm": prompt_mels_for_llm,
"prompt_mels_lens_for_llm": prompt_mels_lens_for_llm,
"prompt_text_tokens_for_llm": prompt_text_tokens_for_llm,
"text_tokens_for_llm": text_tokens_for_llm,
"prompt_mels_for_flow": prompt_mels_for_flow,
"prompt_mels_lens_for_flow": prompt_mels_lens_for_flow,
"spk_emb_for_flow": spk_emb_for_flow,
"sampling_params": sampling_params,
"infos": infos,
}
def init_distributed():
world_size = int(os.environ.get('WORLD_SIZE', 1))
local_rank = int(os.environ.get('LOCAL_RANK', 0))
rank = int(os.environ.get('RANK', 0))
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f'[{timestamp}] - [INFO] - Inference on multiple gpus, this gpu {local_rank}, rank {rank}, world_size {world_size}')
torch.cuda.set_device(local_rank)
dist.init_process_group("nccl")
return world_size, local_rank, rank
def get_args():
parser = argparse.ArgumentParser(description='FlashCosyVoice')
parser.add_argument('--model_path',
required=True,
type=str,
help='model path')
parser.add_argument('--data_list',
required=True,
type=str,
help='data list')
parser.add_argument('--batch_size_dataloader',
required=True,
type=int,
help='batch size (per-device) for dataloading')
parser.add_argument('--batch_size_flow',
required=True,
type=int,
help='batch size (per-device) for flow-matching')
parser.add_argument('--num_workers',
type=int,
default=4,
help='workers for dataloader')
parser.add_argument('--prefetch',
type=int,
default=5,
help='prefetch for dataloader')
parser.add_argument('--enable_tn',
action='store_true',
help='enable text normalization')
parser.add_argument('--only_llm',
action='store_true',
help='only generate speech tokens from llm')
parser.add_argument('--fp16_flow',
action='store_true',
help='enable fp16 flow')
parser.add_argument('--seed',
type=int,
default=1986,
help='random seed for generation')
args = parser.parse_args()
return args
def main():
args = get_args()
if args.enable_tn:
# Check python version, if == 3.10, use ttsfrd
if sys.version_info.major == 3 and sys.version_info.minor == 10:
# Check if ttsfrd is installed
try:
import ttsfrd
from cosyvoice_ttsfrd import get_resource_path
except ImportError as e:
raise ImportError("ttsfrd is not installed, please install it first, see `https://github.com/xingchensong/CosyVoice-ttsfrd` for installation guide.") from e
text_norm = ttsfrd.TtsFrontendEngine()
text_norm.initialize(get_resource_path())
text_norm.set_lang_type('pinyinvg')
else:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [WARNING] - Only python 3.10 is supported for ttsfrd, see `https://github.com/xingchensong/CosyVoice-ttsfrd` for more info. Setting enable_tn to False...")
# TODO: maybe we should use wetext if python version is not 3.10?
args.enable_tn = False
text_norm = None
else:
text_norm = None
assert (torch.cuda.is_available())
world_size, local_rank, rank = init_distributed()
config = Config(model=args.model_path, enforce_eager=True, tensor_parallel_size=1,
max_num_seqs=args.batch_size_dataloader,
hf_config=CosyVoice2LLMConfig(fp16_flow=args.fp16_flow), rank=local_rank)
model = CosyVoice2(config)
set_all_random_seed(args.seed)
dataset = AudioDataset(text_norm, model.llm.tokenizer, args.data_list, config)
sampler = DistributedSampler(dataset,
num_replicas=world_size,
rank=rank)
dataloader = DataLoader(dataset, batch_size=args.batch_size_dataloader, num_workers=args.num_workers, pin_memory=True,
sampler=sampler, shuffle=False, prefetch_factor=args.prefetch, collate_fn=collate_fn)
total_steps = len(dataset)
if local_rank == 0:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - {args}")
progress_bar = tqdm(total=total_steps, desc="Processing samples", unit="wav",
position=0, leave=True, dynamic_ncols=True)
cpu_counts = os.cpu_count()
executor = ThreadPoolExecutor(max_workers=min(args.batch_size_dataloader, cpu_counts // 8))
pending_futures = []
dataloader_iter = iter(dataloader)
succeed_duration = 0.01 # avoid division by zero
start_time = time.time()
estimated_total_wavs = 0
succeed_wavs = 0
failed_wavs = 0
last_print_time = start_time
while True:
try:
dataloader_start = time.time()
batch = next(dataloader_iter)
dataloader_time = time.time() - dataloader_start
if len(batch['infos']) == 0:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [WARNING] - rank {rank} of {world_size}: No valid batch found, skipping this batch...")
continue
model_start = time.time()
results_dict, timing_stats = model(**batch, batch_size_flow=args.batch_size_flow,
only_llm=args.only_llm)
model_time = time.time() - model_start
estimated_total_wavs += len(results_dict['generated_wavs'])
timing_stats['dataloader_time'] = dataloader_time
timing_stats['model_inference_time'] = model_time
if args.only_llm:
results_dict['generated_wavs'] = [None] * len(results_dict['prompt_speech_tokens'])
for i in range(len(results_dict['generated_wavs'])):
future = executor.submit(
save_file_async, results_dict['generated_wavs'][i],
results_dict['prompt_speech_tokens'][i],
results_dict['generated_speech_tokens'][i],
batch['infos'][i].copy(), timing_stats.copy()
)
pending_futures.append(future)
completed_futures = []
for future in pending_futures:
if future.done():
try:
duration = future.result()
succeed_duration += duration
succeed_wavs += 1
except Exception as e:
failed_wavs += 1
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [ERROR] - rank {rank} of {world_size}: Error in async save task: {e}")
completed_futures.append(future)
for future in completed_futures:
pending_futures.remove(future)
if local_rank == 0:
update_n = world_size * len(batch["prompt_text_tokens_for_llm"])
if progress_bar.n + update_n > progress_bar.total:
progress_bar.update(progress_bar.total - progress_bar.n)
else:
progress_bar.update(update_n)
current_time = time.time()
if current_time - last_print_time >= 120 and not args.only_llm:
elapsed_time = current_time - start_time
avg_duration = succeed_duration / succeed_wavs if succeed_wavs > 0 else 0
estimated_total_duration = avg_duration * estimated_total_wavs
current_rtf = elapsed_time / estimated_total_duration if estimated_total_duration > 0.01 else 0
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - rank {rank} of {world_size}: Estimated total wavs: {estimated_total_wavs} ({estimated_total_wavs - succeed_wavs} pending to save), Succeed wavs: {succeed_wavs}, Failed wavs: {failed_wavs}, Estimated total duration: {estimated_total_duration:.2f}s ({estimated_total_duration / 3600:.2f} h), Estimated RTF: {current_rtf:.5f}, Elapsed time: {elapsed_time:.2f}s") # noqa
last_print_time = current_time
except StopIteration:
break
except Exception as e:
failed_wavs += 1
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [ERROR] - rank {rank} of {world_size}: Error in main loop: {e}")
continue
total_time = time.time() - start_time
if local_rank == 0:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - Waiting for {len(pending_futures)} pending save tasks to complete...")
for future in pending_futures:
try:
duration = future.result(timeout=60)
succeed_duration += duration
succeed_wavs += 1
except Exception as e:
failed_wavs += 1
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [ERROR] - rank {rank} of {world_size}: Error in final async save task: {e}")
executor.shutdown(wait=True)
if local_rank == 0:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - All async save tasks completed.")
progress_bar.close()
if not args.only_llm:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - rank {rank} of {world_size}: Final Report - Succeed wavs: {succeed_wavs}, Failed wavs: {failed_wavs}, Total duration: {succeed_duration:.2f}s ({succeed_duration / 3600:.2f} h), RTF: {total_time / succeed_duration:.5f}") # noqa
dist.barrier()
dist.destroy_process_group()
if __name__ == "__main__":
main()
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/config.py | Python | import os
from dataclasses import dataclass, field
import torch
from transformers import AutoConfig
@dataclass
class CosyVoice2LLMConfig:
architectures: list[str] = field(default_factory=lambda: ["Qwen2ForCausalLM"])
attention_dropout: float = 0.0
bos_token_id: int = 151643
eos_token_id: int = 6561 # speech eos
hidden_act: str = "silu"
hidden_size: int = 896
initializer_range: float = 0.02
intermediate_size: int = 4864
max_position_embeddings: int = 32768
max_window_layers: int = 24
model_type: str = "qwen2"
num_attention_heads: int = 14
num_hidden_layers: int = 24
num_key_value_heads: int = 2
head_dim: int = 64
rms_norm_eps: float = 1e-06
rope_scaling: dict | None = None
rope_theta: float = 1000000.0
sliding_window: int = 32768
tie_word_embeddings: bool = False
torch_dtype: torch.dtype = torch.bfloat16
transformers_version: str = "4.52.0.dev0"
use_cache: bool = True
use_sliding_window: bool = False
vocab_size: int = 158500 # text_vocab_size + speech_vocab_size + 2 (eos and task_id)
text_vocab_size: int = 151936
speech_vocab_size: int = 6562 # actually 6564, we only care about non-streaming inference, so cut off tokens (6562, 6563) that are only used for streaming TTS
lm_head_bias: bool = True
qkv_bias: bool = True
fp16_flow: bool = True
@dataclass
class SamplingParams:
temperature: float = 1.0
min_tokens: int = 2
max_tokens: int = 64
ignore_eos: bool = False
top_k: int = 25
# RasSampler parameters
use_ras: bool = False
win_size: int = 10
tau_r: float = 0.1
top_p: float = 0.8
@dataclass
class Config:
model: str
max_num_batched_tokens: int = 1572864
max_num_seqs: int = 1024
max_model_len: int = 1536 # 15s prompt + 30s generated audio for 25hz audio tokenizer
gpu_memory_utilization: float = 0.9
tensor_parallel_size: int = 1
enforce_eager: bool = False
hf_config: CosyVoice2LLMConfig | AutoConfig = field(default_factory=CosyVoice2LLMConfig)
eos: int = -1
kvcache_block_size: int = 256
num_kvcache_blocks: int = -1
min_token_text_ratio: int = 2
max_token_text_ratio: int = 20
rank: int = 0
def __post_init__(self):
assert os.path.isdir(self.model)
assert self.kvcache_block_size % 256 == 0
assert 1 <= self.tensor_parallel_size <= 8
max_pos = getattr(self.hf_config, "max_position_embeddings", 4096)
self.max_model_len = min(self.max_model_len, max_pos)
assert self.max_num_batched_tokens >= self.max_model_len
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/cosyvoice2.py | Python | # Copyright (c) 2025 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from datetime import datetime
import s3tokenizer
import torch
from tqdm import tqdm
from flashcosyvoice.config import Config, SamplingParams
from flashcosyvoice.engine.llm_engine import LLMEngine
from flashcosyvoice.modules.flow import CausalMaskedDiffWithXvec
from flashcosyvoice.modules.hifigan import HiFTGenerator
class CosyVoice2(torch.nn.Module):
def __init__(self, config: Config = None):
super().__init__()
self.config = Config() if config is None else config
self.audio_tokenizer = s3tokenizer.load_model("speech_tokenizer_v2_25hz").cuda().eval()
self.llm = LLMEngine(**self.config.__dict__)
self.use_tqdm = torch.distributed.get_node_local_rank() == 0
self.flow = CausalMaskedDiffWithXvec()
if self.config.hf_config.fp16_flow:
timestamp = datetime.now().strftime('%Y-%m-%d %H:%M:%S,%f')[:-3]
tqdm.write(f"[{timestamp}] - [INFO] - Casting flow to fp16")
self.flow.half()
self.flow.load_state_dict(torch.load(f"{self.config.model}/flow.pt", map_location="cpu", weights_only=True), strict=True)
self.flow.cuda().eval()
self.hift = HiFTGenerator()
hift_state_dict = {k.replace('generator.', ''): v for k, v in torch.load(f"{self.config.model}/hift.pt", map_location="cpu", weights_only=True).items()}
self.hift.load_state_dict(hift_state_dict, strict=True)
self.hift.cuda().eval()
@torch.inference_mode()
def forward(
self, prompt_mels_for_llm: torch.Tensor, prompt_mels_lens_for_llm: torch.Tensor,
prompt_text_tokens_for_llm: list[list[int]], text_tokens_for_llm: list[list[int]],
prompt_mels_for_flow: torch.Tensor, prompt_mels_lens_for_flow: torch.Tensor,
spk_emb_for_flow: torch.Tensor,
sampling_params: SamplingParams | list[SamplingParams],
batch_size_flow: int,
only_llm: bool,
**kwargs, # for compatibility
):
timing_stats = {}
# Audio tokenization
start_time = time.time()
prompt_speech_tokens, prompt_speech_tokens_lens = self.audio_tokenizer.quantize(
prompt_mels_for_llm.cuda(), prompt_mels_lens_for_llm.cuda()
)
timing_stats['audio_tokenization'] = time.time() - start_time
batch_size = prompt_speech_tokens.shape[0]
assert len(prompt_text_tokens_for_llm) == batch_size
# Prepare LLM inputs
start_time = time.time()
valid_prompt_speech_tokens = []
inputs = []
for i in range(batch_size):
speech_tokens_i = prompt_speech_tokens[i, :prompt_speech_tokens_lens[i].item()].tolist()
valid_prompt_speech_tokens.append(speech_tokens_i)
inputs.append([self.config.hf_config.speech_vocab_size] + prompt_text_tokens_for_llm[i] + text_tokens_for_llm[i] + [self.config.hf_config.speech_vocab_size + 1] + speech_tokens_i)
timing_stats['prepare_llm_inputs'] = time.time() - start_time
# LLM generation
start_time = time.time()
llm_outputs = self.llm.generate(inputs, sampling_params, use_tqdm=self.use_tqdm)
timing_stats['llm_generation'] = time.time() - start_time
results_dict = {
"prompt_speech_tokens": valid_prompt_speech_tokens,
"generated_speech_tokens": [o['token_ids'][:-1] for o in llm_outputs],
}
if only_llm:
return results_dict, timing_stats
# Prepare Flow inputs
start_time = time.time()
flow_inputs = []
flow_inputs_lens = []
for i, o in enumerate(llm_outputs):
generated_speech_tokens = o['token_ids'][:-1] # ignore last eos
prompt_speech_tokens = valid_prompt_speech_tokens[i]
flow_inputs.append(torch.tensor(prompt_speech_tokens + generated_speech_tokens))
flow_inputs_lens.append(len(prompt_speech_tokens) + len(generated_speech_tokens))
flow_inputs = torch.nn.utils.rnn.pad_sequence(flow_inputs, batch_first=True, padding_value=0)
flow_inputs_lens = torch.tensor(flow_inputs_lens)
timing_stats['prepare_flow_inputs'] = time.time() - start_time
# Flow generation and HiFi-GAN generation (with batching)
total_batch_size = flow_inputs.shape[0]
generated_wavs = []
flow_total_time = 0.0
hifigan_total_time = 0.0
# Process in batches according to batch_size_flow, batch_size_flow <= total_batch_size
# NOTE(xcsong): When executing both LLM and Flow on the same GPU,
# Flow can easily fill up the SM and memory. Therefore, batch processing is required to avoid OOM.
num_batches = (total_batch_size + batch_size_flow - 1) // batch_size_flow
batch_iterator = range(0, total_batch_size, batch_size_flow)
if self.use_tqdm:
batch_iterator = tqdm(batch_iterator, desc="Generating wavs (Flow+HiFi-GAN)", leave=False, unit="batch",
total=num_batches, dynamic_ncols=True, position=self.config.rank + 1)
for start_idx in batch_iterator:
end_idx = min(start_idx + batch_size_flow, total_batch_size)
batch_flow_inputs = flow_inputs[start_idx:end_idx]
batch_flow_inputs_lens = flow_inputs_lens[start_idx:end_idx]
batch_prompt_mels = prompt_mels_for_flow[start_idx:end_idx]
batch_prompt_mels_lens = prompt_mels_lens_for_flow[start_idx:end_idx]
batch_spk_emb = spk_emb_for_flow[start_idx:end_idx]
# Flow generation for this batch
flow_start_time = time.time()
with torch.amp.autocast("cuda", dtype=torch.float16 if self.config.hf_config.fp16_flow else torch.float32):
batch_generated_mels, batch_generated_mels_lens = self.flow(
batch_flow_inputs.cuda(), batch_flow_inputs_lens.cuda(),
batch_prompt_mels.cuda(), batch_prompt_mels_lens.cuda(), batch_spk_emb.cuda(),
streaming=False, finalize=True
)
flow_total_time += time.time() - flow_start_time
# HiFi-GAN generation for this batch
hifigan_start_time = time.time()
batch_size_current = end_idx - start_idx
for i in range(batch_size_current):
mel = batch_generated_mels[i, :, batch_prompt_mels_lens[i].item():batch_generated_mels_lens[i].item()].unsqueeze(0)
wav, _ = self.hift(speech_feat=mel)
generated_wavs.append(wav)
hifigan_total_time += time.time() - hifigan_start_time
timing_stats['flow_generation'] = flow_total_time
timing_stats['hifigan_generation'] = hifigan_total_time
# Calculate total time and batch statistics
timing_stats['model.forward_total'] = sum(timing_stats.values())
timing_stats['batch_size'] = len(generated_wavs)
timing_stats['batch_size_flow'] = batch_size_flow
results_dict['generated_wavs'] = generated_wavs
return results_dict, timing_stats
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/cosyvoice3.py | Python | # TODO(xcsong): Implement CosyVoice3 when it is released
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/engine/block_manager.py | Python | from collections import deque
import numpy as np
import xxhash
from flashcosyvoice.engine.sequence import Sequence
class Block:
def __init__(self, block_id):
self.block_id = block_id
self.ref_count = 0
self.hash = -1
self.token_ids = []
def update(self, hash: int, token_ids: list[int]):
self.hash = hash
self.token_ids = token_ids
def reset(self):
self.ref_count = 1
self.hash = -1
self.token_ids = []
class BlockManager:
def __init__(self, num_blocks: int, block_size: int):
assert num_blocks > 0
self.block_size = block_size
self.blocks: list[Block] = [Block(i) for i in range(num_blocks)]
self.hash_to_block_id: dict[int, int] = dict()
self.free_block_ids: deque[int] = deque(range(num_blocks))
self.used_block_ids: set[int] = set()
@classmethod
def compute_hash(cls, token_ids: list[int], prefix: int = -1):
h = xxhash.xxh64()
if prefix != -1:
h.update(prefix.to_bytes(8, "little"))
h.update(np.array(token_ids).tobytes())
return h.intdigest()
def _allocate_block(self, block_id: int) -> Block:
block = self.blocks[block_id]
assert block.ref_count == 0
block.reset()
self.free_block_ids.remove(block_id)
self.used_block_ids.add(block_id)
return self.blocks[block_id]
def _deallocate_block(self, block_id: int) -> Block:
assert self.blocks[block_id].ref_count == 0
self.used_block_ids.remove(block_id)
self.free_block_ids.append(block_id)
def can_allocate(self, seq: Sequence) -> bool:
return len(self.free_block_ids) >= seq.num_blocks
def allocate(self, seq: Sequence):
assert not seq.block_table
h = -1
cache_miss = False
for i in range(seq.num_blocks):
token_ids = seq.block(i)
h = self.compute_hash(token_ids, h) if len(token_ids) == self.block_size else -1
block_id = self.hash_to_block_id.get(h, -1)
if block_id == -1 or self.blocks[block_id].token_ids != token_ids:
cache_miss = True
if cache_miss:
block_id = self.free_block_ids[0]
block = self._allocate_block(block_id)
else:
seq.num_cached_tokens += self.block_size
if block_id in self.used_block_ids:
block = self.blocks[block_id]
block.ref_count += 1
else:
block = self._allocate_block(block_id)
if h != -1:
block.update(h, token_ids)
self.hash_to_block_id[h] = block_id
seq.block_table.append(block_id)
def deallocate(self, seq: Sequence):
for block_id in reversed(seq.block_table):
block = self.blocks[block_id]
block.ref_count -= 1
if block.ref_count == 0:
self._deallocate_block(block_id)
seq.num_cached_tokens = 0
seq.block_table.clear()
def can_append(self, seq: Sequence) -> bool:
return len(self.free_block_ids) >= (len(seq) % self.block_size == 1)
def may_append(self, seq: Sequence):
block_table = seq.block_table
last_block = self.blocks[block_table[-1]]
if len(seq) % self.block_size == 1:
assert last_block.hash != -1
block_id = self.free_block_ids[0]
self._allocate_block(block_id)
block_table.append(block_id)
elif len(seq) % self.block_size == 0:
assert last_block.hash == -1
token_ids = seq.block(seq.num_blocks - 1)
prefix = self.blocks[block_table[-2]].hash if len(block_table) > 1 else -1
h = self.compute_hash(token_ids, prefix)
last_block.update(h, token_ids)
self.hash_to_block_id[h] = last_block.block_id
else:
assert last_block.hash == -1
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/engine/llm_engine.py | Python | import atexit
from dataclasses import fields
from time import perf_counter
import torch.multiprocessing as mp
from tqdm.auto import tqdm
from transformers import AutoTokenizer
from flashcosyvoice.config import Config, SamplingParams
from flashcosyvoice.engine.model_runner import ModelRunner
from flashcosyvoice.engine.scheduler import Scheduler
from flashcosyvoice.engine.sequence import Sequence
class LLMEngine:
def __init__(self, model, **kwargs):
config_fields = {field.name for field in fields(Config)}
config_kwargs = {k: v for k, v in kwargs.items() if k in config_fields}
config = Config(model, **config_kwargs)
self.ps = []
self.events = []
ctx = mp.get_context("spawn")
assert config.tensor_parallel_size == 1, "NOTE(xcsong): Currently only support tp=1"
for i in range(1, config.tensor_parallel_size):
event = ctx.Event()
process = ctx.Process(target=ModelRunner, args=(config, i, event))
process.start()
self.ps.append(process)
self.events.append(event)
if hasattr(config.hf_config, "speech_vocab_size"):
# NOTE: non-chat model, all these special tokens keep randomly initialized.
special_tokens = {
'eos_token': '<|endoftext|>',
'pad_token': '<|endoftext|>',
'additional_special_tokens': [
'<|im_start|>', '<|im_end|>', '<|endofprompt|>',
'[breath]', '<strong>', '</strong>', '[noise]',
'[laughter]', '[cough]', '[clucking]', '[accent]',
'[quick_breath]',
"<laughter>", "</laughter>",
"[hissing]", "[sigh]", "[vocalized-noise]",
"[lipsmack]", "[mn]"
]
}
self.tokenizer = AutoTokenizer.from_pretrained(f"{config.model}/CosyVoice-BlankEN")
self.tokenizer.add_special_tokens(special_tokens)
self.skip_special_tokens = True
else:
self.tokenizer = AutoTokenizer.from_pretrained(config.model, use_fast=True)
if hasattr(config.hf_config, "eos_token_id"):
config.eos = config.hf_config.eos_token_id
else:
config.eos = self.tokenizer.eos_token_id
self.model_runner = ModelRunner(config, config.rank, self.events)
self.scheduler = Scheduler(config)
self.config = config
atexit.register(self.exit)
def exit(self):
self.model_runner.call("exit")
del self.model_runner
for p in self.ps:
p.join()
def add_request(self, prompt: str | list[int], sampling_params: SamplingParams):
if isinstance(prompt, str):
prompt = self.tokenizer.encode(prompt)
seq = Sequence(prompt, sampling_params)
self.scheduler.add(seq)
def step(self):
seqs, is_prefill = self.scheduler.schedule()
token_ids = self.model_runner.call("run", seqs, is_prefill)
self.scheduler.postprocess(seqs, token_ids)
outputs = [(seq.seq_id, seq.completion_token_ids) for seq in seqs if seq.is_finished]
num_tokens = sum(len(seq) for seq in seqs) if is_prefill else -len(seqs)
return outputs, num_tokens
def is_finished(self):
return self.scheduler.is_finished()
def generate(
self,
prompts: list[str] | list[list[int]],
sampling_params: SamplingParams | list[SamplingParams],
use_tqdm: bool = True,
) -> list[str]:
if use_tqdm:
pbar = tqdm(total=len(prompts), desc="Generating tokens (LLM)", leave=False,
dynamic_ncols=True, position=self.config.rank + 1)
if not isinstance(sampling_params, list):
sampling_params = [sampling_params] * len(prompts)
for prompt, sp in zip(prompts, sampling_params):
self.add_request(prompt, sp)
outputs = {}
prefill_throughput = decode_throughput = instant_decode_throughput = 0.
total_decode_tokens = 0
total_decode_time = 0.
while not self.is_finished():
t = perf_counter()
output, num_tokens = self.step()
step_time = perf_counter() - t
if use_tqdm:
if num_tokens > 0:
prefill_throughput = num_tokens / step_time
else:
instant_decode_throughput = -num_tokens / step_time
total_decode_tokens += -num_tokens
total_decode_time += step_time
decode_throughput = total_decode_tokens / total_decode_time if total_decode_time > 0 else 0
pbar.set_postfix({
"Prefill": f"{int(prefill_throughput)}tok/s",
"AvgDecode": f"{int(decode_throughput)}tok/s",
"InstDecode": f"{int(instant_decode_throughput)}tok/s",
})
for seq_id, token_ids in output:
outputs[seq_id] = token_ids
if use_tqdm:
pbar.update(1)
outputs = [outputs[seq_id] for seq_id in sorted(outputs)]
outputs = [{"text": self.tokenizer.decode(token_ids), "token_ids": token_ids} for token_ids in outputs]
if use_tqdm:
pbar.close()
return outputs
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/engine/model_runner.py | Python | import pickle
from multiprocessing.shared_memory import SharedMemory
from multiprocessing.synchronize import Event
import torch
import torch.distributed as dist
from flashcosyvoice.config import Config
from flashcosyvoice.engine.sequence import Sequence
from flashcosyvoice.modules.qwen2 import Qwen2ForCausalLM
from flashcosyvoice.modules.sampler import RasSampler, Sampler
from flashcosyvoice.utils.context import (get_context, reset_context,
set_context)
from flashcosyvoice.utils.loader import load_model
class ModelRunner:
def __init__(self, config: Config, rank: int, event: Event | list[Event]):
self.config = config
hf_config = config.hf_config
self.block_size = config.kvcache_block_size
self.enforce_eager = config.enforce_eager
self.world_size = config.tensor_parallel_size
self.rank = rank
self.event = event
# TODO(xcsong): support tp > 1
if self.world_size > 1:
dist.init_process_group("nccl", "tcp://localhost:2333", world_size=self.world_size, rank=rank)
torch.cuda.set_device(rank)
default_dtype = torch.get_default_dtype()
torch.set_default_dtype(hf_config.torch_dtype)
torch.set_default_device("cuda")
self.model = Qwen2ForCausalLM(hf_config)
load_model(self.model, config.model, hf_config)
self.sampler = Sampler()
self.ras_sampler = RasSampler()
self.warmup_model()
self.allocate_kv_cache()
if not self.enforce_eager:
self.capture_cudagraph()
torch.set_default_device("cpu")
torch.set_default_dtype(default_dtype)
if self.world_size > 1:
if rank == 0:
self.shm = SharedMemory(name="flashcosyvoice", create=True, size=2**20)
dist.barrier()
else:
dist.barrier()
self.shm = SharedMemory(name="flashcosyvoice")
self.loop()
def exit(self):
if self.world_size > 1:
self.shm.close()
dist.barrier()
if self.rank == 0:
self.shm.unlink()
if not self.enforce_eager:
del self.graphs, self.graph_pool
torch.cuda.synchronize()
if self.world_size > 1:
dist.destroy_process_group()
def loop(self):
while True:
method_name, args = self.read_shm()
self.call(method_name, *args)
if method_name == "exit":
break
def read_shm(self):
assert self.world_size > 1 and self.rank
self.event.wait()
n = int.from_bytes(self.shm.buf[0:4], "little")
method_name, *args = pickle.loads(self.shm.buf[4:n + 4])
self.event.clear()
return method_name, args
def write_shm(self, method_name, *args):
assert self.world_size > 1 and not self.rank
data = pickle.dumps([method_name, *args])
n = len(data)
self.shm.buf[0:4] = n.to_bytes(4, "little")
self.shm.buf[4:n + 4] = data
for event in self.event:
event.set()
def call(self, method_name, *args):
if self.world_size > 1 and self.rank == 0:
self.write_shm(method_name, *args)
method = getattr(self, method_name, None)
return method(*args)
def warmup_model(self):
torch.cuda.empty_cache()
torch.cuda.reset_peak_memory_stats()
max_num_batched_tokens, max_model_len = self.config.max_num_batched_tokens, self.config.max_model_len
num_seqs = min(max_num_batched_tokens // max_model_len, self.config.max_num_seqs)
seqs = [Sequence([0] * max_model_len) for _ in range(num_seqs)]
self.run(seqs, True)
torch.cuda.empty_cache()
def allocate_kv_cache(self):
config = self.config
hf_config = config.hf_config
free, total = torch.cuda.mem_get_info()
used = total - free
peak = torch.cuda.memory_stats()["allocated_bytes.all.peak"]
current = torch.cuda.memory_stats()["allocated_bytes.all.current"]
num_kv_heads = hf_config.num_key_value_heads // self.world_size
head_dim = getattr(hf_config, "head_dim", hf_config.hidden_size // hf_config.num_attention_heads)
block_bytes = 2 * hf_config.num_hidden_layers * self.block_size * num_kv_heads * head_dim * hf_config.torch_dtype.itemsize
config.num_kvcache_blocks = int(total * config.gpu_memory_utilization - used - peak + current) // block_bytes
assert config.num_kvcache_blocks > 0, "try to **increase** gpu_memory_utilization"
self.kv_cache = torch.zeros(2, hf_config.num_hidden_layers, config.num_kvcache_blocks, self.block_size, num_kv_heads, head_dim)
layer_id = 0
for module in self.model.modules():
if hasattr(module, "k_cache") and hasattr(module, "v_cache"):
module.k_cache = self.kv_cache[0, layer_id]
module.v_cache = self.kv_cache[1, layer_id]
layer_id += 1
def prepare_block_tables(self, seqs: list[Sequence]):
max_len = max(len(seq.block_table) for seq in seqs)
block_tables = [seq.block_table + [-1] * (max_len - len(seq.block_table)) for seq in seqs]
block_tables = torch.tensor(block_tables, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
return block_tables
def prepare_prefill(self, seqs: list[Sequence]):
input_ids = []
positions = []
cu_seqlens_q = [0]
cu_seqlens_k = [0]
max_seqlen_q = 0
max_seqlen_k = 0
slot_mapping = []
block_tables = None
for seq in seqs:
seqlen = len(seq)
input_ids.extend(seq[seq.num_cached_tokens:])
positions.extend(list(range(seq.num_cached_tokens, seqlen)))
seqlen_q = seqlen - seq.num_cached_tokens
seqlen_k = seqlen
cu_seqlens_q.append(cu_seqlens_q[-1] + seqlen_q)
cu_seqlens_k.append(cu_seqlens_k[-1] + seqlen_k)
max_seqlen_q = max(seqlen_q, max_seqlen_q)
max_seqlen_k = max(seqlen_k, max_seqlen_k)
if not seq.block_table:
continue
for i in range(seq.num_cached_blocks, seq.num_blocks):
start = seq.block_table[i] * self.block_size
if i != seq.num_blocks - 1:
end = start + self.block_size
else:
end = start + seq.last_block_num_tokens
slot_mapping.extend(list(range(start, end)))
if cu_seqlens_k[-1] > cu_seqlens_q[-1]: # prefix cache
block_tables = self.prepare_block_tables(seqs)
input_ids = torch.tensor(input_ids, dtype=torch.int64, pin_memory=True).cuda(non_blocking=True)
positions = torch.tensor(positions, dtype=torch.int64, pin_memory=True).cuda(non_blocking=True)
cu_seqlens_q = torch.tensor(cu_seqlens_q, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
cu_seqlens_k = torch.tensor(cu_seqlens_k, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
slot_mapping = torch.tensor(slot_mapping, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
set_context(True, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, slot_mapping, None, block_tables)
return input_ids, positions
def prepare_decode(self, seqs: list[Sequence]):
input_ids = []
positions = []
slot_mapping = []
context_lens = []
for seq in seqs:
input_ids.append(seq.last_token)
positions.append(len(seq))
context_lens.append(len(seq))
slot_mapping.append(seq.block_table[-1] * self.block_size + seq.last_block_num_tokens - 1)
input_ids = torch.tensor(input_ids, dtype=torch.int64, pin_memory=True).cuda(non_blocking=True)
positions = torch.tensor(positions, dtype=torch.int64, pin_memory=True).cuda(non_blocking=True)
slot_mapping = torch.tensor(slot_mapping, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
context_lens = torch.tensor(context_lens, dtype=torch.int32, pin_memory=True).cuda(non_blocking=True)
block_tables = self.prepare_block_tables(seqs)
set_context(False, slot_mapping=slot_mapping, context_lens=context_lens, block_tables=block_tables)
return input_ids, positions
def prepare_sample(self, seqs: list[Sequence]):
temperatures = []
top_ks = []
win_sizes = []
tau_rs = []
top_ps = []
min_tokens_list = []
use_ras_list = []
for seq in seqs:
temperatures.append(seq.temperature)
top_ks.append(seq.top_k)
win_sizes.append(seq.win_size)
tau_rs.append(seq.tau_r)
top_ps.append(seq.top_p)
min_tokens_list.append(seq.min_tokens)
use_ras_list.append(seq.use_ras)
temperatures_tensor = torch.tensor(temperatures, dtype=torch.float32, pin_memory=True).cuda(non_blocking=True)
# check all items equal
assert all(item == temperatures[0] for item in temperatures)
assert all(item == top_ks[0] for item in top_ks)
assert all(item == win_sizes[0] for item in win_sizes)
assert all(item == tau_rs[0] for item in tau_rs)
assert all(item == top_ps[0] for item in top_ps)
assert all(item == use_ras_list[0] for item in use_ras_list)
return {
'temperatures': temperatures_tensor,
'top_k': top_ks[0],
'win_size': win_sizes[0],
'tau_r': tau_rs[0],
'top_p': top_ps[0],
'eos_token': self.config.eos,
'min_tokens': min_tokens_list,
'use_ras': use_ras_list[0]
}
@torch.inference_mode()
def run_model(self, input_ids: torch.Tensor, positions: torch.Tensor, is_prefill: bool):
if is_prefill or self.enforce_eager or input_ids.size(0) > 512:
return self.model.compute_logits(self.model(input_ids, positions))
else:
bs = input_ids.size(0)
context = get_context()
graph = self.graphs[next(x for x in self.graph_bs if x >= bs)]
graph_vars = self.graph_vars
for k, v in graph_vars.items():
if k != "outputs":
v.zero_()
graph_vars["input_ids"][:bs] = input_ids
graph_vars["positions"][:bs] = positions
graph_vars["slot_mapping"][:bs] = context.slot_mapping
graph_vars["context_lens"][:bs] = context.context_lens
graph_vars["block_tables"][:bs, :context.block_tables.size(1)] = context.block_tables
graph.replay()
return self.model.compute_logits(graph_vars["outputs"][:bs])
def run(self, seqs: list[Sequence], is_prefill: bool) -> list[int]:
input_ids, positions = self.prepare_prefill(seqs) if is_prefill else self.prepare_decode(seqs)
if self.rank == 0 or self.world_size == 1:
sample_params = self.prepare_sample(seqs)
logits = self.run_model(input_ids, positions, is_prefill)
if sample_params['use_ras']:
# Prepare decoded tokens list for RasSampler
decoded_tokens_list = [seq.completion_token_ids for seq in seqs]
# Pass all parameters as lists to RasSampler
token_ids = self.ras_sampler(
logits,
decoded_tokens_list,
win_size=sample_params['win_size'],
tau_r=sample_params['tau_r'],
top_p=sample_params['top_p'],
top_k=sample_params['top_k'],
eos_token=sample_params['eos_token'],
min_tokens=sample_params['min_tokens']
).tolist()
else:
# Use the default sampler with list form of top_ks
token_ids = self.sampler(logits, sample_params['temperatures'], sample_params['top_k']).tolist()
else:
logits = self.run_model(input_ids, positions, is_prefill)
token_ids = None
reset_context()
return token_ids
@torch.inference_mode()
def capture_cudagraph(self):
config = self.config
hf_config = config.hf_config
max_bs = min(self.config.max_num_seqs, 512)
max_num_blocks = (config.max_model_len + self.block_size - 1) // self.block_size
input_ids = torch.zeros(max_bs, dtype=torch.int64)
positions = torch.zeros(max_bs, dtype=torch.int64)
slot_mapping = torch.zeros(max_bs, dtype=torch.int32)
context_lens = torch.zeros(max_bs, dtype=torch.int32)
block_tables = torch.zeros(max_bs, max_num_blocks, dtype=torch.int32)
outputs = torch.zeros(max_bs, hf_config.hidden_size)
self.graph_bs = [1, 2, 4, 8] + list(range(16, max_bs + 1, 16))
self.graphs = {}
self.graph_pool = None
for bs in reversed(self.graph_bs):
graph = torch.cuda.CUDAGraph()
set_context(False, slot_mapping=slot_mapping[:bs], context_lens=context_lens[:bs], block_tables=block_tables[:bs])
outputs[:bs] = self.model(input_ids[:bs], positions[:bs]) # warmup
with torch.cuda.graph(graph, self.graph_pool):
outputs[:bs] = self.model(input_ids[:bs], positions[:bs]) # capture
if self.graph_pool is None:
self.graph_pool = graph.pool()
self.graphs[bs] = graph
torch.cuda.synchronize()
reset_context()
self.graph_vars = dict(
input_ids=input_ids,
positions=positions,
slot_mapping=slot_mapping,
context_lens=context_lens,
block_tables=block_tables,
outputs=outputs,
)
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/engine/scheduler.py | Python | from collections import deque
from flashcosyvoice.config import Config
from flashcosyvoice.engine.block_manager import BlockManager
from flashcosyvoice.engine.sequence import Sequence, SequenceStatus
class Scheduler:
def __init__(self, config: Config):
self.max_num_seqs = config.max_num_seqs
self.max_num_batched_tokens = config.max_num_batched_tokens
self.eos = config.eos
self.block_manager = BlockManager(config.num_kvcache_blocks, config.kvcache_block_size)
self.waiting: deque[Sequence] = deque()
self.running: deque[Sequence] = deque()
def is_finished(self):
return not self.waiting and not self.running
def add(self, seq: Sequence):
self.waiting.append(seq)
def schedule(self) -> tuple[list[Sequence], bool]:
# prefill
scheduled_seqs = []
num_seqs = 0
num_batched_tokens = 0
while self.waiting and num_seqs < self.max_num_seqs:
seq = self.waiting[0]
if num_batched_tokens + len(seq) > self.max_num_batched_tokens or not self.block_manager.can_allocate(seq):
break
num_seqs += 1
self.block_manager.allocate(seq)
num_batched_tokens += len(seq) - seq.num_cached_tokens
seq.status = SequenceStatus.RUNNING
self.waiting.popleft()
self.running.append(seq)
scheduled_seqs.append(seq)
if scheduled_seqs:
return scheduled_seqs, True
# decode
while self.running and num_seqs < self.max_num_seqs:
seq = self.running.popleft()
while not self.block_manager.can_append(seq):
if self.running:
self.preempt(self.running.pop())
else:
self.preempt(seq)
break
else:
num_seqs += 1
self.block_manager.may_append(seq)
scheduled_seqs.append(seq)
assert scheduled_seqs
self.running.extendleft(reversed(scheduled_seqs))
return scheduled_seqs, False
def preempt(self, seq: Sequence):
seq.status = SequenceStatus.WAITING
self.block_manager.deallocate(seq)
self.waiting.appendleft(seq)
def postprocess(self, seqs: list[Sequence], token_ids: list[int]) -> list[bool]:
for seq, token_id in zip(seqs, token_ids):
seq.append_token(token_id)
# Check if the sequence has reached the maximum number of tokens
reached_max_tokens = seq.num_completion_tokens == seq.max_tokens
# Check if the sequence has reached EOS and has generated enough tokens (satisfying min_tokens requirements)
eos_with_min_tokens = (not seq.ignore_eos and token_id == self.eos and
seq.num_completion_tokens >= seq.min_tokens)
if reached_max_tokens or eos_with_min_tokens:
seq.status = SequenceStatus.FINISHED
self.block_manager.deallocate(seq)
self.running.remove(seq)
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/engine/sequence.py | Python | from copy import copy
from enum import Enum, auto
from itertools import count
from flashcosyvoice.config import SamplingParams
class SequenceStatus(Enum):
WAITING = auto()
RUNNING = auto()
FINISHED = auto()
class Sequence:
block_size = 256
counter = count()
def __init__(self, token_ids: list[int], sampling_params = SamplingParams()):
self.seq_id = next(Sequence.counter)
self.status = SequenceStatus.WAITING
self.token_ids = copy(token_ids)
self.last_token = token_ids[-1]
self.num_tokens = len(self.token_ids)
self.num_prompt_tokens = len(token_ids)
self.num_cached_tokens = 0
self.block_table = []
self.temperature = sampling_params.temperature
self.min_tokens = sampling_params.min_tokens
self.max_tokens = sampling_params.max_tokens
self.ignore_eos = sampling_params.ignore_eos
self.top_k = sampling_params.top_k
# RasSampler parameters
self.use_ras = sampling_params.use_ras
self.win_size = sampling_params.win_size
self.tau_r = sampling_params.tau_r
self.top_p = sampling_params.top_p
def __len__(self):
return self.num_tokens
def __getitem__(self, key):
return self.token_ids[key]
@property
def is_finished(self):
return self.status == SequenceStatus.FINISHED
@property
def num_completion_tokens(self):
return self.num_tokens - self.num_prompt_tokens
@property
def prompt_token_ids(self):
return self.token_ids[:self.num_prompt_tokens]
@property
def completion_token_ids(self):
return self.token_ids[self.num_prompt_tokens:]
@property
def num_cached_blocks(self):
return self.num_cached_tokens // self.block_size
@property
def num_blocks(self):
return (self.num_tokens + self.block_size - 1) // self.block_size
@property
def last_block_num_tokens(self):
return self.num_tokens - (self.num_blocks - 1) * self.block_size
def block(self, i):
assert 0 <= i < self.num_blocks
return self.token_ids[i*self.block_size: (i+1)*self.block_size]
def append_token(self, token_id: int):
self.token_ids.append(token_id)
self.last_token = token_id
self.num_tokens += 1
def __getstate__(self):
return (self.num_tokens, self.num_prompt_tokens, self.num_cached_tokens, self.block_table,
self.token_ids if self.num_completion_tokens == 0 else self.last_token)
def __setstate__(self, state):
self.num_tokens, self.num_prompt_tokens, self.num_cached_tokens, self.block_table = state[:-1]
if self.num_completion_tokens == 0:
self.token_ids = state[-1]
else:
self.last_token = state[-1]
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/flow.py | Python | from dataclasses import dataclass
import torch
import torch.nn as nn
import torch.nn.functional as F
from flashcosyvoice.modules.flow_components.estimator import \
CausalConditionalDecoder
from flashcosyvoice.modules.flow_components.upsample_encoder import (
UpsampleConformerEncoder, make_pad_mask)
# TODO(xcsong): make it configurable
@dataclass
class CfmParams:
sigma_min: float = 1e-6
solver: str = "euler"
t_scheduler: str = "cosine"
training_cfg_rate: float = 0.2
inference_cfg_rate: float = 0.7
class CausalConditionalCFM(torch.nn.Module):
def __init__(self, in_channels=320, cfm_params=CfmParams(), n_spks=1, spk_emb_dim=80, estimator: torch.nn.Module = None):
super().__init__()
self.n_feats = in_channels
self.n_spks = n_spks
self.spk_emb_dim = spk_emb_dim
self.solver = cfm_params.solver
if hasattr(cfm_params, "sigma_min"):
self.sigma_min = cfm_params.sigma_min
else:
self.sigma_min = 1e-4
self.t_scheduler = cfm_params.t_scheduler
self.training_cfg_rate = cfm_params.training_cfg_rate
self.inference_cfg_rate = cfm_params.inference_cfg_rate
in_channels = in_channels + (spk_emb_dim if n_spks > 0 else 0)
# Just change the architecture of the estimator here
self.estimator = CausalConditionalDecoder() if estimator is None else estimator
@torch.inference_mode()
def forward(self, mu, mask, n_timesteps, temperature=1.0, spks=None, cond=None, streaming=False):
"""Forward diffusion
Args:
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, mel_timesteps)
n_timesteps (int): number of diffusion steps
temperature (float, optional): temperature for scaling noise. Defaults to 1.0.
spks (torch.Tensor, optional): speaker ids. Defaults to None.
shape: (batch_size, spk_emb_dim)
cond: Not used but kept for future purposes
Returns:
sample: generated mel-spectrogram
shape: (batch_size, n_feats, mel_timesteps)
"""
z = torch.randn_like(mu).to(mu.device).to(mu.dtype) * temperature
# fix prompt and overlap part mu and z
t_span = torch.linspace(0, 1, n_timesteps + 1, device=mu.device, dtype=mu.dtype)
if self.t_scheduler == 'cosine':
t_span = 1 - torch.cos(t_span * 0.5 * torch.pi)
return self.solve_euler(z, t_span=t_span, mu=mu, mask=mask, spks=spks, cond=cond, streaming=streaming), None
def solve_euler(self, x, t_span, mu, mask, spks, cond, streaming=False):
"""
Fixed euler solver for ODEs.
Args:
x (torch.Tensor): random noise
t_span (torch.Tensor): n_timesteps interpolated
shape: (n_timesteps + 1,)
mu (torch.Tensor): output of encoder
shape: (batch_size, n_feats, mel_timesteps)
mask (torch.Tensor): output_mask
shape: (batch_size, 1, mel_timesteps)
spks (torch.Tensor, optional): speaker ids. Defaults to None.
shape: (batch_size, spk_emb_dim)
cond: Not used but kept for future purposes
"""
batch_size = x.size(0)
t, _, dt = t_span[0], t_span[-1], t_span[1] - t_span[0]
# I am storing this because I can later plot it by putting a debugger here and saving it to a file
# Or in future might add like a return_all_steps flag
sol = []
# Do not use concat, it may cause memory format changed and trt infer with wrong results!
# Create tensors with double batch size for CFG (conditional + unconditional)
x_in = torch.zeros([batch_size * 2, x.size(1), x.size(2)], device=x.device, dtype=x.dtype)
mask_in = torch.zeros([batch_size * 2, mask.size(1), mask.size(2)], device=x.device, dtype=x.dtype)
mu_in = torch.zeros([batch_size * 2, mu.size(1), mu.size(2)], device=x.device, dtype=x.dtype)
t_in = torch.zeros([batch_size * 2], device=x.device, dtype=x.dtype)
spks_in = torch.zeros([batch_size * 2, spks.size(1)], device=x.device, dtype=x.dtype)
cond_in = torch.zeros([batch_size * 2, cond.size(1), cond.size(2)], device=x.device, dtype=x.dtype)
for step in range(1, len(t_span)):
# Classifier-Free Guidance inference introduced in VoiceBox
# Copy conditional and unconditional input
x_in[:batch_size] = x
x_in[batch_size:] = x
mask_in[:batch_size] = mask
mask_in[batch_size:] = mask
mu_in[:batch_size] = mu
# Unconditional part remains 0
t_in.fill_(t)
spks_in[:batch_size] = spks
cond_in[:batch_size] = cond
dphi_dt = self.estimator(
x_in, mask_in,
mu_in, t_in,
spks_in,
cond_in,
streaming
)
dphi_dt, cfg_dphi_dt = torch.split(dphi_dt, [batch_size, batch_size], dim=0)
dphi_dt = ((1.0 + self.inference_cfg_rate) * dphi_dt - self.inference_cfg_rate * cfg_dphi_dt)
x = x + dt * dphi_dt
t = t + dt
sol.append(x)
if step < len(t_span) - 1:
dt = t_span[step + 1] - t
return sol[-1].float()
class CausalMaskedDiffWithXvec(torch.nn.Module):
def __init__(
self,
input_size: int = 512,
output_size: int = 80,
spk_embed_dim: int = 192,
output_type: str = "mel",
vocab_size: int = 6561,
input_frame_rate: int = 25,
token_mel_ratio: int = 2,
pre_lookahead_len: int = 3,
encoder: torch.nn.Module = None,
decoder: torch.nn.Module = None,
):
super().__init__()
self.input_size = input_size
self.output_size = output_size
self.vocab_size = vocab_size
self.output_type = output_type
self.input_frame_rate = input_frame_rate
self.input_embedding = nn.Embedding(vocab_size, input_size)
self.spk_embed_affine_layer = torch.nn.Linear(spk_embed_dim, output_size)
self.encoder = UpsampleConformerEncoder() if encoder is None else encoder
self.encoder_proj = torch.nn.Linear(self.encoder.output_size(), output_size)
self.decoder = CausalConditionalCFM() if decoder is None else decoder
self.token_mel_ratio = token_mel_ratio
self.pre_lookahead_len = pre_lookahead_len
@torch.inference_mode()
def forward(self,
token,
token_len,
prompt_feat,
prompt_feat_len,
embedding,
streaming,
finalize):
# xvec projection
embedding = F.normalize(embedding, dim=1)
embedding = self.spk_embed_affine_layer(embedding)
# concat text and prompt_text
mask = (~make_pad_mask(token_len, max_len=token.shape[1])).unsqueeze(-1).to(embedding)
token = self.input_embedding(torch.clamp(token, min=0)) * mask
# text encode
if finalize is True:
h, h_lengths = self.encoder(token, token_len, streaming=streaming)
else:
token, context = token[:, :-self.pre_lookahead_len], token[:, -self.pre_lookahead_len:]
h, h_lengths = self.encoder(token, token_len, context=context, streaming=streaming)
h = self.encoder_proj(h)
# get conditions
conds = torch.zeros_like(h, device=token.device)
for i, j in enumerate(prompt_feat_len):
conds[i, :j] = prompt_feat[i, :j]
conds = conds.transpose(1, 2)
h_lengths = h_lengths.sum(dim=-1).squeeze(dim=1)
mask = (~make_pad_mask(h_lengths, max_len=h.shape[1])).to(h)
feat, _ = self.decoder(
mu=h.transpose(1, 2).contiguous(),
mask=mask.unsqueeze(1),
spks=embedding,
cond=conds,
n_timesteps=10,
streaming=streaming
) # [B, num_mels, T]
return feat.float(), h_lengths
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/flow_components/estimator.py | Python | import math
from typing import Any, Dict, Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from diffusers.models.attention import (GEGLU, GELU, AdaLayerNorm,
AdaLayerNormZero, ApproximateGELU)
from diffusers.models.attention_processor import Attention
from diffusers.models.lora import LoRACompatibleLinear
from diffusers.utils.torch_utils import maybe_allow_in_graph
from einops import pack, rearrange, repeat
from flashcosyvoice.modules.flow_components.upsample_encoder import \
add_optional_chunk_mask
def mask_to_bias(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
assert mask.dtype == torch.bool
assert dtype in [torch.float32, torch.bfloat16, torch.float16]
mask = mask.to(dtype)
# attention mask bias
# NOTE(Mddct): torch.finfo jit issues
# chunk_masks = (1.0 - chunk_masks) * torch.finfo(dtype).min
mask = (1.0 - mask) * -1.0e+10
return mask
class SnakeBeta(nn.Module):
"""
A modified Snake function which uses separate parameters for the magnitude of the periodic components
Shape:
- Input: (B, C, T)
- Output: (B, C, T), same shape as the input
Parameters:
- alpha - trainable parameter that controls frequency
- beta - trainable parameter that controls magnitude
References:
- This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
https://arxiv.org/abs/2006.08195
Examples:
>>> a1 = snakebeta(256)
>>> x = torch.randn(256)
>>> x = a1(x)
Args:
in_features: shape of the input
out_features: shape of the output
alpha: trainable parameter that controls frequency
alpha_trainable: whether alpha is trainable
alpha_logscale: whether to use log scale for alpha
alpha is initialized to 1 by default, higher values = higher-frequency.
beta is initialized to 1 by default, higher values = higher-magnitude.
alpha will be trained along with the rest of your model.
"""
def __init__(self, in_features, out_features, alpha=1.0, alpha_trainable=True, alpha_logscale=True):
super().__init__()
self.in_features = out_features if isinstance(out_features, list) else [out_features]
self.proj = LoRACompatibleLinear(in_features, out_features)
# initialize alpha
self.alpha_logscale = alpha_logscale
if self.alpha_logscale: # log scale alphas initialized to zeros
self.alpha = nn.Parameter(torch.zeros(self.in_features) * alpha)
self.beta = nn.Parameter(torch.zeros(self.in_features) * alpha)
else: # linear scale alphas initialized to ones
self.alpha = nn.Parameter(torch.ones(self.in_features) * alpha)
self.beta = nn.Parameter(torch.ones(self.in_features) * alpha)
self.alpha.requires_grad = alpha_trainable
self.beta.requires_grad = alpha_trainable
self.no_div_by_zero = 0.000000001
def forward(self, x):
"""
Forward pass of the function.
Applies the function to the input elementwise.
SnakeBeta ∶= x + 1/b * sin^2 (xa)
"""
x = self.proj(x)
if self.alpha_logscale:
alpha = torch.exp(self.alpha)
beta = torch.exp(self.beta)
else:
alpha = self.alpha
beta = self.beta
x = x + (1.0 / (beta + self.no_div_by_zero)) * torch.pow(torch.sin(x * alpha), 2)
return x
class FeedForward(nn.Module):
r"""
A feed-forward layer.
Parameters:
dim (`int`): The number of channels in the input.
dim_out (`int`, *optional*): The number of channels in the output. If not given, defaults to `dim`.
mult (`int`, *optional*, defaults to 4): The multiplier to use for the hidden dimension.
dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward.
final_dropout (`bool` *optional*, defaults to False): Apply a final dropout.
"""
def __init__(
self,
dim: int,
dim_out: Optional[int] = None,
mult: int = 4,
dropout: float = 0.0,
activation_fn: str = "geglu",
final_dropout: bool = False,
):
super().__init__()
inner_dim = int(dim * mult)
dim_out = dim_out if dim_out is not None else dim
if activation_fn == "gelu":
act_fn = GELU(dim, inner_dim)
if activation_fn == "gelu-approximate":
act_fn = GELU(dim, inner_dim, approximate="tanh")
elif activation_fn == "geglu":
act_fn = GEGLU(dim, inner_dim)
elif activation_fn == "geglu-approximate":
act_fn = ApproximateGELU(dim, inner_dim)
elif activation_fn == "snakebeta":
act_fn = SnakeBeta(dim, inner_dim)
self.net = nn.ModuleList([])
# project in
self.net.append(act_fn)
# project dropout
self.net.append(nn.Dropout(dropout))
# project out
self.net.append(LoRACompatibleLinear(inner_dim, dim_out))
# FF as used in Vision Transformer, MLP-Mixer, etc. have a final dropout
if final_dropout:
self.net.append(nn.Dropout(dropout))
def forward(self, hidden_states):
for module in self.net:
hidden_states = module(hidden_states)
return hidden_states
@maybe_allow_in_graph
class BasicTransformerBlock(nn.Module):
r"""
A basic Transformer block.
Parameters:
dim (`int`): The number of channels in the input and output.
num_attention_heads (`int`): The number of heads to use for multi-head attention.
attention_head_dim (`int`): The number of channels in each head.
dropout (`float`, *optional*, defaults to 0.0): The dropout probability to use.
cross_attention_dim (`int`, *optional*): The size of the encoder_hidden_states vector for cross attention.
only_cross_attention (`bool`, *optional*):
Whether to use only cross-attention layers. In this case two cross attention layers are used.
double_self_attention (`bool`, *optional*):
Whether to use two self-attention layers. In this case no cross attention layers are used.
activation_fn (`str`, *optional*, defaults to `"geglu"`): Activation function to be used in feed-forward.
num_embeds_ada_norm (:
obj: `int`, *optional*): The number of diffusion steps used during training. See `Transformer2DModel`.
attention_bias (:
obj: `bool`, *optional*, defaults to `False`): Configure if the attentions should contain a bias parameter.
"""
def __init__(
self,
dim: int,
num_attention_heads: int,
attention_head_dim: int,
dropout=0.0,
cross_attention_dim: Optional[int] = None,
activation_fn: str = "geglu",
num_embeds_ada_norm: Optional[int] = None,
attention_bias: bool = False,
only_cross_attention: bool = False,
double_self_attention: bool = False,
upcast_attention: bool = False,
norm_elementwise_affine: bool = True,
norm_type: str = "layer_norm",
final_dropout: bool = False,
):
super().__init__()
self.only_cross_attention = only_cross_attention
self.use_ada_layer_norm_zero = (num_embeds_ada_norm is not None) and norm_type == "ada_norm_zero"
self.use_ada_layer_norm = (num_embeds_ada_norm is not None) and norm_type == "ada_norm"
if norm_type in ("ada_norm", "ada_norm_zero") and num_embeds_ada_norm is None:
raise ValueError(
f"`norm_type` is set to {norm_type}, but `num_embeds_ada_norm` is not defined. Please make sure to"
f" define `num_embeds_ada_norm` if setting `norm_type` to {norm_type}."
)
# Define 3 blocks. Each block has its own normalization layer.
# 1. Self-Attn
if self.use_ada_layer_norm:
self.norm1 = AdaLayerNorm(dim, num_embeds_ada_norm)
elif self.use_ada_layer_norm_zero:
self.norm1 = AdaLayerNormZero(dim, num_embeds_ada_norm)
else:
self.norm1 = nn.LayerNorm(dim, elementwise_affine=norm_elementwise_affine)
self.attn1 = Attention(
query_dim=dim,
heads=num_attention_heads,
dim_head=attention_head_dim,
dropout=dropout,
bias=attention_bias,
cross_attention_dim=cross_attention_dim if only_cross_attention else None,
upcast_attention=upcast_attention,
)
# 2. Cross-Attn
if cross_attention_dim is not None or double_self_attention:
# We currently only use AdaLayerNormZero for self attention where there will only be one attention block.
# I.e. the number of returned modulation chunks from AdaLayerZero would not make sense if returned during
# the second cross attention block.
self.norm2 = (
AdaLayerNorm(dim, num_embeds_ada_norm)
if self.use_ada_layer_norm
else nn.LayerNorm(dim, elementwise_affine=norm_elementwise_affine)
)
self.attn2 = Attention(
query_dim=dim,
cross_attention_dim=cross_attention_dim if not double_self_attention else None,
heads=num_attention_heads,
dim_head=attention_head_dim,
dropout=dropout,
bias=attention_bias,
upcast_attention=upcast_attention,
# scale_qk=False, # uncomment this to not to use flash attention
) # is self-attn if encoder_hidden_states is none
else:
self.norm2 = None
self.attn2 = None
# 3. Feed-forward
self.norm3 = nn.LayerNorm(dim, elementwise_affine=norm_elementwise_affine)
self.ff = FeedForward(dim, dropout=dropout, activation_fn=activation_fn, final_dropout=final_dropout)
# let chunk size default to None
self._chunk_size = None
self._chunk_dim = 0
def set_chunk_feed_forward(self, chunk_size: Optional[int], dim: int):
# Sets chunk feed-forward
self._chunk_size = chunk_size
self._chunk_dim = dim
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.FloatTensor] = None,
timestep: Optional[torch.LongTensor] = None,
cross_attention_kwargs: Dict[str, Any] = None,
class_labels: Optional[torch.LongTensor] = None,
):
# Notice that normalization is always applied before the real computation in the following blocks.
# 1. Self-Attention
if self.use_ada_layer_norm:
norm_hidden_states = self.norm1(hidden_states, timestep)
elif self.use_ada_layer_norm_zero:
norm_hidden_states, gate_msa, shift_mlp, scale_mlp, gate_mlp = self.norm1(
hidden_states, timestep, class_labels, hidden_dtype=hidden_states.dtype
)
else:
norm_hidden_states = self.norm1(hidden_states)
cross_attention_kwargs = cross_attention_kwargs if cross_attention_kwargs is not None else {}
attn_output = self.attn1(
norm_hidden_states,
encoder_hidden_states=encoder_hidden_states if self.only_cross_attention else None,
attention_mask=encoder_attention_mask if self.only_cross_attention else attention_mask,
**cross_attention_kwargs,
)
if self.use_ada_layer_norm_zero:
attn_output = gate_msa.unsqueeze(1) * attn_output
hidden_states = attn_output + hidden_states
# 2. Cross-Attention
if self.attn2 is not None:
norm_hidden_states = (
self.norm2(hidden_states, timestep) if self.use_ada_layer_norm else self.norm2(hidden_states)
)
attn_output = self.attn2(
norm_hidden_states,
encoder_hidden_states=encoder_hidden_states,
attention_mask=encoder_attention_mask,
**cross_attention_kwargs,
)
hidden_states = attn_output + hidden_states
# 3. Feed-forward
norm_hidden_states = self.norm3(hidden_states)
if self.use_ada_layer_norm_zero:
norm_hidden_states = norm_hidden_states * (1 + scale_mlp[:, None]) + shift_mlp[:, None]
if self._chunk_size is not None:
# "feed_forward_chunk_size" can be used to save memory
if norm_hidden_states.shape[self._chunk_dim] % self._chunk_size != 0:
raise ValueError(
f"`hidden_states` dimension to be chunked: {norm_hidden_states.shape[self._chunk_dim]} has to be divisible by chunk size: {self._chunk_size}. Make sure to set an appropriate `chunk_size` when calling `unet.enable_forward_chunking`."
)
num_chunks = norm_hidden_states.shape[self._chunk_dim] // self._chunk_size
ff_output = torch.cat(
[self.ff(hid_slice) for hid_slice in norm_hidden_states.chunk(num_chunks, dim=self._chunk_dim)],
dim=self._chunk_dim,
)
else:
ff_output = self.ff(norm_hidden_states)
if self.use_ada_layer_norm_zero:
ff_output = gate_mlp.unsqueeze(1) * ff_output
hidden_states = ff_output + hidden_states
return hidden_states
class SinusoidalPosEmb(torch.nn.Module):
def __init__(self, dim):
super().__init__()
self.dim = dim
assert self.dim % 2 == 0, "SinusoidalPosEmb requires dim to be even"
def forward(self, x, scale=1000):
if x.ndim < 1:
x = x.unsqueeze(0)
device = x.device
half_dim = self.dim // 2
emb = math.log(10000) / (half_dim - 1)
emb = torch.exp(torch.arange(half_dim, device=device).float() * -emb)
emb = scale * x.unsqueeze(1) * emb.unsqueeze(0)
emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
return emb
class Block1D(torch.nn.Module):
def __init__(self, dim, dim_out, groups=8):
super().__init__()
self.block = torch.nn.Sequential(
torch.nn.Conv1d(dim, dim_out, 3, padding=1),
torch.nn.GroupNorm(groups, dim_out),
nn.Mish(),
)
def forward(self, x, mask):
output = self.block(x * mask)
return output * mask
class ResnetBlock1D(torch.nn.Module):
def __init__(self, dim, dim_out, time_emb_dim, groups=8):
super().__init__()
self.mlp = torch.nn.Sequential(nn.Mish(), torch.nn.Linear(time_emb_dim, dim_out))
self.block1 = Block1D(dim, dim_out, groups=groups)
self.block2 = Block1D(dim_out, dim_out, groups=groups)
self.res_conv = torch.nn.Conv1d(dim, dim_out, 1)
def forward(self, x, mask, time_emb):
h = self.block1(x, mask)
h += self.mlp(time_emb).unsqueeze(-1)
h = self.block2(h, mask)
output = h + self.res_conv(x * mask)
return output
class Downsample1D(nn.Module):
def __init__(self, dim):
super().__init__()
self.conv = torch.nn.Conv1d(dim, dim, 3, 2, 1)
def forward(self, x):
return self.conv(x)
class TimestepEmbedding(nn.Module):
def __init__(
self,
in_channels: int,
time_embed_dim: int,
act_fn: str = "silu",
out_dim: int = None,
post_act_fn: Optional[str] = None,
cond_proj_dim=None,
):
super().__init__()
assert act_fn == "silu", "act_fn must be silu"
self.linear_1 = nn.Linear(in_channels, time_embed_dim)
if cond_proj_dim is not None:
self.cond_proj = nn.Linear(cond_proj_dim, in_channels, bias=False)
else:
self.cond_proj = None
self.act = nn.SiLU()
if out_dim is not None:
time_embed_dim_out = out_dim
else:
time_embed_dim_out = time_embed_dim
self.linear_2 = nn.Linear(time_embed_dim, time_embed_dim_out)
if post_act_fn is None:
self.post_act = None
else:
self.post_act = nn.SiLU()
def forward(self, sample, condition=None):
if condition is not None:
sample = sample + self.cond_proj(condition)
sample = self.linear_1(sample)
if self.act is not None:
sample = self.act(sample)
sample = self.linear_2(sample)
if self.post_act is not None:
sample = self.post_act(sample)
return sample
class Upsample1D(nn.Module):
"""A 1D upsampling layer with an optional convolution.
Parameters:
channels (`int`):
number of channels in the inputs and outputs.
use_conv (`bool`, default `False`):
option to use a convolution.
use_conv_transpose (`bool`, default `False`):
option to use a convolution transpose.
out_channels (`int`, optional):
number of output channels. Defaults to `channels`.
"""
def __init__(self, channels, use_conv=False, use_conv_transpose=True, out_channels=None, name="conv"):
super().__init__()
self.channels = channels
self.out_channels = out_channels or channels
self.use_conv = use_conv
self.use_conv_transpose = use_conv_transpose
self.name = name
self.conv = None
if use_conv_transpose:
self.conv = nn.ConvTranspose1d(channels, self.out_channels, 4, 2, 1)
elif use_conv:
self.conv = nn.Conv1d(self.channels, self.out_channels, 3, padding=1)
def forward(self, inputs):
assert inputs.shape[1] == self.channels
if self.use_conv_transpose:
return self.conv(inputs)
outputs = F.interpolate(inputs, scale_factor=2.0, mode="nearest")
if self.use_conv:
outputs = self.conv(outputs)
return outputs
class Transpose(torch.nn.Module):
def __init__(self, dim0: int, dim1: int):
super().__init__()
self.dim0 = dim0
self.dim1 = dim1
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = torch.transpose(x, self.dim0, self.dim1)
return x
class CausalConv1d(torch.nn.Conv1d):
def __init__(
self,
in_channels: int,
out_channels: int,
kernel_size: int,
stride: int = 1,
dilation: int = 1,
groups: int = 1,
bias: bool = True,
padding_mode: str = 'zeros',
device=None,
dtype=None
) -> None:
super(CausalConv1d, self).__init__(in_channels, out_channels,
kernel_size, stride,
padding=0, dilation=dilation,
groups=groups, bias=bias,
padding_mode=padding_mode,
device=device, dtype=dtype)
assert stride == 1
self.causal_padding = kernel_size - 1
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = F.pad(x, (self.causal_padding, 0), value=0.0)
x = super(CausalConv1d, self).forward(x)
return x
class CausalBlock1D(Block1D):
def __init__(self, dim: int, dim_out: int):
super(CausalBlock1D, self).__init__(dim, dim_out)
self.block = torch.nn.Sequential(
CausalConv1d(dim, dim_out, 3),
Transpose(1, 2),
nn.LayerNorm(dim_out),
Transpose(1, 2),
nn.Mish(),
)
def forward(self, x: torch.Tensor, mask: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
output = self.block(x * mask)
return output * mask
class CausalResnetBlock1D(ResnetBlock1D):
def __init__(self, dim: int, dim_out: int, time_emb_dim: int, groups: int = 8):
super(CausalResnetBlock1D, self).__init__(dim, dim_out, time_emb_dim, groups)
self.block1 = CausalBlock1D(dim, dim_out)
self.block2 = CausalBlock1D(dim_out, dim_out)
class ConditionalDecoder(nn.Module):
"""
This decoder requires an input with the same shape of the target. So, if your text content
is shorter or longer than the outputs, please re-sampling it before feeding to the decoder.
Args:
in_channels: number of input channels
out_channels: number of output channels
channels: tuple of channel dimensions
dropout: dropout rate
attention_head_dim: dimension of attention heads
n_blocks: number of transformer blocks
num_mid_blocks: number of middle blocks
num_heads: number of attention heads
act_fn: activation function name
"""
def __init__(
self,
in_channels,
out_channels,
channels=(256, 256),
dropout=0.05,
attention_head_dim=64,
n_blocks=1,
num_mid_blocks=2,
num_heads=4,
act_fn="snake",
):
super().__init__()
channels = tuple(channels)
self.in_channels = in_channels
self.out_channels = out_channels
self.time_embeddings = SinusoidalPosEmb(in_channels)
time_embed_dim = channels[0] * 4
self.time_mlp = TimestepEmbedding(
in_channels=in_channels,
time_embed_dim=time_embed_dim,
act_fn="silu",
)
self.down_blocks = nn.ModuleList([])
self.mid_blocks = nn.ModuleList([])
self.up_blocks = nn.ModuleList([])
output_channel = in_channels
for i in range(len(channels)): # pylint: disable=consider-using-enumerate
input_channel = output_channel
output_channel = channels[i]
is_last = i == len(channels) - 1
resnet = ResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
downsample = (
Downsample1D(output_channel) if not is_last else nn.Conv1d(output_channel, output_channel, 3, padding=1)
)
self.down_blocks.append(nn.ModuleList([resnet, transformer_blocks, downsample]))
for _ in range(num_mid_blocks):
input_channel = channels[-1]
out_channels = channels[-1]
resnet = ResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
self.mid_blocks.append(nn.ModuleList([resnet, transformer_blocks]))
channels = channels[::-1] + (channels[0],)
for i in range(len(channels) - 1):
input_channel = channels[i] * 2
output_channel = channels[i + 1]
is_last = i == len(channels) - 2
resnet = ResnetBlock1D(
dim=input_channel,
dim_out=output_channel,
time_emb_dim=time_embed_dim,
)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
upsample = (
Upsample1D(output_channel, use_conv_transpose=True)
if not is_last
else nn.Conv1d(output_channel, output_channel, 3, padding=1)
)
self.up_blocks.append(nn.ModuleList([resnet, transformer_blocks, upsample]))
self.final_block = Block1D(channels[-1], channels[-1])
self.final_proj = nn.Conv1d(channels[-1], self.out_channels, 1)
self.initialize_weights()
def initialize_weights(self):
for m in self.modules():
if isinstance(m, nn.Conv1d):
nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
if m.bias is not None:
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.GroupNorm):
nn.init.constant_(m.weight, 1)
nn.init.constant_(m.bias, 0)
elif isinstance(m, nn.Linear):
nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
if m.bias is not None:
nn.init.constant_(m.bias, 0)
def forward(self, x, mask, mu, t, spks=None, cond=None, streaming=False):
"""Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None.
cond (_type_, optional): placeholder for future use. Defaults to None.
Raises:
ValueError: _description_
ValueError: _description_
Returns:
_type_: _description_
"""
t = self.time_embeddings(t).to(t.dtype)
t = self.time_mlp(t)
x = pack([x, mu], "b * t")[0]
if spks is not None:
spks = repeat(spks, "b c -> b c t", t=x.shape[-1])
x = pack([x, spks], "b * t")[0]
if cond is not None:
x = pack([x, cond], "b * t")[0]
hiddens = []
masks = [mask]
for resnet, transformer_blocks, downsample in self.down_blocks:
mask_down = masks[-1]
x = resnet(x, mask_down, t)
x = rearrange(x, "b c t -> b t c").contiguous()
attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
hiddens.append(x) # Save hidden states for skip connections
x = downsample(x * mask_down)
masks.append(mask_down[:, :, ::2])
masks = masks[:-1]
mask_mid = masks[-1]
for resnet, transformer_blocks in self.mid_blocks:
x = resnet(x, mask_mid, t)
x = rearrange(x, "b c t -> b t c").contiguous()
attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
for resnet, transformer_blocks, upsample in self.up_blocks:
mask_up = masks.pop()
skip = hiddens.pop()
x = pack([x[:, :, :skip.shape[-1]], skip], "b * t")[0]
x = resnet(x, mask_up, t)
x = rearrange(x, "b c t -> b t c").contiguous()
attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
x = upsample(x * mask_up)
x = self.final_block(x, mask_up)
output = self.final_proj(x * mask_up)
return output * mask
class CausalConditionalDecoder(ConditionalDecoder):
"""
This decoder requires an input with the same shape of the target. So, if your text content
is shorter or longer than the outputs, please re-sampling it before feeding to the decoder.
Args:
in_channels: number of input channels
out_channels: number of output channels
channels: list of channel dimensions
dropout: dropout rate
attention_head_dim: dimension of attention heads
n_blocks: number of transformer blocks
num_mid_blocks: number of middle blocks
num_heads: number of attention heads
act_fn: activation function name
static_chunk_size: size of static chunks
num_decoding_left_chunks: number of left chunks for decoding
"""
def __init__(
self,
in_channels=320,
out_channels=80,
channels=[256], # noqa
dropout=0.0,
attention_head_dim=64,
n_blocks=4,
num_mid_blocks=12,
num_heads=8,
act_fn="gelu",
static_chunk_size=50,
num_decoding_left_chunks=-1,
):
torch.nn.Module.__init__(self)
channels = tuple(channels)
self.in_channels = in_channels
self.out_channels = out_channels
self.time_embeddings = SinusoidalPosEmb(in_channels)
time_embed_dim = channels[0] * 4
self.time_mlp = TimestepEmbedding(
in_channels=in_channels,
time_embed_dim=time_embed_dim,
act_fn="silu",
)
self.static_chunk_size = static_chunk_size
self.num_decoding_left_chunks = num_decoding_left_chunks
self.down_blocks = nn.ModuleList([])
self.mid_blocks = nn.ModuleList([])
self.up_blocks = nn.ModuleList([])
output_channel = in_channels
for i in range(len(channels)): # pylint: disable=consider-using-enumerate
input_channel = output_channel
output_channel = channels[i]
is_last = i == len(channels) - 1
resnet = CausalResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
downsample = (
Downsample1D(output_channel) if not is_last else CausalConv1d(output_channel, output_channel, 3)
)
self.down_blocks.append(nn.ModuleList([resnet, transformer_blocks, downsample]))
for _ in range(num_mid_blocks):
input_channel = channels[-1]
out_channels = channels[-1]
resnet = CausalResnetBlock1D(dim=input_channel, dim_out=output_channel, time_emb_dim=time_embed_dim)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
self.mid_blocks.append(nn.ModuleList([resnet, transformer_blocks]))
channels = channels[::-1] + (channels[0],)
for i in range(len(channels) - 1):
input_channel = channels[i] * 2
output_channel = channels[i + 1]
is_last = i == len(channels) - 2
resnet = CausalResnetBlock1D(
dim=input_channel,
dim_out=output_channel,
time_emb_dim=time_embed_dim,
)
transformer_blocks = nn.ModuleList(
[
BasicTransformerBlock(
dim=output_channel,
num_attention_heads=num_heads,
attention_head_dim=attention_head_dim,
dropout=dropout,
activation_fn=act_fn,
)
for _ in range(n_blocks)
]
)
upsample = (
Upsample1D(output_channel, use_conv_transpose=True)
if not is_last
else CausalConv1d(output_channel, output_channel, 3)
)
self.up_blocks.append(nn.ModuleList([resnet, transformer_blocks, upsample]))
self.final_block = CausalBlock1D(channels[-1], channels[-1])
self.final_proj = nn.Conv1d(channels[-1], self.out_channels, 1)
self.initialize_weights()
def forward(self, x, mask, mu, t, spks=None, cond=None, streaming=False):
"""Forward pass of the UNet1DConditional model.
Args:
x (torch.Tensor): shape (batch_size, in_channels, time)
mask (_type_): shape (batch_size, 1, time)
t (_type_): shape (batch_size)
spks (_type_, optional): shape: (batch_size, condition_channels). Defaults to None.
cond (_type_, optional): placeholder for future use. Defaults to None.
Raises:
ValueError: _description_
ValueError: _description_
Returns:
_type_: _description_
"""
t = self.time_embeddings(t).to(t.dtype)
t = self.time_mlp(t)
x = pack([x, mu], "b * t")[0]
if spks is not None:
spks = repeat(spks, "b c -> b c t", t=x.shape[-1])
x = pack([x, spks], "b * t")[0]
if cond is not None:
x = pack([x, cond], "b * t")[0]
hiddens = []
masks = [mask]
for resnet, transformer_blocks, downsample in self.down_blocks:
mask_down = masks[-1]
x = resnet(x, mask_down, t)
x = rearrange(x, "b c t -> b t c").contiguous()
if streaming is True:
attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, self.static_chunk_size, -1)
else:
attn_mask = add_optional_chunk_mask(x, mask_down.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
hiddens.append(x) # Save hidden states for skip connections
x = downsample(x * mask_down)
masks.append(mask_down[:, :, ::2])
masks = masks[:-1]
mask_mid = masks[-1]
for resnet, transformer_blocks in self.mid_blocks:
x = resnet(x, mask_mid, t)
x = rearrange(x, "b c t -> b t c").contiguous()
if streaming is True:
attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, self.static_chunk_size, -1)
else:
attn_mask = add_optional_chunk_mask(x, mask_mid.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
for resnet, transformer_blocks, upsample in self.up_blocks:
mask_up = masks.pop()
skip = hiddens.pop()
x = pack([x[:, :, :skip.shape[-1]], skip], "b * t")[0]
x = resnet(x, mask_up, t)
x = rearrange(x, "b c t -> b t c").contiguous()
if streaming is True:
attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, self.static_chunk_size, -1)
else:
attn_mask = add_optional_chunk_mask(x, mask_up.bool(), False, False, 0, 0, -1).repeat(1, x.size(1), 1)
attn_mask = mask_to_bias(attn_mask, x.dtype)
for transformer_block in transformer_blocks:
x = transformer_block(
hidden_states=x,
attention_mask=attn_mask,
timestep=t,
)
x = rearrange(x, "b t c -> b c t").contiguous()
x = upsample(x * mask_up)
x = self.final_block(x, mask_up)
output = self.final_proj(x * mask_up)
return output * mask
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/flow_components/upsample_encoder.py | Python | import math
from typing import Optional, Tuple, Union
import torch
import torch.nn as nn
import torch.nn.functional as F
def subsequent_chunk_mask(
size: int,
chunk_size: int,
num_left_chunks: int = -1,
device: torch.device = torch.device("cpu"),
) -> torch.Tensor:
"""Create mask for subsequent steps (size, size) with chunk size,
this is for streaming encoder
Args:
size (int): size of mask
chunk_size (int): size of chunk
num_left_chunks (int): number of left chunks
<0: use full chunk
>=0: use num_left_chunks
device (torch.device): "cpu" or "cuda" or torch.Tensor.device
Returns:
torch.Tensor: mask
Examples:
>>> subsequent_chunk_mask(4, 2)
[[1, 1, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 1],
[1, 1, 1, 1]]
"""
# NOTE this modified implementation meets onnx export requirements, but it doesn't support num_left_chunks
pos_idx = torch.arange(size, device=device)
block_value = (torch.div(pos_idx, chunk_size, rounding_mode='trunc') + 1) * chunk_size
ret = pos_idx.unsqueeze(0) < block_value.unsqueeze(1)
return ret
def add_optional_chunk_mask(xs: torch.Tensor,
masks: torch.Tensor,
use_dynamic_chunk: bool,
use_dynamic_left_chunk: bool,
decoding_chunk_size: int,
static_chunk_size: int,
num_decoding_left_chunks: int,
enable_full_context: bool = True):
""" Apply optional mask for encoder.
Args:
xs (torch.Tensor): padded input, (B, L, D), L for max length
mask (torch.Tensor): mask for xs, (B, 1, L)
use_dynamic_chunk (bool): whether to use dynamic chunk or not
use_dynamic_left_chunk (bool): whether to use dynamic left chunk for
training.
decoding_chunk_size (int): decoding chunk size for dynamic chunk, it's
0: default for training, use random dynamic chunk.
<0: for decoding, use full chunk.
>0: for decoding, use fixed chunk size as set.
static_chunk_size (int): chunk size for static chunk training/decoding
if it's greater than 0, if use_dynamic_chunk is true,
this parameter will be ignored
num_decoding_left_chunks: number of left chunks, this is for decoding,
the chunk size is decoding_chunk_size.
>=0: use num_decoding_left_chunks
<0: use all left chunks
enable_full_context (bool):
True: chunk size is either [1, 25] or full context(max_len)
False: chunk size ~ U[1, 25]
Returns:
torch.Tensor: chunk mask of the input xs.
"""
# Whether to use chunk mask or not
if use_dynamic_chunk:
max_len = xs.size(1)
if decoding_chunk_size < 0:
chunk_size = max_len
num_left_chunks = -1
elif decoding_chunk_size > 0:
chunk_size = decoding_chunk_size
num_left_chunks = num_decoding_left_chunks
else:
# chunk size is either [1, 25] or full context(max_len).
# Since we use 4 times subsampling and allow up to 1s(100 frames)
# delay, the maximum frame is 100 / 4 = 25.
chunk_size = torch.randint(1, max_len, (1, )).item()
num_left_chunks = -1
if chunk_size > max_len // 2 and enable_full_context:
chunk_size = max_len
else:
chunk_size = chunk_size % 25 + 1
if use_dynamic_left_chunk:
max_left_chunks = (max_len - 1) // chunk_size
num_left_chunks = torch.randint(0, max_left_chunks,
(1, )).item()
chunk_masks = subsequent_chunk_mask(xs.size(1), chunk_size,
num_left_chunks,
xs.device) # (L, L)
chunk_masks = chunk_masks.unsqueeze(0) # (1, L, L)
chunk_masks = masks & chunk_masks # (B, L, L)
elif static_chunk_size > 0:
num_left_chunks = num_decoding_left_chunks
chunk_masks = subsequent_chunk_mask(xs.size(1), static_chunk_size,
num_left_chunks,
xs.device) # (L, L)
chunk_masks = chunk_masks.unsqueeze(0) # (1, L, L)
chunk_masks = masks & chunk_masks # (B, L, L)
else:
chunk_masks = masks
assert chunk_masks.dtype == torch.bool
if (chunk_masks.sum(dim=-1) == 0).sum().item() != 0:
print('get chunk_masks all false at some timestep, force set to true, make sure they are masked in futuer computation!')
chunk_masks[chunk_masks.sum(dim=-1) == 0] = True
return chunk_masks
def make_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
"""Make mask tensor containing indices of padded part.
See description of make_non_pad_mask.
Args:
lengths (torch.Tensor): Batch of lengths (B,).
Returns:
torch.Tensor: Mask tensor containing indices of padded part.
Examples:
>>> lengths = [5, 3, 2]
>>> make_pad_mask(lengths)
masks = [[0, 0, 0, 0 ,0],
[0, 0, 0, 1, 1],
[0, 0, 1, 1, 1]]
"""
batch_size = lengths.size(0)
max_len = max_len if max_len > 0 else lengths.max().item()
seq_range = torch.arange(0,
max_len,
dtype=torch.int64,
device=lengths.device)
seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len)
seq_length_expand = lengths.unsqueeze(-1)
mask = seq_range_expand >= seq_length_expand
return mask
class EspnetRelPositionalEncoding(torch.nn.Module):
"""Relative positional encoding module (new implementation).
Details can be found in https://github.com/espnet/espnet/pull/2816.
See : Appendix B in https://arxiv.org/abs/1901.02860
Args:
d_model (int): Embedding dimension.
max_len (int): Maximum input length.
"""
def __init__(self, d_model: int, max_len: int = 5000):
super(EspnetRelPositionalEncoding, self).__init__()
self.d_model = d_model
self.xscale = math.sqrt(self.d_model)
self.pe = None
self.extend_pe(torch.tensor(0.0).expand(1, max_len))
def extend_pe(self, x: torch.Tensor):
"""Reset the positional encodings."""
if self.pe is not None:
# self.pe contains both positive and negative parts
# the length of self.pe is 2 * input_len - 1
if self.pe.size(1) >= x.size(1) * 2 - 1:
if self.pe.dtype != x.dtype or self.pe.device != x.device:
self.pe = self.pe.to(dtype=x.dtype, device=x.device)
return
# Suppose `i` means to the position of query vecotr and `j` means the
# position of key vector. We use position relative positions when keys
# are to the left (i>j) and negative relative positions otherwise (i<j).
pe_positive = torch.zeros(x.size(1), self.d_model)
pe_negative = torch.zeros(x.size(1), self.d_model)
position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1)
div_term = torch.exp(
torch.arange(0, self.d_model, 2, dtype=torch.float32)
* -(math.log(10000.0) / self.d_model)
)
pe_positive[:, 0::2] = torch.sin(position * div_term)
pe_positive[:, 1::2] = torch.cos(position * div_term)
pe_negative[:, 0::2] = torch.sin(-1 * position * div_term)
pe_negative[:, 1::2] = torch.cos(-1 * position * div_term)
# Reserve the order of positive indices and concat both positive and
# negative indices. This is used to support the shifting trick
# as in https://arxiv.org/abs/1901.02860
pe_positive = torch.flip(pe_positive, [0]).unsqueeze(0)
pe_negative = pe_negative[1:].unsqueeze(0)
pe = torch.cat([pe_positive, pe_negative], dim=1)
self.pe = pe.to(device=x.device, dtype=x.dtype)
def forward(self, x: torch.Tensor, offset: Union[int, torch.Tensor] = 0) \
-> Tuple[torch.Tensor, torch.Tensor]:
"""Add positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, time, `*`).
Returns:
torch.Tensor: Encoded tensor (batch, time, `*`).
"""
self.extend_pe(x)
x = x * self.xscale
pos_emb = self.position_encoding(size=x.size(1), offset=offset)
return x, pos_emb
def position_encoding(self,
offset: Union[int, torch.Tensor],
size: int) -> torch.Tensor:
""" For getting encoding in a streaming fashion
Attention!!!!!
we apply dropout only once at the whole utterance level in a none
streaming way, but will call this function several times with
increasing input size in a streaming scenario, so the dropout will
be applied several times.
Args:
offset (int or torch.tensor): start offset
size (int): required size of position encoding
Returns:
torch.Tensor: Corresponding encoding
"""
# How to subscript a Union type:
# https://github.com/pytorch/pytorch/issues/69434
if isinstance(offset, int):
pos_emb = self.pe[
:,
self.pe.size(1) // 2 - size - offset + 1: self.pe.size(1) // 2 + size + offset,
]
elif isinstance(offset, torch.Tensor):
pos_emb = self.pe[
:,
self.pe.size(1) // 2 - size - offset + 1: self.pe.size(1) // 2 + size + offset,
]
return pos_emb
class LinearNoSubsampling(torch.nn.Module):
"""Linear transform the input without subsampling
Args:
idim (int): Input dimension.
odim (int): Output dimension.
pos_enc_class (torch.nn.Module): Positional encoding class.
"""
def __init__(self, idim: int, odim: int,
pos_enc_class: torch.nn.Module):
super().__init__()
self.out = torch.nn.Sequential(
torch.nn.Linear(idim, odim),
torch.nn.LayerNorm(odim, eps=1e-5),
)
self.pos_enc = pos_enc_class
self.right_context = 0
self.subsampling_rate = 1
def forward(
self,
x: torch.Tensor,
x_mask: torch.Tensor,
offset: Union[int, torch.Tensor] = 0
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Input x.
Args:
x (torch.Tensor): Input tensor (#batch, time, idim).
x_mask (torch.Tensor): Input mask (#batch, 1, time).
Returns:
torch.Tensor: linear input tensor (#batch, time', odim),
where time' = time .
torch.Tensor: linear input mask (#batch, 1, time'),
where time' = time .
"""
x = self.out(x)
x, pos_emb = self.pos_enc(x, offset)
return x, pos_emb, x_mask
def position_encoding(self, offset: Union[int, torch.Tensor],
size: int) -> torch.Tensor:
return self.pos_enc.position_encoding(offset, size)
class Upsample1D(nn.Module):
"""A 1D upsampling layer with an optional convolution.
Parameters:
channels (`int`):
number of channels in the inputs and outputs.
use_conv (`bool`, default `False`):
option to use a convolution.
use_conv_transpose (`bool`, default `False`):
option to use a convolution transpose.
out_channels (`int`, optional):
number of output channels. Defaults to `channels`.
"""
def __init__(self, channels: int, out_channels: int, stride: int = 2):
super().__init__()
self.channels = channels
self.out_channels = out_channels
self.stride = stride
# In this mode, first repeat interpolate, than conv with stride=1
self.conv = nn.Conv1d(self.channels, self.out_channels, stride * 2 + 1, stride=1, padding=0)
def forward(self, inputs: torch.Tensor, input_lengths: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
outputs = F.interpolate(inputs, scale_factor=float(self.stride), mode="nearest")
outputs = F.pad(outputs, (self.stride * 2, 0), value=0.0)
outputs = self.conv(outputs)
return outputs, input_lengths * self.stride
class PreLookaheadLayer(nn.Module):
def __init__(self, channels: int, pre_lookahead_len: int = 1):
super().__init__()
self.channels = channels
self.pre_lookahead_len = pre_lookahead_len
self.conv1 = nn.Conv1d(
channels, channels,
kernel_size=pre_lookahead_len + 1,
stride=1, padding=0,
)
self.conv2 = nn.Conv1d(
channels, channels,
kernel_size=3, stride=1, padding=0,
)
def forward(self, inputs: torch.Tensor, context: torch.Tensor = torch.zeros(0, 0, 0)) -> torch.Tensor:
"""
inputs: (batch_size, seq_len, channels)
"""
outputs = inputs.transpose(1, 2).contiguous()
context = context.transpose(1, 2).contiguous()
# look ahead
if context.size(2) == 0:
outputs = F.pad(outputs, (0, self.pre_lookahead_len), mode='constant', value=0.0)
else:
assert self.training is False, 'you have passed context, make sure that you are running inference mode'
assert context.size(2) == self.pre_lookahead_len
outputs = F.pad(torch.concat([outputs, context], dim=2), (0, self.pre_lookahead_len - context.size(2)), mode='constant', value=0.0)
outputs = F.leaky_relu(self.conv1(outputs))
# outputs
outputs = F.pad(outputs, (self.conv2.kernel_size[0] - 1, 0), mode='constant', value=0.0)
outputs = self.conv2(outputs)
outputs = outputs.transpose(1, 2).contiguous()
# residual connection
outputs = outputs + inputs
return outputs
class MultiHeadedAttention(nn.Module):
"""Multi-Head Attention layer.
Args:
n_head (int): The number of heads.
n_feat (int): The number of features.
dropout_rate (float): Dropout rate.
key_bias (bool): Whether to use bias in key linear layer.
"""
def __init__(self,
n_head: int,
n_feat: int,
dropout_rate: float,
key_bias: bool = True):
super().__init__()
assert n_feat % n_head == 0
# We assume d_v always equals d_k
self.d_k = n_feat // n_head
self.h = n_head
self.linear_q = nn.Linear(n_feat, n_feat)
self.linear_k = nn.Linear(n_feat, n_feat, bias=key_bias)
self.linear_v = nn.Linear(n_feat, n_feat)
self.linear_out = nn.Linear(n_feat, n_feat)
self.dropout = nn.Dropout(p=dropout_rate)
def forward_qkv(
self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""Transform query, key and value.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
Returns:
torch.Tensor: Transformed query tensor, size
(#batch, n_head, time1, d_k).
torch.Tensor: Transformed key tensor, size
(#batch, n_head, time2, d_k).
torch.Tensor: Transformed value tensor, size
(#batch, n_head, time2, d_k).
"""
n_batch = query.size(0)
q = self.linear_q(query).view(n_batch, -1, self.h, self.d_k)
k = self.linear_k(key).view(n_batch, -1, self.h, self.d_k)
v = self.linear_v(value).view(n_batch, -1, self.h, self.d_k)
q = q.transpose(1, 2) # (batch, head, time1, d_k)
k = k.transpose(1, 2) # (batch, head, time2, d_k)
v = v.transpose(1, 2) # (batch, head, time2, d_k)
return q, k, v
def forward_attention(
self,
value: torch.Tensor,
scores: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool)
) -> torch.Tensor:
"""Compute attention context vector.
Args:
value (torch.Tensor): Transformed value, size
(#batch, n_head, time2, d_k).
scores (torch.Tensor): Attention score, size
(#batch, n_head, time1, time2).
mask (torch.Tensor): Mask, size (#batch, 1, time2) or
(#batch, time1, time2), (0, 0, 0) means fake mask.
Returns:
torch.Tensor: Transformed value (#batch, time1, d_model)
weighted by the attention score (#batch, time1, time2).
"""
n_batch = value.size(0)
# NOTE(xcsong): When will `if mask.size(2) > 0` be True?
# 1. onnx(16/4) [WHY? Because we feed real cache & real mask for the
# 1st chunk to ease the onnx export.]
# 2. pytorch training
if mask.size(2) > 0: # time2 > 0
mask = mask.unsqueeze(1).eq(0) # (batch, 1, *, time2)
# For last chunk, time2 might be larger than scores.size(-1)
mask = mask[:, :, :, :scores.size(-1)] # (batch, 1, *, time2)
scores = scores.masked_fill(mask, -float('inf'))
attn = torch.softmax(scores, dim=-1).masked_fill(
mask, 0.0) # (batch, head, time1, time2)
# NOTE(xcsong): When will `if mask.size(2) > 0` be False?
# 1. onnx(16/-1, -1/-1, 16/0)
# 2. jit (16/-1, -1/-1, 16/0, 16/4)
else:
attn = torch.softmax(scores, dim=-1) # (batch, head, time1, time2)
p_attn = self.dropout(attn)
x = torch.matmul(p_attn, value) # (batch, head, time1, d_k)
x = (x.transpose(1, 2).contiguous().view(n_batch, -1,
self.h * self.d_k)
) # (batch, time1, d_model)
return self.linear_out(x) # (batch, time1, d_model)
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
pos_emb: torch.Tensor = torch.empty(0),
cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Compute scaled dot product attention.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
(#batch, time1, time2).
1.When applying cross attention between decoder and encoder,
the batch padding mask for input is in (#batch, 1, T) shape.
2.When applying self attention of encoder,
the mask is in (#batch, T, T) shape.
3.When applying self attention of decoder,
the mask is in (#batch, L, L) shape.
4.If the different position in decoder see different block
of the encoder, such as Mocha, the passed in mask could be
in (#batch, L, T) shape. But there is no such case in current
CosyVoice.
cache (torch.Tensor): Cache tensor (1, head, cache_t, d_k * 2),
where `cache_t == chunk_size * num_decoding_left_chunks`
and `head * d_k == size`
Returns:
torch.Tensor: Output tensor (#batch, time1, d_model).
torch.Tensor: Cache tensor (1, head, cache_t + time1, d_k * 2)
where `cache_t == chunk_size * num_decoding_left_chunks`
and `head * d_k == size`
"""
q, k, v = self.forward_qkv(query, key, value)
# NOTE(xcsong):
# when export onnx model, for 1st chunk, we feed
# cache(1, head, 0, d_k * 2) (16/-1, -1/-1, 16/0 mode)
# or cache(1, head, real_cache_t, d_k * 2) (16/4 mode).
# In all modes, `if cache.size(0) > 0` will alwayse be `True`
# and we will always do splitting and
# concatnation(this will simplify onnx export). Note that
# it's OK to concat & split zero-shaped tensors(see code below).
# when export jit model, for 1st chunk, we always feed
# cache(0, 0, 0, 0) since jit supports dynamic if-branch.
# >>> a = torch.ones((1, 2, 0, 4))
# >>> b = torch.ones((1, 2, 3, 4))
# >>> c = torch.cat((a, b), dim=2)
# >>> torch.equal(b, c) # True
# >>> d = torch.split(a, 2, dim=-1)
# >>> torch.equal(d[0], d[1]) # True
if cache.size(0) > 0:
key_cache, value_cache = torch.split(cache,
cache.size(-1) // 2,
dim=-1)
k = torch.cat([key_cache, k], dim=2)
v = torch.cat([value_cache, v], dim=2)
# NOTE(xcsong): We do cache slicing in encoder.forward_chunk, since it's
# non-trivial to calculate `next_cache_start` here.
new_cache = torch.cat((k, v), dim=-1)
scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.d_k)
return self.forward_attention(v, scores, mask), new_cache
class RelPositionMultiHeadedAttention(MultiHeadedAttention):
"""Multi-Head Attention layer with relative position encoding.
Paper: https://arxiv.org/abs/1901.02860
Args:
n_head (int): The number of heads.
n_feat (int): The number of features.
dropout_rate (float): Dropout rate.
key_bias (bool): Whether to use bias in key linear layer.
"""
def __init__(self,
n_head: int,
n_feat: int,
dropout_rate: float,
key_bias: bool = True):
super().__init__(n_head, n_feat, dropout_rate, key_bias)
# linear transformation for positional encoding
self.linear_pos = nn.Linear(n_feat, n_feat, bias=False)
# these two learnable bias are used in matrix c and matrix d
# as described in https://arxiv.org/abs/1901.02860 Section 3.3
self.pos_bias_u = nn.Parameter(torch.Tensor(self.h, self.d_k))
self.pos_bias_v = nn.Parameter(torch.Tensor(self.h, self.d_k))
torch.nn.init.xavier_uniform_(self.pos_bias_u)
torch.nn.init.xavier_uniform_(self.pos_bias_v)
def rel_shift(self, x: torch.Tensor) -> torch.Tensor:
"""Compute relative positional encoding.
Args:
x (torch.Tensor): Input tensor (batch, head, time1, 2*time1-1).
time1 means the length of query vector.
Returns:
torch.Tensor: Output tensor.
"""
zero_pad = torch.zeros((x.size()[0], x.size()[1], x.size()[2], 1),
device=x.device,
dtype=x.dtype)
x_padded = torch.cat([zero_pad, x], dim=-1)
x_padded = x_padded.view(x.size()[0],
x.size()[1],
x.size(3) + 1, x.size(2))
x = x_padded[:, :, 1:].view_as(x)[
:, :, :, : x.size(-1) // 2 + 1
] # only keep the positions from 0 to time2
return x
def forward(
self,
query: torch.Tensor,
key: torch.Tensor,
value: torch.Tensor,
mask: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
pos_emb: torch.Tensor = torch.empty(0),
cache: torch.Tensor = torch.zeros((0, 0, 0, 0))
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Compute 'Scaled Dot Product Attention' with rel. positional encoding.
Args:
query (torch.Tensor): Query tensor (#batch, time1, size).
key (torch.Tensor): Key tensor (#batch, time2, size).
value (torch.Tensor): Value tensor (#batch, time2, size).
mask (torch.Tensor): Mask tensor (#batch, 1, time2) or
(#batch, time1, time2), (0, 0, 0) means fake mask.
pos_emb (torch.Tensor): Positional embedding tensor
(#batch, time2, size).
cache (torch.Tensor): Cache tensor (1, head, cache_t, d_k * 2),
where `cache_t == chunk_size * num_decoding_left_chunks`
and `head * d_k == size`
Returns:
torch.Tensor: Output tensor (#batch, time1, d_model).
torch.Tensor: Cache tensor (1, head, cache_t + time1, d_k * 2)
where `cache_t == chunk_size * num_decoding_left_chunks`
and `head * d_k == size`
"""
q, k, v = self.forward_qkv(query, key, value)
q = q.transpose(1, 2) # (batch, time1, head, d_k)
# NOTE(xcsong):
# when export onnx model, for 1st chunk, we feed
# cache(1, head, 0, d_k * 2) (16/-1, -1/-1, 16/0 mode)
# or cache(1, head, real_cache_t, d_k * 2) (16/4 mode).
# In all modes, `if cache.size(0) > 0` will alwayse be `True`
# and we will always do splitting and
# concatnation(this will simplify onnx export). Note that
# it's OK to concat & split zero-shaped tensors(see code below).
# when export jit model, for 1st chunk, we always feed
# cache(0, 0, 0, 0) since jit supports dynamic if-branch.
# >>> a = torch.ones((1, 2, 0, 4))
# >>> b = torch.ones((1, 2, 3, 4))
# >>> c = torch.cat((a, b), dim=2)
# >>> torch.equal(b, c) # True
# >>> d = torch.split(a, 2, dim=-1)
# >>> torch.equal(d[0], d[1]) # True
if cache.size(0) > 0:
key_cache, value_cache = torch.split(cache,
cache.size(-1) // 2,
dim=-1)
k = torch.cat([key_cache, k], dim=2)
v = torch.cat([value_cache, v], dim=2)
# NOTE(xcsong): We do cache slicing in encoder.forward_chunk, since it's
# non-trivial to calculate `next_cache_start` here.
new_cache = torch.cat((k, v), dim=-1)
n_batch_pos = pos_emb.size(0)
p = self.linear_pos(pos_emb).view(n_batch_pos, -1, self.h, self.d_k)
p = p.transpose(1, 2) # (batch, head, time1, d_k)
# (batch, head, time1, d_k)
q_with_bias_u = (q + self.pos_bias_u).transpose(1, 2)
# (batch, head, time1, d_k)
q_with_bias_v = (q + self.pos_bias_v).transpose(1, 2)
# compute attention score
# first compute matrix a and matrix c
# as described in https://arxiv.org/abs/1901.02860 Section 3.3
# (batch, head, time1, time2)
matrix_ac = torch.matmul(q_with_bias_u, k.transpose(-2, -1))
# compute matrix b and matrix d
# (batch, head, time1, time2)
matrix_bd = torch.matmul(q_with_bias_v, p.transpose(-2, -1))
# NOTE(Xiang Lyu): Keep rel_shift since espnet rel_pos_emb is used
if matrix_ac.shape != matrix_bd.shape:
matrix_bd = self.rel_shift(matrix_bd)
scores = (matrix_ac + matrix_bd) / math.sqrt(
self.d_k) # (batch, head, time1, time2)
return self.forward_attention(v, scores, mask), new_cache
class PositionwiseFeedForward(torch.nn.Module):
"""Positionwise feed forward layer.
FeedForward are appied on each position of the sequence.
The output dim is same with the input dim.
Args:
idim (int): Input dimenstion.
hidden_units (int): The number of hidden units.
dropout_rate (float): Dropout rate.
activation (torch.nn.Module): Activation function
"""
def __init__(
self,
idim: int,
hidden_units: int,
dropout_rate: float,
activation: torch.nn.Module = torch.nn.ReLU(),
):
super(PositionwiseFeedForward, self).__init__()
self.w_1 = torch.nn.Linear(idim, hidden_units)
self.activation = activation
self.dropout = torch.nn.Dropout(dropout_rate)
self.w_2 = torch.nn.Linear(hidden_units, idim)
def forward(self, xs: torch.Tensor) -> torch.Tensor:
"""Forward function.
Args:
xs: input tensor (B, L, D)
Returns:
output tensor, (B, L, D)
"""
return self.w_2(self.dropout(self.activation(self.w_1(xs))))
class ConformerEncoderLayer(nn.Module):
"""Encoder layer module.
Args:
size (int): Input dimension.
self_attn (torch.nn.Module): Self-attention module instance.
`MultiHeadedAttention` or `RelPositionMultiHeadedAttention`
instance can be used as the argument.
feed_forward (torch.nn.Module): Feed-forward module instance.
`PositionwiseFeedForward` instance can be used as the argument.
feed_forward_macaron (torch.nn.Module): Additional feed-forward module
instance.
`PositionwiseFeedForward` instance can be used as the argument.
conv_module (torch.nn.Module): Convolution module instance.
`ConvlutionModule` instance can be used as the argument.
dropout_rate (float): Dropout rate.
normalize_before (bool):
True: use layer_norm before each sub-block.
False: use layer_norm after each sub-block.
"""
def __init__(
self,
size: int,
self_attn: torch.nn.Module,
feed_forward: Optional[nn.Module] = None,
feed_forward_macaron: Optional[nn.Module] = None,
conv_module: Optional[nn.Module] = None,
dropout_rate: float = 0.0,
normalize_before: bool = True,
):
super().__init__()
self.self_attn = self_attn
self.feed_forward = feed_forward
self.feed_forward_macaron = feed_forward_macaron
self.conv_module = conv_module
self.norm_ff = nn.LayerNorm(size, eps=1e-12) # for the FNN module
self.norm_mha = nn.LayerNorm(size, eps=1e-12) # for the MHA module
if feed_forward_macaron is not None:
self.norm_ff_macaron = nn.LayerNorm(size, eps=1e-12)
self.ff_scale = 0.5
else:
self.ff_scale = 1.0
if self.conv_module is not None:
self.norm_conv = nn.LayerNorm(size, eps=1e-12) # for the CNN module
self.norm_final = nn.LayerNorm(
size, eps=1e-12) # for the final output of the block
self.dropout = nn.Dropout(dropout_rate)
self.size = size
self.normalize_before = normalize_before
def forward(
self,
x: torch.Tensor,
mask: torch.Tensor,
pos_emb: torch.Tensor,
mask_pad: torch.Tensor = torch.ones((0, 0, 0), dtype=torch.bool),
att_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
cnn_cache: torch.Tensor = torch.zeros((0, 0, 0, 0)),
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
"""Compute encoded features.
Args:
x (torch.Tensor): (#batch, time, size)
mask (torch.Tensor): Mask tensor for the input (#batch, time,time),
(0, 0, 0) means fake mask.
pos_emb (torch.Tensor): positional encoding, must not be None
for ConformerEncoderLayer.
mask_pad (torch.Tensor): batch padding mask used for conv module.
(#batch, 1,time), (0, 0, 0) means fake mask.
att_cache (torch.Tensor): Cache tensor of the KEY & VALUE
(#batch=1, head, cache_t1, d_k * 2), head * d_k == size.
cnn_cache (torch.Tensor): Convolution cache in conformer layer
(#batch=1, size, cache_t2)
Returns:
torch.Tensor: Output tensor (#batch, time, size).
torch.Tensor: Mask tensor (#batch, time, time).
torch.Tensor: att_cache tensor,
(#batch=1, head, cache_t1 + time, d_k * 2).
torch.Tensor: cnn_cahce tensor (#batch, size, cache_t2).
"""
# whether to use macaron style
if self.feed_forward_macaron is not None:
residual = x
if self.normalize_before:
x = self.norm_ff_macaron(x)
x = residual + self.ff_scale * self.dropout(
self.feed_forward_macaron(x))
if not self.normalize_before:
x = self.norm_ff_macaron(x)
# multi-headed self-attention module
residual = x
if self.normalize_before:
x = self.norm_mha(x)
x_att, new_att_cache = self.self_attn(x, x, x, mask, pos_emb,
att_cache)
x = residual + self.dropout(x_att)
if not self.normalize_before:
x = self.norm_mha(x)
# convolution module
# Fake new cnn cache here, and then change it in conv_module
new_cnn_cache = torch.zeros((0, 0, 0), dtype=x.dtype, device=x.device)
if self.conv_module is not None:
residual = x
if self.normalize_before:
x = self.norm_conv(x)
x, new_cnn_cache = self.conv_module(x, mask_pad, cnn_cache)
x = residual + self.dropout(x)
if not self.normalize_before:
x = self.norm_conv(x)
# feed forward module
residual = x
if self.normalize_before:
x = self.norm_ff(x)
x = residual + self.ff_scale * self.dropout(self.feed_forward(x))
if not self.normalize_before:
x = self.norm_ff(x)
if self.conv_module is not None:
x = self.norm_final(x)
return x, mask, new_att_cache, new_cnn_cache
class UpsampleConformerEncoder(torch.nn.Module):
"""
Args:
input_size (int): input dim
output_size (int): dimension of attention
attention_heads (int): the number of heads of multi head attention
linear_units (int): the hidden units number of position-wise feed
forward
num_blocks (int): the number of decoder blocks
static_chunk_size (int): chunk size for static chunk training and
decoding
use_dynamic_chunk (bool): whether use dynamic chunk size for
training or not, You can only use fixed chunk(chunk_size > 0)
or dyanmic chunk size(use_dynamic_chunk = True)
use_dynamic_left_chunk (bool): whether use dynamic left chunk in
dynamic chunk training
key_bias: whether use bias in attention.linear_k, False for whisper models.
"""
def __init__(
self,
input_size: int = 512,
output_size: int = 512,
attention_heads: int = 8,
linear_units: int = 2048,
num_blocks: int = 6,
static_chunk_size: int = 25,
use_dynamic_chunk: bool = False,
use_dynamic_left_chunk: bool = False,
key_bias: bool = True,
):
super().__init__()
self._output_size = output_size
self.embed = LinearNoSubsampling(
input_size, output_size,
EspnetRelPositionalEncoding(output_size),
)
self.after_norm = torch.nn.LayerNorm(output_size, eps=1e-5)
self.static_chunk_size = static_chunk_size
self.use_dynamic_chunk = use_dynamic_chunk
self.use_dynamic_left_chunk = use_dynamic_left_chunk
activation = torch.nn.SiLU()
# self-attention module definition
encoder_selfattn_layer_args = (
attention_heads,
output_size,
0.0,
key_bias,
)
# feed-forward module definition
positionwise_layer_args = (
output_size,
linear_units,
0.0,
activation,
)
# convolution module definition
self.pre_lookahead_layer = PreLookaheadLayer(channels=512, pre_lookahead_len=3)
self.encoders = torch.nn.ModuleList([
ConformerEncoderLayer(
output_size,
RelPositionMultiHeadedAttention(*encoder_selfattn_layer_args),
PositionwiseFeedForward(*positionwise_layer_args),
) for _ in range(num_blocks)
])
self.up_layer = Upsample1D(channels=512, out_channels=512, stride=2)
self.up_embed = LinearNoSubsampling(
input_size, output_size,
EspnetRelPositionalEncoding(output_size),
)
self.up_encoders = torch.nn.ModuleList([
ConformerEncoderLayer(
output_size,
RelPositionMultiHeadedAttention(*encoder_selfattn_layer_args),
PositionwiseFeedForward(*positionwise_layer_args),
) for _ in range(4)
])
def output_size(self) -> int:
return self._output_size
def forward(
self,
xs: torch.Tensor,
xs_lens: torch.Tensor,
context: torch.Tensor = torch.zeros(0, 0, 0),
decoding_chunk_size: int = 0,
num_decoding_left_chunks: int = -1,
streaming: bool = False,
) -> Tuple[torch.Tensor, torch.Tensor]:
"""Embed positions in tensor.
Args:
xs: padded input tensor (B, T, D)
xs_lens: input length (B)
decoding_chunk_size: decoding chunk size for dynamic chunk
0: default for training, use random dynamic chunk.
<0: for decoding, use full chunk.
>0: for decoding, use fixed chunk size as set.
num_decoding_left_chunks: number of left chunks, this is for decoding,
the chunk size is decoding_chunk_size.
>=0: use num_decoding_left_chunks
<0: use all left chunks
Returns:
encoder output tensor xs, and subsampled masks
xs: padded output tensor (B, T' ~= T/subsample_rate, D)
masks: torch.Tensor batch padding mask after subsample
(B, 1, T' ~= T/subsample_rate)
NOTE(xcsong):
We pass the `__call__` method of the modules instead of `forward` to the
checkpointing API because `__call__` attaches all the hooks of the module.
https://discuss.pytorch.org/t/any-different-between-model-input-and-model-forward-input/3690/2
"""
T = xs.size(1)
masks = ~make_pad_mask(xs_lens, T).unsqueeze(1) # (B, 1, T)
xs, pos_emb, masks = self.embed(xs, masks)
if context.size(1) != 0:
assert self.training is False, 'you have passed context, make sure that you are running inference mode'
context_masks = torch.ones(1, 1, context.size(1)).to(masks)
context, _, _ = self.embed(context, context_masks, offset=xs.size(1))
mask_pad = masks # (B, 1, T/subsample_rate)
chunk_masks = add_optional_chunk_mask(xs, masks, False, False, 0, self.static_chunk_size if streaming is True else 0, -1)
# lookahead + conformer encoder
xs = self.pre_lookahead_layer(xs, context=context)
xs = self.forward_layers(xs, chunk_masks, pos_emb, mask_pad)
# upsample + conformer encoder
xs = xs.transpose(1, 2).contiguous()
xs, xs_lens = self.up_layer(xs, xs_lens)
xs = xs.transpose(1, 2).contiguous()
T = xs.size(1)
masks = ~make_pad_mask(xs_lens, T).unsqueeze(1) # (B, 1, T)
xs, pos_emb, masks = self.up_embed(xs, masks)
mask_pad = masks # (B, 1, T/subsample_rate)
chunk_masks = add_optional_chunk_mask(xs, masks, False, False, 0, self.static_chunk_size * self.up_layer.stride if streaming is True else 0, -1)
xs = self.forward_up_layers(xs, chunk_masks, pos_emb, mask_pad)
xs = self.after_norm(xs)
# Here we assume the mask is not changed in encoder layers, so just
# return the masks before encoder layers, and the masks will be used
# for cross attention with decoder later
return xs, masks
def forward_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
pos_emb: torch.Tensor,
mask_pad: torch.Tensor) -> torch.Tensor:
for layer in self.encoders:
xs, chunk_masks, _, _ = layer(xs, chunk_masks, pos_emb, mask_pad)
return xs
def forward_up_layers(self, xs: torch.Tensor, chunk_masks: torch.Tensor,
pos_emb: torch.Tensor,
mask_pad: torch.Tensor) -> torch.Tensor:
for layer in self.up_encoders:
xs, chunk_masks, _, _ = layer(xs, chunk_masks, pos_emb, mask_pad)
return xs
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/hifigan.py | Python | # Copyright (c) 2024 Alibaba Inc (authors: Xiang Lyu, Kai Hu)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""HIFI-GAN"""
from typing import Dict, List
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import get_window
from torch.nn import Conv1d, ConvTranspose1d
from torch.nn.utils import remove_weight_norm
try:
from torch.nn.utils.parametrizations import weight_norm
except ImportError:
from torch.nn.utils import weight_norm # noqa
from flashcosyvoice.modules.hifigan_components.layers import (
ResBlock, SourceModuleHnNSF, SourceModuleHnNSF2, init_weights)
class ConvRNNF0Predictor(nn.Module):
def __init__(self,
num_class: int = 1,
in_channels: int = 80,
cond_channels: int = 512
):
super().__init__()
self.num_class = num_class
self.condnet = nn.Sequential(
weight_norm( # noqa
nn.Conv1d(in_channels, cond_channels, kernel_size=3, padding=1)
),
nn.ELU(),
weight_norm( # noqa
nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
),
nn.ELU(),
weight_norm( # noqa
nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
),
nn.ELU(),
weight_norm( # noqa
nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
),
nn.ELU(),
weight_norm( # noqa
nn.Conv1d(cond_channels, cond_channels, kernel_size=3, padding=1)
),
nn.ELU(),
)
self.classifier = nn.Linear(in_features=cond_channels, out_features=self.num_class)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.condnet(x)
x = x.transpose(1, 2)
return torch.abs(self.classifier(x).squeeze(-1))
class HiFTGenerator(nn.Module):
"""
HiFTNet Generator: Neural Source Filter + ISTFTNet
https://arxiv.org/abs/2309.09493
"""
def __init__(
self,
in_channels: int = 80,
base_channels: int = 512,
nb_harmonics: int = 8,
sampling_rate: int = 24000,
nsf_alpha: float = 0.1,
nsf_sigma: float = 0.003,
nsf_voiced_threshold: float = 10,
upsample_rates: List[int] = [8, 5, 3], # noqa
upsample_kernel_sizes: List[int] = [16, 11, 7], # noqa
istft_params: Dict[str, int] = {"n_fft": 16, "hop_len": 4}, # noqa
resblock_kernel_sizes: List[int] = [3, 7, 11], # noqa
resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5], [1, 3, 5]], # noqa
source_resblock_kernel_sizes: List[int] = [7, 7, 11], # noqa
source_resblock_dilation_sizes: List[List[int]] = [[1, 3, 5], [1, 3, 5], [1, 3, 5]], # noqa
lrelu_slope: float = 0.1,
audio_limit: float = 0.99,
f0_predictor: torch.nn.Module = None,
):
super(HiFTGenerator, self).__init__()
self.out_channels = 1
self.nb_harmonics = nb_harmonics
self.sampling_rate = sampling_rate
self.istft_params = istft_params
self.lrelu_slope = lrelu_slope
self.audio_limit = audio_limit
self.num_kernels = len(resblock_kernel_sizes)
self.num_upsamples = len(upsample_rates)
# NOTE in CosyVoice2, we use the original SourceModuleHnNSF implementation
this_SourceModuleHnNSF = SourceModuleHnNSF if self.sampling_rate == 22050 else SourceModuleHnNSF2
self.m_source = this_SourceModuleHnNSF(
sampling_rate=sampling_rate,
upsample_scale=np.prod(upsample_rates) * istft_params["hop_len"],
harmonic_num=nb_harmonics,
sine_amp=nsf_alpha,
add_noise_std=nsf_sigma,
voiced_threshod=nsf_voiced_threshold)
self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates) * istft_params["hop_len"])
self.conv_pre = weight_norm( # noqa
Conv1d(in_channels, base_channels, 7, 1, padding=3)
)
# Up
self.ups = nn.ModuleList()
for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)):
self.ups.append(
weight_norm( # noqa
ConvTranspose1d(
base_channels // (2**i),
base_channels // (2**(i + 1)),
k,
u,
padding=(k - u) // 2,
)
)
)
# Down
self.source_downs = nn.ModuleList()
self.source_resblocks = nn.ModuleList()
downsample_rates = [1] + upsample_rates[::-1][:-1]
downsample_cum_rates = np.cumprod(downsample_rates)
for i, (u, k, d) in enumerate(zip(downsample_cum_rates[::-1], source_resblock_kernel_sizes, source_resblock_dilation_sizes)):
if u == 1:
self.source_downs.append(
Conv1d(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), 1, 1)
)
else:
self.source_downs.append(
Conv1d(istft_params["n_fft"] + 2, base_channels // (2 ** (i + 1)), u * 2, u, padding=(u // 2))
)
self.source_resblocks.append(
ResBlock(base_channels // (2 ** (i + 1)), k, d)
)
self.resblocks = nn.ModuleList()
for i in range(len(self.ups)):
ch = base_channels // (2**(i + 1))
for _, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)):
self.resblocks.append(ResBlock(ch, k, d))
self.conv_post = weight_norm(Conv1d(ch, istft_params["n_fft"] + 2, 7, 1, padding=3)) # noqa
self.ups.apply(init_weights)
self.conv_post.apply(init_weights)
self.reflection_pad = nn.ReflectionPad1d((1, 0))
self.stft_window = torch.from_numpy(get_window("hann", istft_params["n_fft"], fftbins=True).astype(np.float32))
self.f0_predictor = ConvRNNF0Predictor() if f0_predictor is None else f0_predictor
def remove_weight_norm(self):
print('Removing weight norm...')
for up in self.ups:
remove_weight_norm(up)
for resblock in self.resblocks:
resblock.remove_weight_norm()
remove_weight_norm(self.conv_pre)
remove_weight_norm(self.conv_post)
self.m_source.remove_weight_norm()
for source_down in self.source_downs:
remove_weight_norm(source_down)
for source_resblock in self.source_resblocks:
source_resblock.remove_weight_norm()
def _stft(self, x):
spec = torch.stft(
x,
self.istft_params["n_fft"], self.istft_params["hop_len"], self.istft_params["n_fft"], window=self.stft_window.to(x.device),
return_complex=True)
spec = torch.view_as_real(spec) # [B, F, TT, 2]
return spec[..., 0], spec[..., 1]
def _istft(self, magnitude, phase):
magnitude = torch.clip(magnitude, max=1e2)
real = magnitude * torch.cos(phase)
img = magnitude * torch.sin(phase)
inverse_transform = torch.istft(torch.complex(real, img), self.istft_params["n_fft"], self.istft_params["hop_len"],
self.istft_params["n_fft"], window=self.stft_window.to(magnitude.device))
return inverse_transform
def decode(self, x: torch.Tensor, s: torch.Tensor = torch.zeros(1, 1, 0)) -> torch.Tensor:
s_stft_real, s_stft_imag = self._stft(s.squeeze(1))
s_stft = torch.cat([s_stft_real, s_stft_imag], dim=1)
x = self.conv_pre(x)
for i in range(self.num_upsamples):
x = F.leaky_relu(x, self.lrelu_slope)
x = self.ups[i](x)
if i == self.num_upsamples - 1:
x = self.reflection_pad(x)
# fusion
si = self.source_downs[i](s_stft)
si = self.source_resblocks[i](si)
x = x + si
xs = None
for j in range(self.num_kernels):
if xs is None:
xs = self.resblocks[i * self.num_kernels + j](x)
else:
xs += self.resblocks[i * self.num_kernels + j](x)
x = xs / self.num_kernels
x = F.leaky_relu(x)
x = self.conv_post(x)
magnitude = torch.exp(x[:, :self.istft_params["n_fft"] // 2 + 1, :])
phase = torch.sin(x[:, self.istft_params["n_fft"] // 2 + 1:, :]) # actually, sin is redundancy
x = self._istft(magnitude, phase)
x = torch.clamp(x, -self.audio_limit, self.audio_limit)
return x
@torch.inference_mode()
def forward(self, speech_feat: torch.Tensor, cache_source: torch.Tensor = torch.zeros(1, 1, 0)) -> torch.Tensor:
# mel->f0
f0 = self.f0_predictor(speech_feat)
# f0->source
s = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t
s, _, _ = self.m_source(s)
s = s.transpose(1, 2)
# use cache_source to avoid glitch
if cache_source.shape[2] != 0:
s[:, :, :cache_source.shape[2]] = cache_source
generated_speech = self.decode(x=speech_feat, s=s)
return generated_speech, s
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/hifigan_components/layers.py | Python | from typing import List
import numpy as np
import torch
import torch.nn as nn
from torch.distributions.uniform import Uniform
from torch.nn import Conv1d
from torch.nn.utils import remove_weight_norm
try:
from torch.nn.utils.parametrizations import weight_norm
except ImportError:
from torch.nn.utils import weight_norm # noqa
def get_padding(kernel_size, dilation=1):
return int((kernel_size * dilation - dilation) / 2)
def init_weights(m, mean=0.0, std=0.01):
classname = m.__class__.__name__
if classname.find("Conv") != -1:
m.weight.data.normal_(mean, std)
"""hifigan based generator implementation.
This code is modified from https://github.com/jik876/hifi-gan
,https://github.com/kan-bayashi/ParallelWaveGAN and
https://github.com/NVIDIA/BigVGAN
"""
# Implementation adapted from https://github.com/EdwardDixon/snake under the MIT license.
# LICENSE is in incl_licenses directory.
class Snake(nn.Module):
'''
Implementation of a sine-based periodic activation function
Shape:
- Input: (B, C, T)
- Output: (B, C, T), same shape as the input
Parameters:
- alpha - trainable parameter
References:
- This activation function is from this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda:
https://arxiv.org/abs/2006.08195
Examples:
>>> a1 = snake(256)
>>> x = torch.randn(256)
>>> x = a1(x)
Args:
in_features: shape of the input
alpha: trainable parameter
alpha_trainable: whether alpha is trainable
alpha_logscale: whether to use log scale for alpha
alpha is initialized to 1 by default, higher values = higher-frequency.
alpha will be trained along with the rest of your model.
'''
def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False):
super(Snake, self).__init__()
self.in_features = in_features
# initialize alpha
self.alpha_logscale = alpha_logscale
if self.alpha_logscale: # log scale alphas initialized to zeros
self.alpha = nn.Parameter(torch.zeros(in_features) * alpha)
else: # linear scale alphas initialized to ones
self.alpha = nn.Parameter(torch.ones(in_features) * alpha)
self.alpha.requires_grad = alpha_trainable
self.no_div_by_zero = 0.000000001
def forward(self, x):
'''
Forward pass of the function.
Applies the function to the input elementwise.
Snake ∶= x + 1/a * sin^2 (xa)
'''
alpha = self.alpha.unsqueeze(0).unsqueeze(-1) # line up with x to [B, C, T]
if self.alpha_logscale:
alpha = torch.exp(alpha)
x = x + (1.0 / (alpha + self.no_div_by_zero)) * torch.pow(torch.sin(x * alpha), 2)
return x
class ResBlock(torch.nn.Module):
"""Residual block module in HiFiGAN/BigVGAN."""
def __init__(
self,
channels: int = 512,
kernel_size: int = 3,
dilations: List[int] = [1, 3, 5], # noqa
):
super(ResBlock, self).__init__()
self.convs1 = nn.ModuleList()
self.convs2 = nn.ModuleList()
for dilation in dilations:
self.convs1.append(
weight_norm( # noqa
Conv1d(
channels,
channels,
kernel_size,
1,
dilation=dilation,
padding=get_padding(kernel_size, dilation)
)
)
)
self.convs2.append(
weight_norm( # noqa
Conv1d(
channels,
channels,
kernel_size,
1,
dilation=1,
padding=get_padding(kernel_size, 1)
)
)
)
self.convs1.apply(init_weights)
self.convs2.apply(init_weights)
self.activations1 = nn.ModuleList([
Snake(channels, alpha_logscale=False)
for _ in range(len(self.convs1))
])
self.activations2 = nn.ModuleList([
Snake(channels, alpha_logscale=False)
for _ in range(len(self.convs2))
])
def forward(self, x: torch.Tensor) -> torch.Tensor:
for idx in range(len(self.convs1)):
xt = self.activations1[idx](x)
xt = self.convs1[idx](xt)
xt = self.activations2[idx](xt)
xt = self.convs2[idx](xt)
x = xt + x
return x
def remove_weight_norm(self):
for idx in range(len(self.convs1)):
remove_weight_norm(self.convs1[idx])
remove_weight_norm(self.convs2[idx])
class SineGen(torch.nn.Module):
""" Definition of sine generator
SineGen(samp_rate, harmonic_num = 0,
sine_amp = 0.1, noise_std = 0.003,
voiced_threshold = 0,
flag_for_pulse=False)
samp_rate: sampling rate in Hz
harmonic_num: number of harmonic overtones (default 0)
sine_amp: amplitude of sine-wavefrom (default 0.1)
noise_std: std of Gaussian noise (default 0.003)
voiced_thoreshold: F0 threshold for U/V classification (default 0)
flag_for_pulse: this SinGen is used inside PulseGen (default False)
Note: when flag_for_pulse is True, the first time step of a voiced
segment is always sin(np.pi) or cos(0)
"""
def __init__(self, samp_rate, harmonic_num=0,
sine_amp=0.1, noise_std=0.003,
voiced_threshold=0):
super(SineGen, self).__init__()
self.sine_amp = sine_amp
self.noise_std = noise_std
self.harmonic_num = harmonic_num
self.sampling_rate = samp_rate
self.voiced_threshold = voiced_threshold
def _f02uv(self, f0):
# generate uv signal
uv = (f0 > self.voiced_threshold).type(torch.float32)
return uv
@torch.no_grad()
def forward(self, f0):
"""
:param f0: [B, 1, sample_len], Hz
:return: [B, 1, sample_len]
"""
F_mat = torch.zeros((f0.size(0), self.harmonic_num + 1, f0.size(-1))).to(f0.device)
for i in range(self.harmonic_num + 1):
F_mat[:, i: i + 1, :] = f0 * (i + 1) / self.sampling_rate
theta_mat = 2 * np.pi * (torch.cumsum(F_mat, dim=-1) % 1)
u_dist = Uniform(low=-np.pi, high=np.pi)
phase_vec = u_dist.sample(sample_shape=(f0.size(0), self.harmonic_num + 1, 1)).to(F_mat.device)
phase_vec[:, 0, :] = 0
# generate sine waveforms
sine_waves = self.sine_amp * torch.sin(theta_mat + phase_vec)
# generate uv signal
uv = self._f02uv(f0)
# noise: for unvoiced should be similar to sine_amp
# std = self.sine_amp/3 -> max value ~ self.sine_amp
# . for voiced regions is self.noise_std
noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
noise = noise_amp * torch.randn_like(sine_waves)
# first: set the unvoiced part to 0 by uv
# then: additive noise
sine_waves = sine_waves * uv + noise
return sine_waves, uv, noise
class SourceModuleHnNSF(torch.nn.Module):
""" SourceModule for hn-nsf
SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0)
sampling_rate: sampling_rate in Hz
harmonic_num: number of harmonic above F0 (default: 0)
sine_amp: amplitude of sine source signal (default: 0.1)
add_noise_std: std of additive Gaussian noise (default: 0.003)
note that amplitude of noise in unvoiced is decided
by sine_amp
voiced_threshold: threhold to set U/V given F0 (default: 0)
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length 1)
uv (batchsize, length, 1)
"""
def __init__(self, sampling_rate, upsample_scale, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0):
super(SourceModuleHnNSF, self).__init__()
self.sine_amp = sine_amp
self.noise_std = add_noise_std
# to produce sine waveforms
self.l_sin_gen = SineGen(sampling_rate, harmonic_num,
sine_amp, add_noise_std, voiced_threshod)
# to merge source harmonics into a single excitation
self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
self.l_tanh = torch.nn.Tanh()
def forward(self, x):
"""
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length 1)
"""
# source for harmonic branch
with torch.no_grad():
sine_wavs, uv, _ = self.l_sin_gen(x.transpose(1, 2))
sine_wavs = sine_wavs.transpose(1, 2)
uv = uv.transpose(1, 2)
sine_merge = self.l_tanh(self.l_linear(sine_wavs))
# source for noise branch, in the same shape as uv
noise = torch.randn_like(uv) * self.sine_amp / 3
return sine_merge, noise, uv
class SineGen2(torch.nn.Module):
""" Definition of sine generator
SineGen(samp_rate, harmonic_num = 0,
sine_amp = 0.1, noise_std = 0.003,
voiced_threshold = 0,
flag_for_pulse=False)
samp_rate: sampling rate in Hz
harmonic_num: number of harmonic overtones (default 0)
sine_amp: amplitude of sine-wavefrom (default 0.1)
noise_std: std of Gaussian noise (default 0.003)
voiced_thoreshold: F0 threshold for U/V classification (default 0)
flag_for_pulse: this SinGen is used inside PulseGen (default False)
Note: when flag_for_pulse is True, the first time step of a voiced
segment is always sin(np.pi) or cos(0)
"""
def __init__(self, samp_rate, upsample_scale, harmonic_num=0,
sine_amp=0.1, noise_std=0.003,
voiced_threshold=0,
flag_for_pulse=False):
super(SineGen2, self).__init__()
self.sine_amp = sine_amp
self.noise_std = noise_std
self.harmonic_num = harmonic_num
self.dim = self.harmonic_num + 1
self.sampling_rate = samp_rate
self.voiced_threshold = voiced_threshold
self.flag_for_pulse = flag_for_pulse
self.upsample_scale = upsample_scale
def _f02uv(self, f0):
# generate uv signal
uv = (f0 > self.voiced_threshold).type(torch.float32)
return uv
def _f02sine(self, f0_values):
""" f0_values: (batchsize, length, dim)
where dim indicates fundamental tone and overtones
"""
# convert to F0 in rad. The interger part n can be ignored
# because 2 * np.pi * n doesn't affect phase
rad_values = (f0_values / self.sampling_rate) % 1
# initial phase noise (no noise for fundamental component)
rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], device=f0_values.device)
rand_ini[:, 0] = 0
rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini
# instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad)
if not self.flag_for_pulse:
rad_values = torch.nn.functional.interpolate(rad_values.transpose(1, 2),
scale_factor=1 / self.upsample_scale,
mode="linear").transpose(1, 2)
phase = torch.cumsum(rad_values, dim=1) * 2 * np.pi
phase = torch.nn.functional.interpolate(phase.transpose(1, 2) * self.upsample_scale,
scale_factor=self.upsample_scale, mode="linear").transpose(1, 2)
sines = torch.sin(phase)
else:
# If necessary, make sure that the first time step of every
# voiced segments is sin(pi) or cos(0)
# This is used for pulse-train generation
# identify the last time step in unvoiced segments
uv = self._f02uv(f0_values)
uv_1 = torch.roll(uv, shifts=-1, dims=1)
uv_1[:, -1, :] = 1
u_loc = (uv < 1) * (uv_1 > 0)
# get the instantanouse phase
tmp_cumsum = torch.cumsum(rad_values, dim=1)
# different batch needs to be processed differently
for idx in range(f0_values.shape[0]):
temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :]
temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :]
# stores the accumulation of i.phase within
# each voiced segments
tmp_cumsum[idx, :, :] = 0
tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum
# rad_values - tmp_cumsum: remove the accumulation of i.phase
# within the previous voiced segment.
i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1)
# get the sines
sines = torch.cos(i_phase * 2 * np.pi)
return sines
def forward(self, f0):
""" sine_tensor, uv = forward(f0)
input F0: tensor(batchsize=1, length, dim=1)
f0 for unvoiced steps should be 0
output sine_tensor: tensor(batchsize=1, length, dim)
output uv: tensor(batchsize=1, length, 1)
"""
# fundamental component
fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device))
# generate sine waveforms
sine_waves = self._f02sine(fn) * self.sine_amp
# generate uv signal
uv = self._f02uv(f0)
# noise: for unvoiced should be similar to sine_amp
# std = self.sine_amp/3 -> max value ~ self.sine_amp
# . for voiced regions is self.noise_std
noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3
noise = noise_amp * torch.randn_like(sine_waves)
# first: set the unvoiced part to 0 by uv
# then: additive noise
sine_waves = sine_waves * uv + noise
return sine_waves, uv, noise
class SourceModuleHnNSF2(torch.nn.Module):
""" SourceModule for hn-nsf
SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0)
sampling_rate: sampling_rate in Hz
harmonic_num: number of harmonic above F0 (default: 0)
sine_amp: amplitude of sine source signal (default: 0.1)
add_noise_std: std of additive Gaussian noise (default: 0.003)
note that amplitude of noise in unvoiced is decided
by sine_amp
voiced_threshold: threhold to set U/V given F0 (default: 0)
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length 1)
uv (batchsize, length, 1)
"""
def __init__(self, sampling_rate, upsample_scale, harmonic_num=0, sine_amp=0.1,
add_noise_std=0.003, voiced_threshod=0):
super(SourceModuleHnNSF2, self).__init__()
self.sine_amp = sine_amp
self.noise_std = add_noise_std
# to produce sine waveforms
self.l_sin_gen = SineGen2(sampling_rate, upsample_scale, harmonic_num,
sine_amp, add_noise_std, voiced_threshod)
# to merge source harmonics into a single excitation
self.l_linear = torch.nn.Linear(harmonic_num + 1, 1)
self.l_tanh = torch.nn.Tanh()
def forward(self, x):
"""
Sine_source, noise_source = SourceModuleHnNSF(F0_sampled)
F0_sampled (batchsize, length, 1)
Sine_source (batchsize, length, 1)
noise_source (batchsize, length 1)
"""
# source for harmonic branch
with torch.no_grad():
sine_wavs, uv, _ = self.l_sin_gen(x)
sine_merge = self.l_tanh(self.l_linear(sine_wavs))
# source for noise branch, in the same shape as uv
noise = torch.randn_like(uv) * self.sine_amp / 3
return sine_merge, noise, uv
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/qwen2.py | Python | # Copyright (c) 2025 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch import nn
from transformers import AutoConfig
from flashcosyvoice.config import CosyVoice2LLMConfig
from flashcosyvoice.modules.qwen2_components.layers import (
ParallelLMHead, Qwen2DecoderLayer, RMSNorm, VocabParallelEmbedding)
class Qwen2Model(nn.Module):
def __init__(
self,
config: CosyVoice2LLMConfig,
):
super().__init__()
self.vocab_size = config.vocab_size
self.embed_tokens = VocabParallelEmbedding(config.vocab_size, config.hidden_size)
self.layers = nn.ModuleList([Qwen2DecoderLayer(config) for _ in range(config.num_hidden_layers)])
self.norm = RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
def forward(
self,
input_ids: torch.Tensor,
positions: torch.Tensor,
) -> torch.Tensor:
hidden_states = self.embed_tokens(input_ids)
residual = None
for layer in self.layers:
hidden_states, residual = layer(
positions,
hidden_states,
residual,
)
hidden_states, _ = self.norm(hidden_states, residual)
return hidden_states
class Qwen2ForCausalLM(nn.Module):
packed_modules_mapping = {
"q_proj": ("qkv_proj", "q"),
"k_proj": ("qkv_proj", "k"),
"v_proj": ("qkv_proj", "v"),
"gate_proj": ("gate_up_proj", 0),
"up_proj": ("gate_up_proj", 1),
}
def __init__(
self,
config: CosyVoice2LLMConfig | AutoConfig
):
super().__init__()
self.model = Qwen2Model(config)
if hasattr(config, "speech_vocab_size"):
self.lm_head = ParallelLMHead(config.speech_vocab_size, config.hidden_size, bias=getattr(config, "lm_head_bias", True))
self.model_type = "speech_llm"
else:
self.lm_head = ParallelLMHead(config.vocab_size, config.hidden_size, bias=False)
self.model_type = "text_llm"
self.tie_word_embeddings = config.tie_word_embeddings
if self.tie_word_embeddings:
if self.model_type == "speech_llm":
assert config.vocab_size == config.speech_vocab_size, "vocab_size and speech_vocab_size must be the same when tie_word_embeddings is True"
self.lm_head.weight.data = self.model.embed_tokens.weight.data
def forward(
self,
input_ids: torch.Tensor,
positions: torch.Tensor,
) -> torch.Tensor:
hidden_states = self.model(input_ids, positions)
return hidden_states
def compute_logits(
self,
hidden_states: torch.Tensor,
) -> torch.Tensor:
logits = self.lm_head(hidden_states)
return logits
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/qwen2_components/layers.py | Python | from functools import lru_cache
import torch
import torch.distributed as dist
import torch.nn as nn
import torch.nn.functional as F
import triton
import triton.language as tl
from flash_attn import flash_attn_varlen_func, flash_attn_with_kvcache
from flashcosyvoice.config import CosyVoice2LLMConfig
from flashcosyvoice.utils.context import get_context
class SiluAndMul(nn.Module):
def __init__(self):
super().__init__()
@torch.compile
def forward(self, x: torch.Tensor) -> torch.Tensor:
x, y = x.chunk(2, -1)
return F.silu(x) * y
class RMSNorm(nn.Module):
def __init__(
self,
hidden_size: int,
eps: float = 1e-6,
) -> None:
super().__init__()
self.hidden_size = hidden_size
self.eps = eps
self.weight = nn.Parameter(torch.ones(hidden_size))
@torch.compile
def rms_forward(
self,
x: torch.Tensor,
) -> torch.Tensor:
orig_dtype = x.dtype
x = x.to(torch.float32)
var = x.pow(2).mean(dim=-1, keepdim=True)
x.mul_(torch.rsqrt(var + self.eps))
x = x.to(orig_dtype).mul_(self.weight)
return x
@torch.compile
def add_rms_forward(
self,
x: torch.Tensor,
residual: torch.Tensor,
) -> torch.Tensor | tuple[torch.Tensor, torch.Tensor]:
orig_dtype = x.dtype
x = x.to(torch.float32).add_(residual.to(torch.float32))
residual = x.to(orig_dtype)
var = x.pow(2).mean(dim=-1, keepdim=True)
x.mul_(torch.rsqrt(var + self.eps))
x = x.to(orig_dtype).mul_(self.weight)
return x, residual
def forward(
self,
x: torch.Tensor,
residual: torch.Tensor | None = None,
) -> torch.Tensor | tuple[torch.Tensor, torch.Tensor]:
if residual is None:
return self.rms_forward(x)
else:
return self.add_rms_forward(x, residual)
@triton.jit
def store_kvcache_kernel(
key_ptr,
key_stride,
value_ptr,
value_stride,
k_cache_ptr,
v_cache_ptr,
slot_mapping_ptr,
D: tl.constexpr,
):
idx = tl.program_id(0)
key_offsets = idx * key_stride + tl.arange(0, D)
value_offsets = idx * value_stride + tl.arange(0, D)
key = tl.load(key_ptr + key_offsets)
value = tl.load(value_ptr + value_offsets)
slot = tl.load(slot_mapping_ptr + idx)
cache_offsets = slot * D + tl.arange(0, D)
tl.store(k_cache_ptr + cache_offsets, key)
tl.store(v_cache_ptr + cache_offsets, value)
def store_kvcache(key: torch.Tensor, value: torch.Tensor, k_cache: torch.Tensor, v_cache: torch.Tensor, slot_mapping: torch.Tensor):
N, num_heads, head_dim = key.shape
D = num_heads * head_dim
assert key.stride(-1) == 1 and value.stride(-1) == 1
assert key.stride(1) == head_dim and value.stride(1) == head_dim
assert k_cache.stride(1) == D and v_cache.stride(1) == D
assert slot_mapping.numel() == N
store_kvcache_kernel[(N,)](key, key.stride(0), value, value.stride(0), k_cache, v_cache, slot_mapping, D)
class Attention(nn.Module):
def __init__(
self,
num_heads,
head_dim,
scale,
num_kv_heads,
):
super().__init__()
self.num_heads = num_heads
self.head_dim = head_dim
self.scale = scale
self.num_kv_heads = num_kv_heads
self.k_cache = self.v_cache = torch.tensor([])
def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor):
o: torch.Tensor
q = q.view(-1, self.num_heads, self.head_dim)
k = k.view(-1, self.num_kv_heads, self.head_dim)
v = v.view(-1, self.num_kv_heads, self.head_dim)
context = get_context()
k_cache, v_cache = self.k_cache, self.v_cache
if k_cache.numel() and v_cache.numel():
store_kvcache(k, v, k_cache, v_cache, context.slot_mapping)
if context.is_prefill:
if context.block_tables is not None: # prefix cache
k, v = k_cache, v_cache
o = flash_attn_varlen_func(q, k, v,
max_seqlen_q=context.max_seqlen_q, cu_seqlens_q=context.cu_seqlens_q,
max_seqlen_k=context.max_seqlen_k, cu_seqlens_k=context.cu_seqlens_k,
softmax_scale=self.scale, causal=True, block_table=context.block_tables)
else: # decode
o = flash_attn_with_kvcache(q.unsqueeze(1), k_cache, v_cache,
cache_seqlens=context.context_lens, block_table=context.block_tables,
softmax_scale=self.scale, causal=True)
o = o.view(-1, self.num_heads * self.head_dim)
return o
class VocabParallelEmbedding(nn.Module):
def __init__(
self,
num_embeddings: int,
embedding_dim: int,
):
super().__init__()
# TODO(xcsong): support tp > 1
self.tp_rank = 0 # dist.get_rank()
self.tp_size = 1 # dist.get_world_size()
assert num_embeddings % self.tp_size == 0
self.num_embeddings = num_embeddings
self.num_embeddings_per_partition = self.num_embeddings // self.tp_size
self.vocab_start_idx = self.num_embeddings_per_partition * self.tp_rank
self.vocab_end_idx = self.vocab_start_idx + self.num_embeddings_per_partition
self.embedding_dim = embedding_dim
self.weight = nn.Parameter(torch.empty(self.num_embeddings_per_partition, embedding_dim))
self.weight.weight_loader = self.weight_loader
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor):
param_data = param.data
shard_size = param_data.size(0)
start_idx = self.tp_rank * shard_size
loaded_weight = loaded_weight.narrow(0, start_idx, shard_size)
assert param_data.size() == loaded_weight.size()
param_data.copy_(loaded_weight)
def forward(self, x: torch.Tensor):
if self.tp_size > 1:
mask = (x >= self.vocab_start_idx) & (x < self.vocab_end_idx)
x = mask * (x - self.vocab_start_idx)
y = F.embedding(x, self.weight)
if self.tp_size > 1:
y = mask.unsqueeze(1) * y
dist.all_reduce(y)
return y
class ParallelLMHead(VocabParallelEmbedding):
def __init__(
self,
num_embeddings: int,
embedding_dim: int,
bias: bool = False,
):
super().__init__(num_embeddings, embedding_dim)
if bias:
self.bias = nn.Parameter(torch.empty(self.num_embeddings_per_partition))
self.bias.weight_loader = self.weight_loader
else:
self.register_parameter("bias", None)
def forward(self, x: torch.Tensor):
context = get_context()
if context.is_prefill:
last_indices = context.cu_seqlens_q[1:] - 1
x = x[last_indices].contiguous()
logits = F.linear(x, self.weight, self.bias)
if self.tp_size > 1:
all_logits = [torch.empty_like(logits) for _ in range(self.tp_size)] if self.tp_rank == 0 else None
dist.gather(logits, all_logits, 0)
logits = torch.cat(all_logits, -1) if self.tp_rank == 0 else None
return logits
def divide(numerator, denominator):
assert numerator % denominator == 0
return numerator // denominator
class LinearBase(nn.Module):
def __init__(
self,
input_size: int,
output_size: int,
tp_dim: int | None = None,
):
super().__init__()
self.input_size = input_size
self.output_size = output_size
self.tp_dim = tp_dim
# TODO(xcsong): support tp > 1
self.tp_rank = 0 # dist.get_rank()
self.tp_size = 1 # dist.get_world_size()
def forward(self, x: torch.Tensor) -> torch.Tensor:
raise NotImplementedError
class ReplicatedLinear(LinearBase):
def __init__(
self,
input_size: int,
output_size: int,
bias: bool = False,
):
super().__init__(input_size, output_size)
self.weight = nn.Parameter(torch.empty(self.output_size, self.input_size))
self.weight.weight_loader = self.weight_loader
if bias:
self.bias = nn.Parameter(torch.empty(self.output_size))
self.bias.weight_loader = self.weight_loader
else:
self.register_parameter("bias", None)
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor):
assert param.size() == loaded_weight.size()
param.data.copy_(loaded_weight)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return F.linear(x, self.weight, self.bias)
class ColumnParallelLinear(LinearBase):
def __init__(
self,
input_size: int,
output_size: int,
bias: bool = False,
):
super().__init__(input_size, output_size, 0)
self.input_size_per_partition = input_size
self.output_size_per_partition = divide(output_size, self.tp_size)
self.output_partition_sizes = [self.output_size_per_partition]
if hasattr(self, "output_sizes"):
self.output_partition_sizes = [
divide(output_size, self.tp_size)
for output_size in self.output_sizes
]
self.weight = nn.Parameter(torch.empty(self.output_size_per_partition, self.input_size))
self.weight.weight_loader = self.weight_loader
if bias:
self.bias = nn.Parameter(torch.empty(self.output_size_per_partition))
self.bias.weight_loader = self.weight_loader
else:
self.register_parameter("bias", None)
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor):
param_data = param.data
shard_size = param_data.size(self.tp_dim)
start_idx = self.tp_rank * shard_size
loaded_weight = loaded_weight.narrow(self.tp_dim, start_idx, shard_size)
assert param_data.size() == loaded_weight.size()
param_data.copy_(loaded_weight)
def forward(self, x: torch.Tensor) -> torch.Tensor:
return F.linear(x, self.weight, self.bias)
class MergedColumnParallelLinear(ColumnParallelLinear):
def __init__(
self,
input_size: int,
output_sizes: list[int],
bias: bool = False,
):
self.output_sizes = output_sizes
super().__init__(input_size, sum(output_sizes), bias=bias)
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor, loaded_shard_id: int):
param_data = param.data
shard_offset = sum(self.output_sizes[:loaded_shard_id]) // self.tp_size
shard_size = self.output_sizes[loaded_shard_id] // self.tp_size
param_data = param_data.narrow(self.tp_dim, shard_offset, shard_size)
loaded_weight = loaded_weight.chunk(self.tp_size, self.tp_dim)[self.tp_rank]
assert param_data.size() == loaded_weight.size()
param_data.copy_(loaded_weight)
class QKVParallelLinear(ColumnParallelLinear):
def __init__(
self,
hidden_size: int,
head_size: int,
total_num_heads: int,
total_num_kv_heads: int | None = None,
bias: bool = False,
):
self.hidden_size = hidden_size
self.head_size = head_size
self.total_num_heads = total_num_heads
if total_num_kv_heads is None:
total_num_kv_heads = total_num_heads
self.total_num_kv_heads = total_num_kv_heads
# TODO(xcsong): support tp > 1
tp_size = 1 # dist.get_world_size()
self.num_heads = divide(self.total_num_heads, tp_size)
self.num_kv_heads = divide(self.total_num_kv_heads, tp_size)
input_size = self.hidden_size
output_size = (self.num_heads + 2 * self.num_kv_heads) * tp_size * self.head_size
self.output_sizes = [
self.num_heads * self.head_size * tp_size, # q_proj
self.num_kv_heads * self.head_size * tp_size, # k_proj
self.num_kv_heads * self.head_size * tp_size, # v_proj
]
super().__init__(input_size, output_size, bias)
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor, loaded_shard_id: str):
param_data = param.data
assert loaded_shard_id in ["q", "k", "v"]
if loaded_shard_id == "q":
shard_size = self.num_heads * self.head_size
shard_offset = 0
elif loaded_shard_id == "k":
shard_size = self.num_kv_heads * self.head_size
shard_offset = self.num_heads * self.head_size
else:
shard_size = self.num_kv_heads * self.head_size
shard_offset = self.num_heads * self.head_size + self.num_kv_heads * self.head_size
param_data = param_data.narrow(self.tp_dim, shard_offset, shard_size)
loaded_weight = loaded_weight.chunk(self.tp_size, self.tp_dim)[self.tp_rank]
assert param_data.size() == loaded_weight.size()
param_data.copy_(loaded_weight)
class RowParallelLinear(LinearBase):
def __init__(
self,
input_size: int,
output_size: int,
bias: bool = False,
):
super().__init__(input_size, output_size, 1)
self.input_size_per_partition = divide(input_size, self.tp_size)
self.output_size_per_partition = output_size
self.output_partition_sizes = [output_size]
self.weight = nn.Parameter(torch.empty(self.output_size, self.input_size_per_partition))
self.weight.weight_loader = self.weight_loader
if bias:
self.bias = nn.Parameter(torch.empty(self.output_size))
self.bias.weight_loader = self.weight_loader
else:
self.register_parameter("bias", None)
def weight_loader(self, param: nn.Parameter, loaded_weight: torch.Tensor):
param_data = param.data
shard_size = param_data.size(self.tp_dim)
start_idx = self.tp_rank * shard_size
loaded_weight = loaded_weight.narrow(self.tp_dim, start_idx, shard_size)
assert param_data.size() == loaded_weight.size()
param_data.copy_(loaded_weight)
def forward(self, x: torch.Tensor) -> torch.Tensor:
y = F.linear(x, self.weight, self.bias if self.tp_rank == 0 else None)
if self.tp_size > 1:
dist.all_reduce(y)
return y
def apply_rotary_emb(
x: torch.Tensor,
cos: torch.Tensor,
sin: torch.Tensor,
) -> torch.Tensor:
cos = cos.unsqueeze(-2)
sin = sin.unsqueeze(-2)
x1, x2 = torch.chunk(x.to(torch.float32), 2, dim=-1)
y1 = x1 * cos - x2 * sin
y2 = x2 * cos + x1 * sin
return torch.cat((y1, y2), dim=-1).to(x.dtype)
class RotaryEmbedding(nn.Module):
def __init__(
self,
head_size: int,
rotary_dim: int,
max_position_embeddings: int,
base: float,
) -> None:
super().__init__()
self.head_size = head_size
assert rotary_dim == head_size
inv_freq = 1.0 / (base**(torch.arange(0, rotary_dim, 2, dtype=torch.float) / rotary_dim))
t = torch.arange(max_position_embeddings, dtype=torch.float)
freqs = torch.einsum("i,j -> ij", t, inv_freq)
cos = freqs.cos()
sin = freqs.sin()
cache = torch.cat((cos, sin), dim=-1)
self.register_buffer("cos_sin_cache", cache, persistent=False)
@torch.compile
def forward(
self,
positions: torch.Tensor,
query: torch.Tensor,
key: torch.Tensor,
) -> tuple[torch.Tensor, torch.Tensor]:
positions = positions.flatten()
num_tokens = positions.shape[0]
cos_sin = self.cos_sin_cache[positions]
cos, sin = cos_sin.chunk(2, dim=-1)
query_shape = query.shape
query = query.view(num_tokens, -1, self.head_size)
query = apply_rotary_emb(query, cos, sin).view(query_shape)
key_shape = key.shape
key = key.view(num_tokens, -1, self.head_size)
key = apply_rotary_emb(key, cos, sin).view(key_shape)
return query, key
@lru_cache(1)
def get_rope(
head_size: int,
rotary_dim: int,
max_position: int,
base: float,
rope_scaling: dict | None = None,
):
assert rope_scaling is None
rotary_emb = RotaryEmbedding(head_size, rotary_dim, max_position, base)
return rotary_emb
class Qwen2Attention(nn.Module):
def __init__(
self,
hidden_size: int,
num_heads: int,
num_kv_heads: int,
max_position: int = 4096 * 32,
head_dim: int | None = None,
rms_norm_eps: float = 1e-06,
qkv_bias: bool = True,
rope_theta: float = 1000000.0,
rope_scaling: tuple | None = None,
) -> None:
super().__init__()
self.hidden_size = hidden_size
# TODO(xcsong): support tp > 1
tp_size = 1 # dist.get_world_size()
self.total_num_heads = num_heads
assert self.total_num_heads % tp_size == 0
self.num_heads = self.total_num_heads // tp_size
self.total_num_kv_heads = num_kv_heads
assert self.total_num_kv_heads % tp_size == 0
self.num_kv_heads = max(1, self.total_num_kv_heads // tp_size)
self.head_dim = head_dim or hidden_size // self.total_num_heads
self.q_size = self.num_heads * self.head_dim
self.kv_size = self.num_kv_heads * self.head_dim
self.scaling = self.head_dim**-0.5
self.rope_theta = rope_theta
self.qkv_proj = QKVParallelLinear(
hidden_size,
self.head_dim,
self.total_num_heads,
self.total_num_kv_heads,
bias=qkv_bias,
)
self.o_proj = RowParallelLinear(
self.total_num_heads * self.head_dim,
hidden_size,
bias=False,
)
self.rotary_emb = get_rope(
self.head_dim,
rotary_dim=self.head_dim,
max_position=max_position,
base=self.rope_theta,
rope_scaling=rope_scaling,
)
self.attn = Attention(self.num_heads,
self.head_dim,
self.scaling,
num_kv_heads=self.num_kv_heads)
def forward(
self,
positions: torch.Tensor,
hidden_states: torch.Tensor,
) -> torch.Tensor:
qkv = self.qkv_proj(hidden_states)
q, k, v = qkv.split([self.q_size, self.kv_size, self.kv_size], dim=-1)
q, k = self.rotary_emb(positions, q, k)
o = self.attn(q, k, v)
output = self.o_proj(o)
return output
class Qwen2MLP(nn.Module):
def __init__(
self,
hidden_size: int,
intermediate_size: int,
hidden_act: str,
) -> None:
super().__init__()
self.gate_up_proj = MergedColumnParallelLinear(
hidden_size,
[intermediate_size] * 2,
bias=False,
)
self.down_proj = RowParallelLinear(
intermediate_size,
hidden_size,
bias=False,
)
assert hidden_act == "silu"
self.act_fn = SiluAndMul()
def forward(self, x):
gate_up = self.gate_up_proj(x)
x = self.act_fn(gate_up)
x = self.down_proj(x)
return x
class Qwen2DecoderLayer(nn.Module):
def __init__(
self,
config: CosyVoice2LLMConfig,
) -> None:
super().__init__()
self.hidden_size = config.hidden_size
self.self_attn = Qwen2Attention(
hidden_size=self.hidden_size,
num_heads=config.num_attention_heads,
num_kv_heads=config.num_key_value_heads,
max_position=config.max_position_embeddings,
rms_norm_eps=config.rms_norm_eps,
qkv_bias=getattr(config, "qkv_bias", True),
head_dim=getattr(config, "head_dim", None),
rope_theta=getattr(config, "rope_theta", 1000000.0),
rope_scaling=getattr(config, "rope_scaling", None),
)
self.mlp = Qwen2MLP(
hidden_size=config.hidden_size,
intermediate_size=config.intermediate_size,
hidden_act=config.hidden_act,
)
self.input_layernorm = RMSNorm(config.hidden_size,
eps=config.rms_norm_eps)
self.post_attention_layernorm = RMSNorm(config.hidden_size,
eps=config.rms_norm_eps)
def forward(
self,
positions: torch.Tensor,
hidden_states: torch.Tensor,
residual: torch.Tensor | None,
) -> tuple[torch.Tensor, torch.Tensor]:
if residual is None:
residual = hidden_states
hidden_states = self.input_layernorm(hidden_states)
else:
hidden_states, residual = self.input_layernorm(hidden_states, residual)
hidden_states = self.self_attn(
positions=positions,
hidden_states=hidden_states,
)
hidden_states, residual = self.post_attention_layernorm(hidden_states, residual)
hidden_states = self.mlp(hidden_states)
return hidden_states, residual
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/modules/sampler.py | Python | import torch
from torch import nn
class Sampler(nn.Module):
"""
Optimized sampler implementation using vectorized operations instead of loops, significantly improving performance
Performance optimizations:
1. Using batch processing instead of sequence loops, reducing Python loop overhead
2. Using PyTorch's vectorized operations (like torch.sort, torch.gather) for parallel computation
3. Using mask operations to apply top-k filtering at once, avoiding per-sequence processing
"""
def __init__(self):
super().__init__()
def forward(self, logits: torch.Tensor, temperatures: torch.Tensor, top_k: int = None):
"""
Perform sampling operation using vectorized method for top-k filtering
Args:
logits: Logits tensor with shape [batch_size, vocab_size]
temperatures: Temperature parameters with shape [batch_size]
top_k: Top-k value for filtering (uniform across all sequences)
Returns:
Sampled token IDs
"""
logits = logits.to(torch.float)
greedy_tokens = logits.argmax(dim=-1) # Greedy decoding result, used when temperature=0
logits.div_(temperatures.unsqueeze(dim=1)) # Apply temperature scaling
# Apply uniform top-k filtering if top_k is provided
if top_k is not None and top_k > 0:
vocab_size = logits.size(-1)
# Create a mask to store which positions should be kept
mask = torch.zeros_like(logits, dtype=torch.bool)
# Batch sorting for all sequences at once
sorted_logits, sorted_indices = torch.sort(logits, dim=-1, descending=True)
# Get threshold for each sequence (the k-th largest value)
k_value = min(top_k, vocab_size) # Ensure k doesn't exceed vocab size
thresholds = sorted_logits[:, k_value-1:k_value] # Shape [batch_size, 1]
thresholds = thresholds.expand(-1, vocab_size) # Expand to match logits shape
# Create mask: only keep logits greater than or equal to threshold
mask = logits >= thresholds
# Apply mask: set logits not in top-k to negative infinity
logits = torch.where(mask, logits, torch.tensor(float('-inf'), device=logits.device))
probs = torch.softmax(logits, dim=-1, dtype=torch.float)
# logprobs = torch.log_softmax(logits, dim=-1, dtype=torch.float)
sample_tokens = probs.div_(torch.empty_like(probs).exponential_(1)).argmax(dim=-1)
return torch.where(temperatures == 0, greedy_tokens, sample_tokens)
class RasSampler(nn.Module):
"""
Optimized Repetition Aware Sampling implementation
Performance optimizations:
1. Using vectorized nucleus sampling instead of loop implementation, improving sampling efficiency
2. Using tensor operations to calculate repetition rate, reducing Python loop overhead
3. Optimizing EOS handling logic, reducing unnecessary resampling
4. Using PyTorch's vectorized operations for parallel computation
5. Batch processing for all sequences, dramatically improving throughput
6. Robust handling for sequences of any length, including empty sequences
"""
def __init__(self):
super().__init__()
def forward(self, logits: torch.Tensor, decoded_tokens_list: list,
win_size: int = 10, tau_r: float = 0.1,
top_p: float = 0.8, top_k: int = 25,
eos_token: int = 6561, min_tokens: list[int] = None):
"""
Execute repetition-aware sampling using optimized vectorized operations with batch processing
Args:
logits: Input logits with shape [batch_size, vocab_size]
decoded_tokens_list: List of decoded tokens, each element is a token list for a batch
win_size: Window size for repetition detection (uniform across all batch items)
tau_r: Repetition threshold (uniform across all batch items)
top_p: Nucleus sampling probability threshold (uniform across all batch items)
top_k: Nucleus sampling top-k threshold (uniform across all batch items)
eos_token: End of sequence token ID (uniform across all batch items)
min_tokens: List of minimum tokens to generate before allowing EOS, one per batch item
Returns:
Selected token IDs
"""
batch_size = logits.size(0)
device = logits.device
result = torch.zeros(batch_size, dtype=torch.long, device=device)
# Set default values if not provided
if min_tokens is None:
min_tokens = [2] * batch_size
# Ensure min_tokens list has the correct length
assert len(min_tokens) == batch_size, f"min_tokens length {len(min_tokens)} != batch_size {batch_size}"
# Force continue decode first token
for i in range(batch_size):
if i < len(decoded_tokens_list) and len(decoded_tokens_list[i]) == 0:
logits[i, eos_token] = -float('inf')
# 1. First, perform nucleus sampling for all sequences
probs = torch.softmax(logits, dim=-1)
# Use vectorized nucleus sampling for all sequences
# This can be done in batch since top_p and top_k are uniform
sorted_probs, sorted_indices = probs.sort(dim=-1, descending=True)
cumulative_probs = torch.cumsum(sorted_probs, dim=-1)
# Create masks for top-p and top-k filtering
top_p_mask = cumulative_probs <= top_p
# Create top-k mask (first top_k positions are True)
top_k_mask = torch.zeros_like(top_p_mask)
top_k_mask[:, :top_k] = True
# Combine masks
mask = top_p_mask & top_k_mask
# Ensure at least one token is selected per sequence
first_token_mask = torch.zeros_like(mask)
first_token_mask[:, 0] = True
mask = mask | first_token_mask
# Sample from the filtered distribution
sample_probs = torch.where(mask, sorted_probs, torch.zeros_like(sorted_probs))
sample_probs = sample_probs / sample_probs.sum(dim=-1, keepdim=True)
# Sample indices from the filtered distribution
sampled_indices = torch.multinomial(sample_probs, 1).squeeze(-1)
top_ids = torch.gather(sorted_indices, -1, sampled_indices.unsqueeze(-1)).squeeze(-1)
# 2. Check for repetitions and apply random sampling if needed
# Extract recent tokens for each sequence, handling empty or short sequences
recent_tokens_list = []
for i in range(batch_size):
# Handle index out of range or empty tokens
if i < len(decoded_tokens_list):
tokens = decoded_tokens_list[i]
if len(tokens) > 0:
start_idx = max(0, len(tokens) - win_size)
recent_tokens_list.append(tokens[start_idx:])
else:
recent_tokens_list.append([]) # Empty list for empty tokens
else:
recent_tokens_list.append([]) # Empty list for missing batch items
# Check if we have any tokens to process for repetition detection
if any(len(tokens) > 0 for tokens in recent_tokens_list):
# Convert to padded tensor for batch processing
max_recent_len = max(len(tokens) for tokens in recent_tokens_list)
if max_recent_len > 0: # Only proceed if we have tokens
recent_tokens_tensor = torch.zeros((batch_size, max_recent_len), dtype=torch.long, device=device) - 1
for i, tokens in enumerate(recent_tokens_list):
if len(tokens) > 0:
recent_tokens_tensor[i, -len(tokens):] = torch.tensor(tokens, device=device)
# Create a mask for valid positions and to avoid division by zero
valid_positions_mask = torch.zeros_like(recent_tokens_tensor, dtype=torch.bool)
for i, tokens in enumerate(recent_tokens_list):
if len(tokens) > 0:
valid_positions_mask[i, -len(tokens):] = True
# Check repetition rates
repetition_counts = torch.zeros(batch_size, device=device)
for i in range(batch_size):
if len(recent_tokens_list[i]) > 0:
repetition_counts[i] = (recent_tokens_tensor[i] == top_ids[i]).sum()
# Calculate repetition rates, avoiding division by zero
recent_lengths = torch.tensor([max(1, len(tokens)) for tokens in recent_tokens_list], device=device)
repetition_rates = repetition_counts / recent_lengths
# Identify sequences needing random sampling
need_random = repetition_rates >= tau_r
# Apply random sampling where needed
if need_random.any():
random_indices = torch.multinomial(probs[need_random], 1).squeeze(-1)
top_ids[need_random] = random_indices
# 3. Handle EOS tokens
# Create mask for sequences that should ignore EOS tokens
ignore_eos_mask = torch.zeros(batch_size, dtype=torch.bool, device=device)
for i in range(batch_size):
if i < len(decoded_tokens_list):
ignore_eos_mask[i] = len(decoded_tokens_list[i]) < min_tokens[i]
else:
ignore_eos_mask[i] = True # Default to ignoring EOS for missing sequences
is_eos_mask = top_ids == eos_token
need_resample = ignore_eos_mask & is_eos_mask
# Resample for sequences that need it
if need_resample.any():
max_trials = 100
for attempt in range(max_trials):
# Break if no more resampling needed
if not need_resample.any():
break
# Sample new tokens for sequences that need resampling
new_samples = torch.multinomial(probs[need_resample], 1).squeeze(-1)
# Update top_ids with new samples
top_ids[need_resample] = new_samples
# Update which sequences still need resampling
is_eos_mask = top_ids == eos_token
need_resample = ignore_eos_mask & is_eos_mask
# If still have EOS tokens that should be ignored, force them to be non-EOS
if need_resample.any():
# Force to a non-EOS token (e.g., the second most likely token)
for i in range(batch_size):
if need_resample[i]:
# Get second most likely token (or first if only one token)
second_best_idx = 1 if sorted_indices.size(1) > 1 else 0
top_ids[i] = sorted_indices[i, second_best_idx]
result = top_ids
return result
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/utils/audio.py | Python | import numpy as np
import torch
from librosa.filters import mel as librosa_mel_fn
from scipy.io.wavfile import read
MAX_WAV_VALUE = 32768.0
def load_wav(full_path):
sampling_rate, data = read(full_path)
return data, sampling_rate
def dynamic_range_compression(x, C=1, clip_val=1e-5):
return np.log(np.clip(x, a_min=clip_val, a_max=None) * C)
def dynamic_range_decompression(x, C=1):
return np.exp(x) / C
def dynamic_range_compression_torch(x, C=1, clip_val=1e-5):
return torch.log(torch.clamp(x, min=clip_val) * C)
def dynamic_range_decompression_torch(x, C=1):
return torch.exp(x) / C
def spectral_normalize_torch(magnitudes):
output = dynamic_range_compression_torch(magnitudes)
return output
def spectral_de_normalize_torch(magnitudes):
output = dynamic_range_decompression_torch(magnitudes)
return output
mel_basis = {}
hann_window = {}
def mel_spectrogram(y, n_fft=1920, num_mels=80, sampling_rate=24000, hop_size=480,
win_size=1920, fmin=0, fmax=8000, center=False):
global mel_basis, hann_window # pylint: disable=global-statement
if f"{str(fmax)}_{str(y.device)}" not in mel_basis:
mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax)
mel_basis[str(fmax) + "_" + str(y.device)] = torch.from_numpy(mel).float().to(y.device)
hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device)
y = torch.nn.functional.pad(
y.unsqueeze(1), (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), mode="reflect"
)
y = y.squeeze(1)
spec = torch.view_as_real(
torch.stft(
y,
n_fft,
hop_length=hop_size,
win_length=win_size,
window=hann_window[str(y.device)],
center=center,
pad_mode="reflect",
normalized=False,
onesided=True,
return_complex=True,
)
)
spec = torch.sqrt(spec.pow(2).sum(-1) + (1e-9))
spec = torch.matmul(mel_basis[str(fmax) + "_" + str(y.device)], spec)
spec = spectral_normalize_torch(spec)
return spec
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/utils/context.py | Python | from dataclasses import dataclass
import torch
@dataclass
class Context:
is_prefill: bool = False
cu_seqlens_q: torch.Tensor | None = None
cu_seqlens_k: torch.Tensor | None = None
max_seqlen_q: int = 0
max_seqlen_k: int = 0
slot_mapping: torch.Tensor | None = None
context_lens: torch.Tensor | None = None
block_tables: torch.Tensor | None = None
_CONTEXT = Context()
def get_context():
return _CONTEXT
def set_context(is_prefill, cu_seqlens_q=None, cu_seqlens_k=None, max_seqlen_q=0, max_seqlen_k=0, slot_mapping=None, context_lens=None, block_tables=None):
global _CONTEXT
_CONTEXT = Context(is_prefill, cu_seqlens_q, cu_seqlens_k, max_seqlen_q, max_seqlen_k, slot_mapping, context_lens, block_tables)
def reset_context():
global _CONTEXT
_CONTEXT = Context()
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/utils/loader.py | Python | import os
from glob import glob
import torch
from safetensors import safe_open
from torch import nn
from flashcosyvoice.config import CosyVoice2LLMConfig
def default_weight_loader(param: nn.Parameter, loaded_weight: torch.Tensor):
param.data.copy_(loaded_weight)
def load_text_llm(model: nn.Module, path: str):
packed_modules_mapping = getattr(model, "packed_modules_mapping", {})
for file in glob(os.path.join(path, "*.safetensors")):
with safe_open(file, "pt", "cpu") as f:
for weight_name in f.keys():
for k in packed_modules_mapping:
if k in weight_name:
v, shard_id = packed_modules_mapping[k]
param_name = weight_name.replace(k, v)
param = model.get_parameter(param_name)
weight_loader = param.weight_loader
weight_loader(param, f.get_tensor(weight_name), shard_id)
break
else:
param = model.get_parameter(weight_name)
weight_loader = getattr(param, "weight_loader", default_weight_loader)
weight_loader(param, f.get_tensor(weight_name))
def load_speech_llm(model: nn.Module, path: str, hf_config: CosyVoice2LLMConfig):
packed_modules_mapping = getattr(model, "packed_modules_mapping", {})
# NOTE(xcsong): 1. load speech embedding + sos/taskid embedding + lm head
embedding_weights = {}
tmp_weights = torch.load(f"{path}/llm.pt", map_location="cpu", weights_only=True)
missed, missed_names = 0, []
for k, v in tmp_weights.items():
if k == "speech_embedding.weight": # torch.Size([6564, 896])
speech_embedding_size = hf_config.speech_vocab_size # 6562
# NOTE(xcsong): padding to 6592 for vllm tensor parallel
if speech_embedding_size != v.shape[0]: # [6564, 896] -> [6562, 896]
assert speech_embedding_size <= v.shape[0], f"speech_embedding_size should be less than or equal to {v.shape[0]}, but got {speech_embedding_size}"
v = v[:speech_embedding_size, :]
embedding_weights["speech_embedding.weight"] = v
elif k == "llm_embedding.weight": # torch.Size([2, 896]), eos and task_id
assert v.shape[0] == 2, f"llm_embedding.weight should be of shape [2, 896], but got {v.shape}"
embedding_weights["llm_embedding.weight"] = v
elif k == "llm.model.model.embed_tokens.weight": # torch.Size([151936, 896])
embedding_weights["model.embed_tokens.weight"] = v
elif k == "llm_decoder.weight": # torch.Size([6564, 896])
lm_head_size = hf_config.speech_vocab_size # 6562
if lm_head_size != v.shape[0]: # [6564, 896] -> [6562, 896]
assert lm_head_size <= v.shape[0], f"lm_head_size should be less than or equal to {v.shape[0]}, but got {lm_head_size}"
v = v[:lm_head_size, :]
param = model.get_parameter("lm_head.weight")
weight_loader = getattr(param, "weight_loader", default_weight_loader)
weight_loader(param, v)
elif k == "llm_decoder.bias": # torch.Size([6564])
lm_head_size = hf_config.speech_vocab_size # 6562
if lm_head_size != v.shape[0]: # [6564] -> [6562]
assert lm_head_size <= v.shape[0], f"lm_head_size should be less than or equal to {v.shape[0]}, but got {lm_head_size}"
v = v[:lm_head_size]
param = model.get_parameter("lm_head.bias")
weight_loader = getattr(param, "weight_loader", default_weight_loader)
weight_loader(param, v)
elif "llm.model." in k:
weight_name = k.replace("llm.model.", "")
for kk in packed_modules_mapping:
if kk in weight_name:
vv, shard_id = packed_modules_mapping[kk]
param_name = weight_name.replace(kk, vv)
try:
param = model.get_parameter(param_name)
weight_loader = param.weight_loader
weight_loader(param, v, shard_id)
break
except Exception as e:
print(e)
print(f"skip parameter (1): {weight_name}")
continue
else:
try:
param = model.get_parameter(weight_name)
weight_loader = getattr(param, "weight_loader", default_weight_loader)
weight_loader(param, v)
except Exception as e:
print(e)
print(f"skip parameter (2): {weight_name}")
continue
else:
missed += 1
missed_names.append(weight_name)
continue
print(f"missed {missed} parameters: {missed_names}")
# NOTE(xcsong): 2. merge text embedding, sos/taskid embedding, and speech embedding
text_embedding_weight = embedding_weights["model.embed_tokens.weight"].cpu() # [151936, 896]
sos_taskid_embedding_weight = embedding_weights["llm_embedding.weight"].cpu() # [2, 896]
speech_embedding_weight = embedding_weights["speech_embedding.weight"].cpu() # [6562, 896]
final_embedding_weight = torch.cat([speech_embedding_weight, sos_taskid_embedding_weight, text_embedding_weight], dim=0) # [158500, 896]
param = model.get_parameter("model.embed_tokens.weight")
weight_loader = getattr(param, "weight_loader", default_weight_loader)
weight_loader(param, final_embedding_weight)
def load_model(model: nn.Module, path: str, hf_config: CosyVoice2LLMConfig | None = None):
if model.model_type == "speech_llm":
load_speech_llm(model, path, hf_config)
elif model.model_type == "text_llm":
load_text_llm(model, path)
else:
raise ValueError(f"Unsupported model type: {model.model_type}")
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
flashcosyvoice/utils/memory.py | Python | import os
import torch
from pynvml import * # noqa
def get_gpu_memory():
torch.cuda.synchronize()
nvmlInit()
visible_device = list(map(int, os.getenv("CUDA_VISIBLE_DEVICES", "0,1,2,3,4,5,6,7").split(',')))
cuda_device_idx = torch.cuda.current_device()
cuda_device_idx = visible_device[cuda_device_idx]
handle = nvmlDeviceGetHandleByIndex(cuda_device_idx)
mem_info = nvmlDeviceGetMemoryInfo(handle)
total_memory = mem_info.total
used_memory = mem_info.used
free_memory = mem_info.free
nvmlShutdown()
return total_memory, used_memory, free_memory
| xingchensong/FlashCosyVoice | 242 | FlashCosyVoice: A lightweight vLLM implementation built from scratch for CosyVoice. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/__init__.py | Python | # Copyright (c) 2023 OpenAI. (authors: Whisper Team)
# 2024 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Modified from
https://github.com/openai/whisper/blob/main/whisper/__init__.py
"""
import hashlib
import os
import urllib
import warnings
from typing import List, Union
from tqdm import tqdm
from s3tokenizer.model_v2 import S3TokenizerV2
from s3tokenizer.model_v3 import S3TokenizerV3
from .model import S3Tokenizer
from .utils import (load_audio, log_mel_spectrogram, make_non_pad_mask,
mask_to_bias, merge_tokenized_segments, onnx2torch,
onnx2torch_v3, padding)
__all__ = [
'load_audio', 'log_mel_spectrogram', 'make_non_pad_mask', 'mask_to_bias',
'onnx2torch', 'onnx2torch_v3', 'padding', 'merge_tokenized_segments'
]
_MODELS = {
"speech_tokenizer_v1":
"https://www.modelscope.cn/models/iic/cosyvoice-300m/"
"resolve/master/speech_tokenizer_v1.onnx",
"speech_tokenizer_v1_25hz":
"https://www.modelscope.cn/models/iic/CosyVoice-300M-25Hz/"
"resolve/master/speech_tokenizer_v1.onnx",
"speech_tokenizer_v2_25hz":
"https://www.modelscope.cn/models/iic/CosyVoice2-0.5B/"
"resolve/master/speech_tokenizer_v2.onnx",
"speech_tokenizer_v3_25hz":
"https://www.modelscope.cn/models/FunAudioLLM/Fun-CosyVoice3-0.5B-2512/"
"resolve/master/speech_tokenizer_v3.onnx",
}
_SHA256S = {
"speech_tokenizer_v1":
"23b5a723ed9143aebfd9ffda14ac4c21231f31c35ef837b6a13bb9e5488abb1e",
"speech_tokenizer_v1_25hz":
"56285ddd4a83e883ee0cb9f8d69c1089b53a94b1f78ff7e4a0224a27eb4cb486",
"speech_tokenizer_v2_25hz":
"d43342aa12163a80bf07bffb94c9de2e120a8df2f9917cd2f642e7f4219c6f71",
"speech_tokenizer_v3_25hz":
"23236a74175dbdda47afc66dbadd5bcb41303c467a57c261cb8539ad9db9208d",
}
def _download(name: str, root: str) -> Union[bytes, str]:
os.makedirs(root, exist_ok=True)
expected_sha256 = _SHA256S[name]
url = _MODELS[name]
download_target = os.path.join(root, f"{name}.onnx")
if os.path.exists(download_target) and not os.path.isfile(download_target):
raise RuntimeError(
f"{download_target} exists and is not a regular file")
if os.path.isfile(download_target):
with open(download_target, "rb") as f:
model_bytes = f.read()
if hashlib.sha256(model_bytes).hexdigest() == expected_sha256:
return download_target
else:
warnings.warn(
f"{download_target} exists, but the SHA256 checksum does not"
" match; re-downloading the file")
with urllib.request.urlopen(url) as source, open(download_target,
"wb") as output:
with tqdm(
total=int(source.info().get("Content-Length")),
ncols=80,
unit="iB",
unit_scale=True,
unit_divisor=1024,
desc="Downloading onnx checkpoint",
) as loop:
while True:
buffer = source.read(8192)
if not buffer:
break
output.write(buffer)
loop.update(len(buffer))
model_bytes = open(download_target, "rb").read()
if hashlib.sha256(model_bytes).hexdigest() != expected_sha256:
raise RuntimeError(
"Model has been downloaded but the SHA256 checksum does not not"
" match. Please retry loading the model.")
return download_target
def available_models() -> List[str]:
"""Returns the names of available models"""
return list(_MODELS.keys())
def load_model(
name: str,
download_root: str = None,
) -> S3Tokenizer:
"""
Load a S3Tokenizer ASR model
Parameters
----------
name : str
one of the official model names listed by
`s3tokenizer.available_models()`, or path to a model checkpoint
containing the model dimensions and the model state_dict.
download_root: str
path to download the model files; by default,
it uses "~/.cache/s3tokenizer"
Returns
-------
model : S3Tokenizer
The S3Tokenizer model instance
"""
if download_root is None:
default = os.path.join(os.path.expanduser("~"), ".cache")
download_root = os.path.join(os.getenv("XDG_CACHE_HOME", default),
"s3tokenizer")
if name in _MODELS:
checkpoint_file = _download(name, download_root)
elif os.path.isfile(name):
checkpoint_file = name
else:
raise RuntimeError(
f"Model {name} not found; available models = {available_models()}")
if 'v3' in name:
model = S3TokenizerV3(name)
elif 'v2' in name:
model = S3TokenizerV2(name)
else:
model = S3Tokenizer(name)
model.init_from_onnx(checkpoint_file)
return model
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/cli.py | Python | # Copyright (c) 2024 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Example Usage
cpu:
s3tokenizer --wav_scp xxx.scp \
--device "cpu" \
--output_dir "./" \
--batch_size 32
gpu:
torchrun --nproc_per_node=8 --nnodes=1 \
--rdzv_id=2024 --rdzv_backend="c10d" --rdzv_endpoint="localhost:0" \
`which s3tokenizer` --wav_scp xxx.scp \
--device "cuda" \
--output_dir "./" \
--batch_size 32
"""
import argparse
import json
import os
import torch
import torch.distributed as dist
from torch.utils.data import DataLoader, Dataset, DistributedSampler
from tqdm import tqdm
import s3tokenizer
class AudioDataset(Dataset):
def __init__(self, wav_scp):
self.data = []
self.keys = []
with open(wav_scp, 'r', encoding='utf-8') as f:
for line in f:
key, file_path = line.strip().split()
self.data.append(file_path)
self.keys.append(key)
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
file_path = self.data[idx]
key = self.keys[idx]
audio = s3tokenizer.load_audio(file_path)
mel = s3tokenizer.log_mel_spectrogram(audio)
return key, mel
def collate_fn(batch):
keys = [item[0] for item in batch]
mels = [item[1] for item in batch]
mels, mels_lens = s3tokenizer.padding(mels)
return keys, mels, mels_lens
def init_distributed():
world_size = int(os.environ.get('WORLD_SIZE', 1))
local_rank = int(os.environ.get('LOCAL_RANK', 0))
rank = int(os.environ.get('RANK', 0))
print('Inference on multiple gpus, this gpu {}'.format(local_rank) +
', rank {}, world_size {}'.format(rank, world_size))
torch.cuda.set_device(local_rank)
dist.init_process_group("nccl")
return world_size, local_rank, rank
def get_args():
parser = argparse.ArgumentParser(description='extract speech code')
parser.add_argument('--model',
required=True,
type=str,
choices=[
"speech_tokenizer_v1", "speech_tokenizer_v1_25hz",
"speech_tokenizer_v2_25hz"
],
help='model version')
parser.add_argument('--wav_scp',
required=True,
type=str,
help='each line contains `wav_name wav_path`')
parser.add_argument('--device',
required=True,
type=str,
choices=["cuda", "cpu"],
help='device for inference')
parser.add_argument('--output_dir',
required=True,
type=str,
help='dir to save result')
parser.add_argument('--batch_size',
required=True,
type=int,
help='batch size (per-device) for inference')
parser.add_argument('--num_workers',
type=int,
default=4,
help='workers for dataloader')
parser.add_argument('--prefetch',
type=int,
default=5,
help='prefetch for dataloader')
args = parser.parse_args()
return args
def main():
args = get_args()
os.makedirs(args.output_dir, exist_ok=True)
if args.device == "cuda":
assert (torch.cuda.is_available())
world_size, local_rank, rank = init_distributed()
else:
world_size, local_rank, rank = 1, 0, 0
device = torch.device(args.device)
model = s3tokenizer.load_model(args.model).to(device)
dataset = AudioDataset(args.wav_scp)
if args.device == "cuda":
model = torch.nn.parallel.DistributedDataParallel(
model, device_ids=[local_rank])
sampler = DistributedSampler(dataset,
num_replicas=world_size,
rank=rank)
else:
sampler = None
dataloader = DataLoader(dataset,
batch_size=args.batch_size,
sampler=sampler,
shuffle=False,
num_workers=args.num_workers,
prefetch_factor=args.prefetch,
collate_fn=collate_fn)
total_steps = len(dataset)
if rank == 0:
progress_bar = tqdm(total=total_steps, desc="Processing", unit="wavs")
writer = open(f"{args.output_dir}/part_{rank + 1}_of_{world_size}", "w")
for keys, mels, mels_lens in dataloader:
codes, codes_lens = model(mels.to(device), mels_lens.to(device))
for i, k in enumerate(keys):
code = codes[i, :codes_lens[i].item()].tolist()
writer.write(
json.dumps({
"key": k,
"code": code
}, ensure_ascii=False) + "\n")
if rank == 0:
progress_bar.update(world_size * len(keys))
if rank == 0:
progress_bar.close()
writer.close()
if args.device == "cuda":
dist.barrier()
dist.destroy_process_group()
if __name__ == "__main__":
main()
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/model.py | Python | # Copyright (c) 2023 OpenAI. (authors: Whisper Team)
# 2024 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Modified from https://github.com/openai/whisper/blob/main/whisper/model.py
Add EuclideanCodebook & VectorQuantization
"""
from dataclasses import dataclass
from typing import Iterable, Optional, Tuple
import numpy as np
import torch
import torch.nn.functional as F
from einops import rearrange
from torch import Tensor, nn
from .utils import make_non_pad_mask, mask_to_bias, onnx2torch, merge_tokenized_segments
@dataclass
class ModelConfig:
n_mels: int = 128
n_audio_ctx: int = 1500
n_audio_state: int = 1280
n_audio_head: int = 20
n_audio_layer: int = 6
n_codebook_size: int = 4096
use_sdpa: bool = False
class LayerNorm(nn.LayerNorm):
def forward(self, x: Tensor) -> Tensor:
return F.layer_norm(
x.float(),
self.normalized_shape,
self.weight.float() if self.weight is not None else None,
self.bias.float() if self.bias is not None else None,
self.eps,
).type(x.dtype)
class Linear(nn.Linear):
def forward(self, x: Tensor) -> Tensor:
return F.linear(
x,
self.weight.to(x.dtype),
None if self.bias is None else self.bias.to(x.dtype),
)
class Conv1d(nn.Conv1d):
def _conv_forward(self, x: Tensor, weight: Tensor,
bias: Optional[Tensor]) -> Tensor:
return super()._conv_forward(
x, weight.to(x.dtype), None if bias is None else bias.to(x.dtype))
def sinusoids(length, channels, max_timescale=10000):
"""Returns sinusoids for positional embedding"""
assert channels % 2 == 0
log_timescale_increment = np.log(max_timescale) / (channels // 2 - 1)
inv_timescales = torch.exp(-log_timescale_increment *
torch.arange(channels // 2))
scaled_time = torch.arange(length)[:, np.newaxis] * inv_timescales[
np.newaxis, :]
return torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], dim=1)
class MultiHeadAttention(nn.Module):
def __init__(self, n_state: int, n_head: int, use_sdpa: bool = False):
super().__init__()
self.n_head = n_head
self.query = Linear(n_state, n_state)
self.key = Linear(n_state, n_state, bias=False)
self.value = Linear(n_state, n_state)
self.out = Linear(n_state, n_state)
self.use_sdpa = use_sdpa
def forward(
self,
x: Tensor,
mask: Optional[Tensor] = None,
):
q = self.query(x)
k = self.key(x)
v = self.value(x)
wv, qk = self.qkv_attention(q, k, v, mask)
return self.out(wv), qk
def qkv_attention(self,
q: Tensor,
k: Tensor,
v: Tensor,
mask: Optional[Tensor] = None):
_, _, D = q.shape
scale = (D // self.n_head)**-0.25
q = q.view(*q.shape[:2], self.n_head, -1).permute(0, 2, 1, 3) * scale
k = k.view(*k.shape[:2], self.n_head, -1)
v = v.view(*v.shape[:2], self.n_head, -1).permute(0, 2, 1, 3)
if not self.use_sdpa:
k = k.permute(0, 2, 3, 1) * scale
qk = q @ k # (B, n_head, T, T)
if mask is not None:
qk = qk + mask
qk = qk.float()
w = torch.nn.functional.softmax(qk, dim=-1).to(q.dtype)
return (w @ v).permute(0, 2, 1,
3).flatten(start_dim=2), qk.detach()
else:
k = k.permute(0, 2, 1, 3) * scale
assert mask is not None
output = torch.nn.functional.scaled_dot_product_attention(
q,
k,
v,
attn_mask=mask,
dropout_p=0.,
scale=1.,
)
output = (output.transpose(1,
2).contiguous().view(q.size(0), -1, D)
) # (batch, time1, d_model)
return output, None
class ResidualAttentionBlock(nn.Module):
def __init__(self, n_state: int, n_head: int, use_sdpa: bool):
super().__init__()
self.attn = MultiHeadAttention(n_state, n_head, use_sdpa=use_sdpa)
self.attn_ln = LayerNorm(n_state)
n_mlp = n_state * 4
self.mlp = nn.Sequential(Linear(n_state, n_mlp), nn.GELU(),
Linear(n_mlp, n_state))
self.mlp_ln = LayerNorm(n_state)
def forward(
self,
x: Tensor,
mask: Optional[Tensor] = None,
):
x = x + self.attn(self.attn_ln(x), mask=mask)[0]
x = x + self.mlp(self.mlp_ln(x))
return x
class AudioEncoder(nn.Module):
def __init__(
self,
n_mels: int,
n_ctx: int,
n_state: int,
n_head: int,
n_layer: int,
stride: int,
use_sdpa: bool,
):
super().__init__()
self.stride = stride
self.conv1 = Conv1d(n_mels,
n_state,
kernel_size=3,
stride=stride,
padding=1)
self.conv2 = Conv1d(n_state,
n_state,
kernel_size=3,
stride=2,
padding=1)
self.register_buffer("positional_embedding", sinusoids(n_ctx, n_state))
self.blocks: Iterable[ResidualAttentionBlock] = nn.ModuleList([
ResidualAttentionBlock(n_state, n_head, use_sdpa=use_sdpa)
for _ in range(n_layer)
])
def forward(self, x: Tensor, x_len: Tensor) -> Tuple[Tensor, Tensor]:
"""
x : torch.Tensor, shape = (batch_size, n_mels, T)
the mel spectrogram of the audio
x_len: torch.Tensor, shape = (batch_size,)
length of each audio in x
"""
mask = make_non_pad_mask(x_len).unsqueeze(1)
x = F.gelu(self.conv1(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // self.stride + 1
mask = make_non_pad_mask(x_len).unsqueeze(1)
x = F.gelu(self.conv2(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // 2 + 1
mask = make_non_pad_mask(x_len).unsqueeze(1)
x = x.permute(0, 2, 1) # (B, T // 2, n_state)
mask = mask_to_bias(mask, x.dtype)
x = (x + self.positional_embedding[:x.shape[1], :]).to(x.dtype)
for block in self.blocks:
x = block(x, mask.unsqueeze(1))
return x, x_len
class EuclideanCodebook(nn.Module):
"""Codebook with Euclidean distance (inference-only).
Args:
dim (int): Dimension.
codebook_size (int): Codebook size.
"""
def __init__(self, dim: int, codebook_size: int):
super().__init__()
embed = torch.zeros(codebook_size, dim)
self.codebook_size = codebook_size
self.register_buffer("embed", embed)
@torch.inference_mode()
def preprocess(self, x: Tensor) -> Tensor:
x = rearrange(x, "... d -> (...) d")
return x
@torch.inference_mode()
def quantize(self, x: Tensor) -> Tensor:
embed = self.embed.t().to(x.dtype)
dist = -(x.pow(2).sum(1, keepdim=True) - 2 * x @ embed +
embed.pow(2).sum(0, keepdim=True))
embed_ind = dist.max(dim=-1).indices
return embed_ind
@torch.inference_mode()
def postprocess_emb(self, embed_ind, shape):
return embed_ind.view(*shape[:-1])
@torch.inference_mode()
def dequantize(self, embed_ind: Tensor) -> Tensor:
quantize = F.embedding(embed_ind, self.embed)
return quantize
@torch.inference_mode()
def encode(self, x: Tensor) -> Tensor:
shape = x.shape
# pre-process
x = self.preprocess(x)
# quantize
embed_ind = self.quantize(x)
# post-process
embed_ind = self.postprocess_emb(embed_ind, shape)
return embed_ind
@torch.inference_mode()
def decode(self, embed_ind: Tensor) -> Tensor:
quantize = self.dequantize(embed_ind)
return quantize
class VectorQuantization(nn.Module):
"""Vector quantization implementation (inference-only).
Args:
dim (int): Dimension
codebook_size (int): Codebook size
"""
def __init__(self, dim: int, codebook_size: int):
super().__init__()
self._codebook = EuclideanCodebook(dim=dim,
codebook_size=codebook_size)
self.codebook_size = codebook_size
@property
def codebook(self):
return self._codebook.embed
@torch.inference_mode()
def encode(self, x: Tensor) -> Tensor:
x = F.normalize(x.float(), p=2, dim=-1)
embed_in = self._codebook.encode(x)
return embed_in
@torch.inference_mode()
def decode(self, embed_ind: Tensor) -> Tensor:
quantize = self._codebook.decode(embed_ind)
quantize = rearrange(quantize, "b n d -> b d n")
return quantize
class S3Tokenizer(nn.Module):
"""S3 tokenizer implementation (inference-only).
Args:
config (ModelConfig): Config
"""
def __init__(self, name: str, config: ModelConfig = ModelConfig()):
super().__init__()
self.name = name # Store model name for token_rate determination
self.config = config
self.encoder = AudioEncoder(
self.config.n_mels,
self.config.n_audio_ctx,
self.config.n_audio_state,
self.config.n_audio_head,
self.config.n_audio_layer,
2 if name == "speech_tokenizer_v1_25hz" else 1,
self.config.use_sdpa,
)
self.quantizer = VectorQuantization(self.config.n_audio_state,
self.config.n_codebook_size)
def forward(self, mel: Tensor, mel_len: Tensor) -> Tuple[Tensor, Tensor]:
return self.quantize(mel, mel_len)
@torch.inference_mode()
def quantize(self, mel: Tensor, mel_len: Tensor) -> Tuple[Tensor, Tensor]:
"""
Quantize mel spectrogram to tokens, with automatic long audio handling.
Args:
mel: mel spectrogram tensor, shape (batch_size, n_mels, T)
mel_len: mel length tensor, shape (batch_size,)
Returns:
code: quantized tokens, shape (batch_size, T')
code_len: token length, shape (batch_size,)
"""
# Check if any audio in the batch exceeds 30 seconds
# Assuming 16kHz sample rate and hop_length=160, 30s = 30*16000/160 = 3000 frames
max_frames = 3000
# Check which samples are long audio
long_audio_mask = mel_len > max_frames
if long_audio_mask.any():
# Has long audio - need special processing
return self._quantize_mixed_batch(mel, mel_len, long_audio_mask,
max_frames)
else:
# All short audio - use original method
hidden, code_len = self.encoder(mel, mel_len)
code = self.quantizer.encode(hidden)
return code, code_len
@torch.inference_mode()
def _quantize_mixed_batch(self, mel: Tensor, mel_len: Tensor,
long_audio_mask: Tensor,
max_frames: int) -> Tuple[Tensor, Tensor]:
"""
Handle mixed batch with both short and long audio using unified batch processing.
Args:
mel: mel spectrogram tensor, shape (batch_size, n_mels, T)
mel_len: mel length tensor, shape (batch_size,)
long_audio_mask: boolean mask for long audio, shape (batch_size,)
max_frames: maximum frames for short audio
Returns:
code: quantized tokens, shape (batch_size, T')
code_len: token length, shape (batch_size,)
"""
batch_size = mel.size(0)
# Parameters for sliding window
sample_rate = 16000
hop_length = 160 # Default hop length for mel spectrogram
window_size = 30 # seconds
overlap = 4 # seconds
# Calculate frame-based parameters
frames_per_window = window_size * sample_rate // hop_length # 3000 frames
frames_per_overlap = overlap * sample_rate // hop_length # 400 frames
frames_per_stride = frames_per_window - frames_per_overlap # 2600 frames
# Collect all segments to process (including short and long audio segments)
all_segments = []
all_segments_len = []
segment_info = [
] # Record which audio each segment belongs to and whether it's long audio
# Process all audio in the batch
for batch_idx in range(batch_size):
audio_mel = mel[batch_idx]
audio_mel_len = mel_len[batch_idx]
is_long_audio = long_audio_mask[batch_idx].item()
if not is_long_audio:
# Short audio: process directly as a single segment
segment = audio_mel[:, :audio_mel_len]
seg_len = audio_mel_len.item()
# Pad to max_frames if necessary
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = F.pad(segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': False,
'segment_idx': 0,
'total_segments': 1
})
else:
# Long audio: split into multiple segments
start = 0
segment_idx = 0
while start < audio_mel_len:
end = min(start + frames_per_window, audio_mel_len)
segment = audio_mel[:, start:end]
seg_len = segment.size(1)
# Pad if necessary
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = F.pad(segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': True,
'segment_idx': segment_idx,
'total_segments': None # Will be filled later
})
segment_idx += 1
start += frames_per_stride
# Update total_segments info
total_segments = segment_idx
for info in segment_info:
if info['batch_idx'] == batch_idx and info['is_long_audio']:
info['total_segments'] = total_segments
if not all_segments:
# Fallback if no segments
return torch.zeros(batch_size,
0,
dtype=torch.long,
device=mel.device), torch.zeros(
batch_size,
dtype=torch.long,
device=mel.device)
# Unified batch processing for all segments
unified_batch_mel = torch.stack(all_segments)
unified_batch_lens = torch.stack(all_segments_len)
# Process all segments at once
hidden, code_len = self.encoder(unified_batch_mel, unified_batch_lens)
codes = self.quantizer.encode(hidden)
# Reorganize results based on segment_info
results = {} # batch_idx -> (code_tensor, code_len)
for seg_idx, info in enumerate(segment_info):
batch_idx = info['batch_idx']
is_long_audio = info['is_long_audio']
segment_idx = info['segment_idx']
# Get codes for current segment
segment_code = codes[
seg_idx, :code_len[seg_idx].item()].cpu().numpy().tolist()
if not is_long_audio:
# Short audio: use directly
code_tensor = torch.tensor(segment_code,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (code_tensor, len(segment_code))
else:
# Long audio: collect all segments
if batch_idx not in results:
results[batch_idx] = []
results[batch_idx].append(segment_code)
# Process long audio segment merging
for batch_idx in range(batch_size):
if long_audio_mask[batch_idx].item():
# Merge long audio segments
audio_codes = results[batch_idx]
# Determine token rate based on model name
if hasattr(self,
'name') and self.name == "speech_tokenizer_v1":
token_rate = 50
else:
token_rate = 25
merged_codes = merge_tokenized_segments(audio_codes,
overlap=overlap,
token_rate=token_rate)
# Convert to tensor
merged_codes_tensor = torch.tensor(merged_codes,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (merged_codes_tensor, len(merged_codes))
# Construct final output
max_code_len = max(code_info[1] for code_info in results.values())
output_codes = torch.zeros(batch_size,
max_code_len,
dtype=torch.long,
device=mel.device)
output_codes_len = torch.zeros(batch_size,
dtype=torch.long,
device=mel.device)
for batch_idx, (code_tensor, code_len) in results.items():
output_codes[batch_idx, :code_len] = code_tensor
output_codes_len[batch_idx] = code_len
return output_codes, output_codes_len
@property
def device(self):
return next(self.parameters()).device
def init_from_onnx(self, onnx_path: str):
ckpt = onnx2torch(onnx_path, None, False)
self.load_state_dict(ckpt, strict=True)
def init_from_pt(self, ckpt_path: str):
ckpt = torch.load(ckpt_path, map_location="cpu", mmap=True)
self.load_state_dict(ckpt, strict=True)
def freeze(self):
for _, param in self.named_parameters():
param.requires_grad = False
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/model_v2.py | Python | # Copyright (c) (Mddct: Dinghao Zhou)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Optional, Tuple
import torch
from einops import rearrange
from s3tokenizer.model import Conv1d, LayerNorm, Linear, MultiHeadAttention
from s3tokenizer.utils import make_non_pad_mask, mask_to_bias, onnx2torch, merge_tokenized_segments
@dataclass
class ModelConfig:
n_mels: int = 128
n_audio_ctx: int = 1500
n_audio_state: int = 1280
n_audio_head: int = 20
n_audio_layer: int = 6
n_codebook_size: int = 3**8
use_sdpa: bool = False
def precompute_freqs_cis(dim: int,
end: int,
theta: float = 10000.0,
scaling=None):
freqs = 1.0 / (theta**(torch.arange(0, dim, 2)[:(dim // 2)].float() / dim))
t = torch.arange(end, device=freqs.device) # type: ignore
if scaling is not None:
t = t * scaling
freqs = torch.outer(t, freqs).float() # type: ignore
freqs_cis = torch.polar(torch.ones_like(freqs), freqs) # complex64
return torch.cat((freqs_cis, freqs_cis), dim=-1)
def apply_rotary_emb(
xq: torch.Tensor,
xk: torch.Tensor,
freqs_cis: torch.Tensor,
) -> Tuple[torch.Tensor, torch.Tensor]:
real = torch.view_as_real(freqs_cis)
cos, sin = real[:, :, 0], real[:, :, 1]
cos = cos.unsqueeze(0).unsqueeze(2).to(xq.dtype)
sin = sin.unsqueeze(0).unsqueeze(2).to(xq.dtype)
D = xq.shape[-1]
half_l, half_r = xq[:, :, :, :D // 2], xq[:, :, :, D // 2:]
xq_r = torch.cat((-half_r, half_l), dim=-1)
D = xk.shape[-1]
half_l, half_r = xk[:, :, :, :D // 2], xk[:, :, :, D // 2:]
xk_r = torch.cat((-half_r, half_l), dim=-1)
return xq * cos + xq_r * sin, xk * cos + xk_r * sin
def reshape_for_broadcast(freqs_cis: torch.Tensor, x: torch.Tensor):
ndim = x.ndim
assert 0 <= 1 < ndim
assert freqs_cis.shape == (x.shape[1], x.shape[-1])
shape = [
d if i == 1 or i == ndim - 1 else 1 for i, d in enumerate(x.shape)
]
return freqs_cis.view(*shape)
class FSQCodebook(torch.nn.Module):
def __init__(self, dim: int, level: int = 3):
super().__init__()
self.project_down = torch.nn.Linear(dim, 8)
self.level = level
self.embed = None
@torch.inference_mode()
def preprocess(self, x: torch.Tensor) -> torch.Tensor:
x = rearrange(x, "... d -> (...) d")
return x
@torch.inference_mode()
def encode(self, x: torch.Tensor) -> torch.Tensor:
x_shape = x.shape
# pre-process
x = self.preprocess(x)
# quantize
h = self.project_down(x).float()
h = h.tanh()
h = h * 0.9990000128746033
h = h.round() + 1
# h = ((self.level - 1) * h).round() # range [-k, k]
powers = torch.pow(
self.level,
torch.arange(2**self.level, device=x.device, dtype=h.dtype))
mu = torch.sum(h * powers.unsqueeze(0), dim=-1)
ind = mu.reshape(x_shape[0], x_shape[1]).int()
return ind
@torch.inference_mode()
def decode(self, embed_ind: torch.Tensor) -> torch.Tensor:
raise NotImplementedError(
'There is no official up project component provided')
class FSQVectorQuantization(torch.nn.Module):
"""Vector quantization implementation (inference-only).
Args:
dim (int): Dimension
codebook_size (int): Codebook size
"""
def __init__(
self,
dim: int,
codebook_size: int,
):
super().__init__()
assert 3**8 == codebook_size
self._codebook = FSQCodebook(dim=dim, level=3)
self.codebook_size = codebook_size
@property
def codebook(self):
return self._codebook.embed
@torch.inference_mode()
def encode(self, x: torch.Tensor) -> torch.Tensor:
return self._codebook.encode(x)
@torch.inference_mode()
def decode(self, embed_ind: torch.Tensor) -> torch.Tensor:
quantize = self._codebook.decode(embed_ind)
quantize = rearrange(quantize, "b n d -> b d n")
return quantize
class FSMNMultiHeadAttention(MultiHeadAttention):
def __init__(
self,
n_state: int,
n_head: int,
kernel_size: int = 31,
use_sdpa: bool = False,
):
super().__init__(n_state, n_head)
self.fsmn_block = torch.nn.Conv1d(n_state,
n_state,
kernel_size,
stride=1,
padding=0,
groups=n_state,
bias=False)
self.left_padding = (kernel_size - 1) // 2
self.right_padding = kernel_size - 1 - self.left_padding
self.pad_fn = torch.nn.ConstantPad1d(
(self.left_padding, self.right_padding), 0.0)
self.use_sdpa = use_sdpa
self.key = Linear(n_state, n_state, bias=False)
def forward_fsmn(self,
inputs: torch.Tensor,
mask: Optional[torch.Tensor] = None):
b, t, _, _ = inputs.size()
inputs = inputs.view(b, t, -1)
if mask is not None and mask.size(2) > 0: # time2 > 0
inputs = inputs * mask
x = inputs.transpose(1, 2)
x = self.pad_fn(x)
x = self.fsmn_block(x)
x = x.transpose(1, 2)
x += inputs
return x * mask
def qkv_attention(self,
q: torch.Tensor,
k: torch.Tensor,
v: torch.Tensor,
mask: Optional[torch.Tensor] = None,
mask_pad: Optional[torch.Tensor] = None,
freqs_cis: Optional[torch.Tensor] = None):
_, _, D = q.shape
scale = (D // self.n_head)**-0.25
q = q.view(*q.shape[:2], self.n_head, -1)
k = k.view(*k.shape[:2], self.n_head, -1)
v = v.view(*v.shape[:2], self.n_head, -1)
if freqs_cis is not None:
q, k = apply_rotary_emb(q, k, freqs_cis=freqs_cis)
fsm_memory = self.forward_fsmn(v, mask_pad)
q = q.permute(0, 2, 1, 3) * scale
v = v.permute(0, 2, 1, 3)
if not self.use_sdpa:
k = k.permute(0, 2, 3, 1) * scale
qk = q @ k # (B, n_head, T, T)
if mask is not None:
qk = qk + mask
qk = qk.float()
w = torch.nn.functional.softmax(qk, dim=-1).to(q.dtype)
return (w @ v).permute(
0, 2, 1, 3).flatten(start_dim=2), qk.detach(), fsm_memory
else:
k = k.permute(0, 2, 1, 3) * scale
assert mask is not None
output = torch.nn.functional.scaled_dot_product_attention(
q,
k,
v,
attn_mask=mask,
dropout_p=0.,
scale=1.,
)
output = (output.transpose(1,
2).contiguous().view(q.size(0), -1, D)
) # (batch, time1, d_model)
return output, None, fsm_memory
def forward(self,
x: torch.Tensor,
mask: Optional[torch.Tensor] = None,
mask_pad: Optional[torch.Tensor] = None,
freqs_cis: Optional[torch.Tensor] = None):
q = self.query(x)
k = self.key(x)
v = self.value(x)
wv, qk, fsm_memory = self.qkv_attention(q, k, v, mask, mask_pad,
freqs_cis)
return self.out(wv) + fsm_memory, qk
class ResidualAttentionBlock(torch.nn.Module):
def __init__(
self,
n_state: int,
n_head: int,
kernel_size: int = 31,
use_sdpa: bool = False,
):
super().__init__()
self.attn = FSMNMultiHeadAttention(n_state,
n_head,
kernel_size,
use_sdpa=use_sdpa)
self.attn_ln = LayerNorm(n_state, eps=1e-5)
n_mlp = n_state * 4
self.mlp = torch.nn.Sequential(Linear(n_state, n_mlp), torch.nn.GELU(),
Linear(n_mlp, n_state))
self.mlp_ln = LayerNorm(n_state)
def forward(
self,
x: torch.Tensor,
mask: Optional[torch.Tensor] = None,
mask_pad: Optional[torch.Tensor] = None,
freqs_cis: Optional[torch.Tensor] = None,
):
x = x + self.attn(
self.attn_ln(x), mask=mask, mask_pad=mask_pad,
freqs_cis=freqs_cis)[0]
x = x + self.mlp(self.mlp_ln(x))
return x
class AudioEncoderV2(torch.nn.Module):
def __init__(
self,
n_mels: int,
n_state: int,
n_head: int,
n_layer: int,
stride: int,
use_sdpa: bool,
):
super().__init__()
self.stride = stride
self.conv1 = Conv1d(n_mels,
n_state,
kernel_size=3,
stride=stride,
padding=1)
self.conv2 = Conv1d(n_state,
n_state,
kernel_size=3,
stride=2,
padding=1)
self.freqs_cis = precompute_freqs_cis(64, 1024 * 2)
self.blocks = torch.nn.ModuleList([
ResidualAttentionBlock(n_state, n_head, use_sdpa=use_sdpa)
for _ in range(n_layer)
])
def forward(self, x: torch.Tensor,
x_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
x : torch.Tensor, shape = (batch_size, n_mels, T)
the mel spectrogram of the audio
x_len: torch.Tensor, shape = (batch_size,)
length of each audio in x
"""
T = x.shape[-1]
mask = make_non_pad_mask(x_len, T).unsqueeze(1)
x = torch.nn.functional.gelu(self.conv1(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // self.stride + 1
x_slen = (T + 2 - 1 * (3 - 1) - 1) // self.stride + 1
mask = make_non_pad_mask(x_len, x_slen).unsqueeze(1)
x = torch.nn.functional.gelu(self.conv2(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // 2 + 1
x_slen = (x_slen + 2 - 1 * (3 - 1) - 1) // self.stride + 1
mask = make_non_pad_mask(x_len, x_slen).unsqueeze(1)
x = x.permute(0, 2, 1) # (B, T // 2, n_state)
freqs_cis = self.freqs_cis.to(x.device)
mask_pad = mask.transpose(1, 2)
mask = mask_to_bias(mask, x.dtype)
tmp = torch.view_as_real(freqs_cis)
cos, sin = tmp[:, :, 0], tmp[:, :, 1]
cos = torch.cat((cos, cos), dim=-1)
sin = torch.cat((sin, sin), dim=-1)
cos = cos.unsqueeze(0).unsqueeze(2)
sin = sin.unsqueeze(0).unsqueeze(2)
for block in self.blocks:
x = block(x, mask.unsqueeze(1), mask_pad, freqs_cis[:x.size(1)])
return x, x_len
class S3TokenizerV2(torch.nn.Module):
"""S3 tokenizer v2 implementation (inference-only).
Args:
config (ModelConfig): Config
"""
def __init__(self, name: str, config: ModelConfig = ModelConfig()):
super().__init__()
self.name = name # Store model name for token_rate determination
if 'v1' not in name:
assert 'v2' in name
# TODO(Mddct): make it configureable
config.n_codebook_size = 3**8
self.config = config
self.encoder = AudioEncoderV2(
self.config.n_mels,
self.config.n_audio_state,
self.config.n_audio_head,
self.config.n_audio_layer,
2,
self.config.use_sdpa,
)
self.quantizer = FSQVectorQuantization(
self.config.n_audio_state,
self.config.n_codebook_size,
)
def forward(self, mel: torch.Tensor,
mel_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
return self.quantize(mel, mel_len)
@torch.inference_mode()
def quantize(self, mel: torch.Tensor,
mel_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Quantize mel spectrogram to tokens, with automatic long audio handling.
Args:
mel: mel spectrogram tensor, shape (batch_size, n_mels, T)
mel_len: mel length tensor, shape (batch_size,)
Returns:
code: quantized tokens, shape (batch_size, T')
code_len: token length, shape (batch_size,)
"""
# Check if any audio in the batch exceeds 30 seconds
# Assuming 16kHz sample rate and hop_length=160, 30s = 30*16000/160 = 3000 frames
max_frames = 3000
# Check which samples are long audio
long_audio_mask = mel_len > max_frames
if long_audio_mask.any():
# Has long audio - need special processing
return self._quantize_mixed_batch(mel, mel_len, long_audio_mask,
max_frames)
else:
# All short audio - use original method
hidden, code_len = self.encoder(mel, mel_len)
code = self.quantizer.encode(hidden)
return code, code_len
@torch.inference_mode()
def _quantize_mixed_batch(
self, mel: torch.Tensor, mel_len: torch.Tensor,
long_audio_mask: torch.Tensor,
max_frames: int) -> Tuple[torch.Tensor, torch.Tensor]:
"""
Handle mixed batch with both short and long audio using unified batch processing.
Args:
mel: mel spectrogram tensor, shape (batch_size, n_mels, T)
mel_len: mel length tensor, shape (batch_size,)
long_audio_mask: boolean mask for long audio, shape (batch_size,)
max_frames: maximum frames for short audio
Returns:
code: quantized tokens, shape (batch_size, T')
code_len: token length, shape (batch_size,)
"""
batch_size = mel.size(0)
# Parameters for sliding window
sample_rate = 16000
hop_length = 160 # Default hop length for mel spectrogram
window_size = 30 # seconds
overlap = 4 # seconds
# Calculate frame-based parameters
frames_per_window = window_size * sample_rate // hop_length # 3000 frames
frames_per_overlap = overlap * sample_rate // hop_length # 400 frames
frames_per_stride = frames_per_window - frames_per_overlap # 2600 frames
# Collect all segments to process (including short and long audio segments)
all_segments = []
all_segments_len = []
segment_info = [
] # Record which audio each segment belongs to and whether it's long audio
# Process all audio in the batch
for batch_idx in range(batch_size):
audio_mel = mel[batch_idx]
audio_mel_len = mel_len[batch_idx]
is_long_audio = long_audio_mask[batch_idx].item()
if not is_long_audio:
# Short audio: process directly as a single segment
segment = audio_mel[:, :audio_mel_len]
seg_len = audio_mel_len.item()
# Pad to max_frames if necessary
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = torch.nn.functional.pad(segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': False,
'segment_idx': 0,
'total_segments': 1
})
else:
# Long audio: split into multiple segments
start = 0
segment_idx = 0
while start < audio_mel_len:
end = min(start + frames_per_window, audio_mel_len)
segment = audio_mel[:, start:end]
seg_len = segment.size(1)
# Pad if necessary
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = torch.nn.functional.pad(
segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': True,
'segment_idx': segment_idx,
'total_segments': None # Will be filled later
})
segment_idx += 1
start += frames_per_stride
# Update total_segments info
total_segments = segment_idx
for info in segment_info:
if info['batch_idx'] == batch_idx and info['is_long_audio']:
info['total_segments'] = total_segments
if not all_segments:
# Fallback if no segments
return torch.zeros(batch_size,
0,
dtype=torch.long,
device=mel.device), torch.zeros(
batch_size,
dtype=torch.long,
device=mel.device)
# Unified batch processing for all segments
unified_batch_mel = torch.stack(all_segments)
unified_batch_lens = torch.stack(all_segments_len)
# Process all segments at once
hidden, code_len = self.encoder(unified_batch_mel, unified_batch_lens)
codes = self.quantizer.encode(hidden)
# Reorganize results based on segment_info
results = {} # batch_idx -> (code_tensor, code_len)
for seg_idx, info in enumerate(segment_info):
batch_idx = info['batch_idx']
is_long_audio = info['is_long_audio']
segment_idx = info['segment_idx']
# Get codes for current segment
segment_code = codes[
seg_idx, :code_len[seg_idx].item()].cpu().numpy().tolist()
if not is_long_audio:
# Short audio: use directly
code_tensor = torch.tensor(segment_code,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (code_tensor, len(segment_code))
else:
# Long audio: collect all segments
if batch_idx not in results:
results[batch_idx] = []
results[batch_idx].append(segment_code)
# Process long audio segment merging
for batch_idx in range(batch_size):
if long_audio_mask[batch_idx].item():
# Merge long audio segments
audio_codes = results[batch_idx]
# V2 models use 25Hz token rate
token_rate = 25
merged_codes = merge_tokenized_segments(audio_codes,
overlap=overlap,
token_rate=token_rate)
# Convert to tensor
merged_codes_tensor = torch.tensor(merged_codes,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (merged_codes_tensor, len(merged_codes))
# Construct final output
max_code_len = max(code_info[1] for code_info in results.values())
output_codes = torch.zeros(batch_size,
max_code_len,
dtype=torch.long,
device=mel.device)
output_codes_len = torch.zeros(batch_size,
dtype=torch.long,
device=mel.device)
for batch_idx, (code_tensor, code_len) in results.items():
output_codes[batch_idx, :code_len] = code_tensor
output_codes_len[batch_idx] = code_len
return output_codes, output_codes_len
@property
def device(self):
return next(self.parameters()).device
def init_from_onnx(self, onnx_path: str):
ckpt = onnx2torch(onnx_path, None, False)
self.load_state_dict(ckpt, strict=True)
def init_from_pt(self, ckpt_path: str):
ckpt = torch.load(ckpt_path, map_location="cpu", mmap=True)
self.load_state_dict(ckpt, strict=True)
def freeze(self):
for _, param in self.named_parameters():
param.requires_grad = False
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/model_v3.py | Python | # Copyright (c) (Mddct: Dinghao Zhou)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from dataclasses import dataclass
from typing import Optional, Tuple
import torch
from s3tokenizer.model import Conv1d, LayerNorm, Linear
# Re-use V2 components where possible, but we might need specific V3 tweaks
from s3tokenizer.model_v2 import (FSMNMultiHeadAttention,
FSQVectorQuantization, precompute_freqs_cis)
from s3tokenizer.utils import (make_non_pad_mask, mask_to_bias,
merge_tokenized_segments, onnx2torch_v3)
@dataclass
class ModelConfigV3:
n_mels: int = 128
n_audio_ctx: int = 1500
n_audio_state: int = 1280
n_audio_head: int = 20
n_audio_layer: int = 12 # V3 has 12 layers
n_codebook_size: int = 3**8
use_sdpa: bool = False
class MultiHeadAttentionV3(FSMNMultiHeadAttention):
def __init__(self,
n_state: int,
n_head: int,
kernel_size: int = 31,
use_sdpa: bool = False):
super().__init__(n_state, n_head, kernel_size, use_sdpa)
# Override linears: query/value/out use bias=True, key uses bias=False
self.query = Linear(n_state, n_state)
self.key = Linear(n_state, n_state, bias=False)
self.value = Linear(n_state, n_state)
self.out = Linear(n_state, n_state)
class ResidualAttentionBlockV3(torch.nn.Module):
def __init__(self,
n_state: int,
n_head: int,
kernel_size: int = 31,
use_sdpa: bool = False):
super().__init__()
self.attn = MultiHeadAttentionV3(n_state,
n_head,
kernel_size,
use_sdpa=use_sdpa)
self.attn_ln = LayerNorm(n_state, eps=1e-5)
n_mlp = n_state * 4
# Set bias=True for MLP Linear layers
self.mlp = torch.nn.Sequential(Linear(n_state, n_mlp), torch.nn.GELU(),
Linear(n_mlp, n_state))
self.mlp_ln = LayerNorm(n_state, eps=1e-5)
def forward(self,
x: torch.Tensor,
mask: Optional[torch.Tensor] = None,
mask_pad: Optional[torch.Tensor] = None,
freqs_cis: Optional[torch.Tensor] = None):
x = x + self.attn(
self.attn_ln(x), mask=mask, mask_pad=mask_pad,
freqs_cis=freqs_cis)[0]
x = x + self.mlp(self.mlp_ln(x))
return x
class AudioEncoderV3(torch.nn.Module):
def __init__(
self,
n_mels: int,
n_state: int,
n_head: int,
n_layer: int,
stride: int,
use_sdpa: bool,
):
super().__init__()
self.stride = stride
self.conv1 = Conv1d(n_mels,
n_state,
kernel_size=3,
stride=stride,
padding=1)
self.conv2 = Conv1d(n_state,
n_state,
kernel_size=3,
stride=2,
padding=1)
self.freqs_cis = precompute_freqs_cis(64, 1024 * 2)
# V3 uses the same ResidualAttentionBlock structure but more layers
self.blocks = torch.nn.ModuleList([
ResidualAttentionBlockV3(n_state, n_head, use_sdpa=use_sdpa)
for _ in range(n_layer)
])
def forward(self, x: torch.Tensor,
x_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
"""
x : torch.Tensor, shape = (batch_size, n_mels, T)
the mel spectrogram of the audio
x_len: torch.Tensor, shape = (batch_size,)
length of each audio in x
"""
T = x.shape[-1]
mask = make_non_pad_mask(x_len, T).unsqueeze(1)
x = torch.nn.functional.gelu(self.conv1(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // self.stride + 1
x_slen = (T + 2 - 1 * (3 - 1) - 1) // self.stride + 1
mask = make_non_pad_mask(x_len, x_slen).unsqueeze(1)
x = torch.nn.functional.gelu(self.conv2(x * mask))
x_len = (x_len + 2 - 1 * (3 - 1) - 1) // 2 + 1
x_slen = (x_slen + 2 - 1 * (3 - 1) - 1) // 2 + 1
mask = make_non_pad_mask(x_len, x_slen).unsqueeze(1)
x = x.permute(0, 2, 1) # (B, T // 2, n_state)
freqs_cis = self.freqs_cis.to(x.device)
mask_pad = mask.transpose(1, 2)
mask = mask_to_bias(mask, x.dtype)
for block in self.blocks:
x = block(x, mask.unsqueeze(1), mask_pad, freqs_cis[:x.size(1)])
return x, x_len
class S3TokenizerV3(torch.nn.Module):
"""S3 tokenizer v3 implementation (inference-only).
Args:
config (ModelConfigV3): Config
"""
def __init__(self, name: str, config: ModelConfigV3 = ModelConfigV3()):
super().__init__()
self.name = name
self.config = config
self.encoder = AudioEncoderV3(
self.config.n_mels,
self.config.n_audio_state,
self.config.n_audio_head,
self.config.n_audio_layer,
2,
self.config.use_sdpa,
)
self.quantizer = FSQVectorQuantization(
self.config.n_audio_state,
self.config.n_codebook_size,
)
def forward(self, mel: torch.Tensor,
mel_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
return self.quantize(mel, mel_len)
@torch.inference_mode()
def quantize(self, mel: torch.Tensor,
mel_len: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]:
# Re-use logic from V2 (copy-paste or inheritance? Inheritance is trickier with imports)
# Using exact same logic as V2 for now
max_frames = 3000
long_audio_mask = mel_len > max_frames
if long_audio_mask.any():
return self._quantize_mixed_batch(mel, mel_len, long_audio_mask)
else:
hidden, code_len = self.encoder(mel, mel_len)
code = self.quantizer.encode(hidden)
return code, code_len
@torch.inference_mode()
def _quantize_mixed_batch(
self, mel: torch.Tensor, mel_len: torch.Tensor,
long_audio_mask: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor]:
batch_size = mel.size(0)
sample_rate = 16000
hop_length = 160
window_size = 30
overlap = 4
frames_per_window = window_size * sample_rate // hop_length
frames_per_overlap = overlap * sample_rate // hop_length
frames_per_stride = frames_per_window - frames_per_overlap
all_segments = []
all_segments_len = []
segment_info = []
for batch_idx in range(batch_size):
audio_mel = mel[batch_idx]
audio_mel_len = mel_len[batch_idx]
is_long_audio = long_audio_mask[batch_idx].item()
if not is_long_audio:
segment = audio_mel[:, :audio_mel_len]
seg_len = audio_mel_len.item()
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = torch.nn.functional.pad(segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': False,
'segment_idx': 0,
'total_segments': 1
})
else:
start = 0
segment_idx = 0
while start < audio_mel_len:
end = min(start + frames_per_window, audio_mel_len)
segment = audio_mel[:, start:end]
seg_len = segment.size(1)
if seg_len < frames_per_window:
pad_size = frames_per_window - seg_len
segment = torch.nn.functional.pad(
segment, (0, pad_size))
all_segments.append(segment)
all_segments_len.append(
torch.tensor(seg_len, device=mel.device))
segment_info.append({
'batch_idx': batch_idx,
'is_long_audio': True,
'segment_idx': segment_idx,
'total_segments': None
})
segment_idx += 1
start += frames_per_stride
total_segments = segment_idx
for info in segment_info:
if info['batch_idx'] == batch_idx and info['is_long_audio']:
info['total_segments'] = total_segments
if not all_segments:
return torch.zeros(batch_size,
0,
dtype=torch.long,
device=mel.device), torch.zeros(
batch_size,
dtype=torch.long,
device=mel.device)
unified_batch_mel = torch.stack(all_segments)
unified_batch_lens = torch.stack(all_segments_len)
hidden, code_len = self.encoder(unified_batch_mel, unified_batch_lens)
codes = self.quantizer.encode(hidden)
results = {}
for seg_idx, info in enumerate(segment_info):
batch_idx = info['batch_idx']
is_long_audio = info['is_long_audio']
segment_code = codes[
seg_idx, :code_len[seg_idx].item()].cpu().numpy().tolist()
if not is_long_audio:
code_tensor = torch.tensor(segment_code,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (code_tensor, len(segment_code))
else:
if batch_idx not in results:
results[batch_idx] = []
results[batch_idx].append(segment_code)
for batch_idx in range(batch_size):
if long_audio_mask[batch_idx].item():
audio_codes = results[batch_idx]
token_rate = 25
merged_codes = merge_tokenized_segments(audio_codes,
overlap=overlap,
token_rate=token_rate)
merged_codes_tensor = torch.tensor(merged_codes,
dtype=torch.long,
device=mel.device)
results[batch_idx] = (merged_codes_tensor, len(merged_codes))
max_code_len = max(code_info[1] for code_info in results.values())
output_codes = torch.zeros(batch_size,
max_code_len,
dtype=torch.long,
device=mel.device)
output_codes_len = torch.zeros(batch_size,
dtype=torch.long,
device=mel.device)
for batch_idx, (code_tensor, code_len) in results.items():
output_codes[batch_idx, :code_len] = code_tensor
output_codes_len[batch_idx] = code_len
return output_codes, output_codes_len
@property
def device(self):
return next(self.parameters()).device
def init_from_onnx(self, onnx_path: str):
ckpt = onnx2torch_v3(onnx_path, None,
False) # Set verbose back to False
self.load_state_dict(ckpt, strict=False)
def init_from_pt(self, ckpt_path: str):
ckpt = torch.load(ckpt_path, map_location="cpu", mmap=True)
self.load_state_dict(ckpt, strict=True)
def load_state_dict(self, state_dict, strict=True):
# Allow loading state dict with missing keys (like LN bias) if strictly necessary
# But for now we try to stick to standard behavior
return super().load_state_dict(state_dict, strict=strict)
def freeze(self):
for _, param in self.named_parameters():
param.requires_grad = False
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
s3tokenizer/utils.py | Python | # Copyright (c) 2023 OpenAI. (authors: Whisper Team)
# 2024 Tsinghua Univ. (authors: Xingchen Song)
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Modified from https://github.com/openai/whisper/blob/main/whisper/audio.py
Add rename_weights() & onnx2torch() & make_non_pad_mask() & mask_to_bias()
Copy merge_tokenized_segments() from https://github.com/Mddct/s3tokenizer-long/blob/main/example.py
"""
import os
from functools import lru_cache
from typing import List, Optional, Union
import numpy as np
import onnx
import torch
import torch.nn.functional as F
import torchaudio
from torch.nn.utils.rnn import pad_sequence
def _rename_weights(weights_dict: dict):
"""
Rename onnx weights to pytorch format.
Parameters
----------
weight_dict: dict
The dict containing weights in onnx format
Returns
-------
A new weight dict containing the weights in pytorch format.
"""
new_weight_dict = {}
for k in weights_dict.keys():
if "quantizer" in k: # vq or fsq
if k == "/quantizer/rq/model/layers.0/_codebook/Pow_1":
new_weight_dict["quantizer._codebook.embed"] = weights_dict[k]
elif 'project_down' in k: # v2
new_weight_dict[k] = weights_dict[k]
elif "positional_embedding" in k: # positional emb
new_weight_dict[k] = weights_dict[k]
elif "conv" in k: # 1/2 or 1/4 subsample
new_weight_dict[k] = weights_dict[k]
else: # transformer blocks
assert "blocks" in k
new_k = (k[1:].replace('/', '.').replace(
'MatMul', 'weight').replace('Add_1', 'bias').replace(
'Mul', 'weight').replace('Add', 'bias').replace(
'mlp.mlp', 'mlp')).replace('fsmn_block.Conv',
'fsmn_block.weight')
new_weight_dict[f"encoder.{new_k}"] = weights_dict[k]
return new_weight_dict
def onnx2torch(onnx_path: str, torch_path: str = None, verbose: bool = False):
"""
Open an onnx file and convert to pytorch format.
Parameters
----------
onnx_path: str
The onnx file to open, typically `speech_tokenizer_v1.onnx`
torch_path: str
The path to save the torch-formated checkpoint.
verbose: bool
Logging info or not.
Returns
-------
A checkpoint dict containing the weights and their names, if torch_path is
None. Otherwise save checkpoint dict to the desired path.
"""
onnx_model = onnx.load(onnx_path)
weights_dict = {}
initializer_map = {
initializer.name: initializer
for initializer in onnx_model.graph.initializer
}
for node in onnx_model.graph.node:
for input_name in node.input:
if input_name in initializer_map:
ln_bias_name, ln_weight_name = None, None # for v2 ln
initializer = initializer_map[input_name]
if input_name in [
"onnx::Conv_1519",
"encoders.conv1.weight",
"onnx::Conv_2216",
]: # v1_50hz, v1_25hz, v2_25hz
weight_name = "encoder.conv1.weight"
elif input_name in [
"onnx::Conv_1520",
"encoders.conv1.bias",
"onnx::Conv_2217",
]: # v1_50hz, v1_25hz, v2_25hz
weight_name = "encoder.conv1.bias"
elif input_name in [
"onnx::Conv_1521",
"encoders.conv2.weight",
"onnx::Conv_2218",
]:
weight_name = "encoder.conv2.weight"
elif input_name in [
"onnx::Conv_1522",
"encoders.conv2.bias",
"onnx::Conv_2219",
]:
weight_name = "encoder.conv2.bias"
elif input_name == "encoders.positional_embedding":
weight_name = "encoder.positional_embedding"
elif input_name == 'quantizer.project_in.bias':
weight_name = "quantizer._codebook.project_down.bias"
elif input_name == 'onnx::MatMul_2536':
weight_name = "quantizer._codebook.project_down.weight"
else:
if node.op_type == 'LayerNormalization': # in input_name:
ln_name = node.name.replace('/LayerNormalization', '')
ln_weight_name = ln_name + '.weight'
ln_bias_name = ln_name + '.bias'
else:
weight_name = node.name
if ln_weight_name is not None and ln_bias_name is not None:
ln_inputs = node.input
scale_name = ln_inputs[1]
bias_name = ln_inputs[2]
scale = onnx.numpy_helper.to_array(
initializer_map[scale_name]).copy(
) if scale_name in initializer_map else None
bias = onnx.numpy_helper.to_array(
initializer_map[bias_name]).copy(
) if bias_name in initializer_map else None
scale.flags.writeable = True
bias.flags.writeable = True
weight_tensor = torch.from_numpy(scale)
bias_tensor = torch.from_numpy(bias)
weights_dict[ln_bias_name] = bias_tensor
weights_dict[ln_weight_name] = weight_tensor
else:
weight_array = onnx.numpy_helper.to_array(
initializer).copy()
weight_array.flags.writeable = True
weight_tensor = torch.from_numpy(weight_array)
if len(weight_tensor.shape) > 2 or weight_name in [
"encoder.positional_embedding"
]:
weights_dict[weight_name] = weight_tensor
else:
weights_dict[weight_name] = weight_tensor.t()
new_weights_dict = _rename_weights(weights_dict)
if verbose:
for k, v in new_weights_dict.items():
print(f"{k} : {v.shape} {v.dtype}")
print(f"PyTorch weights saved to {torch_path}")
del weights_dict, onnx_model
if torch_path:
torch.save(new_weights_dict, torch_path)
else:
return new_weights_dict
def onnx2torch_v3(onnx_path: str,
torch_path: str = None,
verbose: bool = False):
"""
Convert V3 ONNX to PyTorch format.
"""
onnx_model = onnx.load(onnx_path)
weights_dict = {}
initializer_map = {
initializer.name: initializer
for initializer in onnx_model.graph.initializer
}
# Build node map for Constants to support biases stored as Constants
constant_map = {}
for node in onnx_model.graph.node:
if node.op_type == 'Constant':
for attr in node.attribute:
if attr.name == 'value':
constant_map[node.output[0]] = onnx.numpy_helper.to_array(
attr.t)
# Helper to load tensor from initializer or Constant
def get_tensor(name, transpose=False):
if name in initializer_map:
arr = onnx.numpy_helper.to_array(initializer_map[name]).copy()
elif name in constant_map:
arr = constant_map[name].copy()
else:
return None
t = torch.from_numpy(arr)
if transpose and t.ndim == 2:
t = t.t()
return t
def get_bias_tensor(node):
"""Helper to find bias tensor for an Add node.
Checks both inputs to see which one is a parameter."""
for inp in node.input:
t = get_tensor(inp)
if t is not None:
return t
return None
# Iterate nodes to find mappings
for node in onnx_model.graph.node:
name = node.name
op = node.op_type
inputs = node.input
# 1. Conv layers
if name == '/conv1/Conv':
weights_dict['encoder.conv1.weight'] = get_tensor(inputs[1])
if len(inputs) > 2:
weights_dict['encoder.conv1.bias'] = get_tensor(inputs[2])
elif name == '/conv2/Conv':
weights_dict['encoder.conv2.weight'] = get_tensor(inputs[1])
if len(inputs) > 2:
weights_dict['encoder.conv2.bias'] = get_tensor(inputs[2])
# 2. Blocks
elif name.startswith('/blocks.'):
# Parse block index: /blocks.0/... -> 0
parts = name.split('/') # ['', 'blocks.0', ...]
block_part = parts[1] # blocks.0
block_idx = block_part.split('.')[1] # 0
prefix = f"encoder.blocks.{block_idx}"
# LayerNorms (attn_ln, mlp_ln)
# Pattern: /blocks.0/attn_ln/Mul (weight)
if 'attn_ln/Mul' in name and op == 'Mul':
weights_dict[f"{prefix}.attn_ln.weight"] = get_tensor(
inputs[1])
elif 'attn_ln/Add' in name and op == 'Add':
t = get_bias_tensor(node)
if t is not None and t.numel() > 1:
weights_dict[f"{prefix}.attn_ln.bias"] = t
elif 'mlp_ln/Mul' in name and op == 'Mul':
weights_dict[f"{prefix}.mlp_ln.weight"] = get_tensor(inputs[1])
elif 'mlp_ln/Add' in name and op == 'Add':
t = get_bias_tensor(node)
if t is not None and t.numel() > 1:
weights_dict[f"{prefix}.mlp_ln.bias"] = t
# Attn weights
# query
elif 'attn/query/MatMul' in name:
weights_dict[f"{prefix}.attn.query.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'attn/query/Add' in name:
weights_dict[f"{prefix}.attn.query.bias"] = get_bias_tensor(
node)
# key
elif 'attn/key/MatMul' in name:
weights_dict[f"{prefix}.attn.key.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'attn/key/Add' in name:
weights_dict[f"{prefix}.attn.key.bias"] = get_bias_tensor(node)
# value
elif 'attn/value/MatMul' in name:
weights_dict[f"{prefix}.attn.value.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'attn/value/Add' in name:
weights_dict[f"{prefix}.attn.value.bias"] = get_bias_tensor(
node)
# out (attn output)
elif 'attn/out/MatMul' in name:
weights_dict[f"{prefix}.attn.out.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'attn/out/Add' in name:
weights_dict[f"{prefix}.attn.out.bias"] = get_bias_tensor(node)
# MLP
elif 'mlp/mlp.0/MatMul' in name:
weights_dict[f"{prefix}.mlp.0.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'mlp/mlp.0/Add' in name:
weights_dict[f"{prefix}.mlp.0.bias"] = get_bias_tensor(node)
elif 'mlp/mlp.2/MatMul' in name:
weights_dict[f"{prefix}.mlp.2.weight"] = get_tensor(
inputs[1], transpose=True)
elif 'mlp/mlp.2/Add' in name:
weights_dict[f"{prefix}.mlp.2.bias"] = get_bias_tensor(node)
# 3. FSMN weights
if 'fsmn_block/Conv' in name:
pass
# Handle explicit FSMN weights and Quantizer weights that might not be caught above
for init_name in initializer_map:
if 'fsmn_block.weight' in init_name:
weights_dict[f"encoder.{init_name}"] = get_tensor(init_name)
if 'quantizer.project_in.bias' in init_name:
weights_dict["quantizer._codebook.project_down.bias"] = get_tensor(
init_name)
# Scan for Quantizer project down MatMul
for node in onnx_model.graph.node:
if 'quantizer' in node.name and 'MatMul' in node.op_type:
# Likely project_down
weights_dict[
"quantizer._codebook.project_down.weight"] = get_tensor(
node.input[1], transpose=True)
# Filter out None values
weights_dict = {k: v for k, v in weights_dict.items() if v is not None}
if verbose:
for k, v in weights_dict.items():
if v is not None:
print(f"{k} : {v.shape} {v.dtype}")
print(f"PyTorch weights saved to {torch_path}")
del onnx_model
if torch_path:
torch.save(weights_dict, torch_path)
else:
return weights_dict
def load_audio(file: str, sr: int = 16000):
"""
Open an audio file and read as mono waveform, resampling as necessary
Parameters
----------
file: str
The audio file to open
sr: int
The sample rate to resample the audio if necessary
Returns
-------
A torch.Tensor containing the audio waveform, in float32 dtype.
"""
audio, sample_rate = torchaudio.load(file)
if sample_rate != sr:
audio = torchaudio.transforms.Resample(sample_rate, sr)(audio)
audio = audio[0] # get the first channel
return audio
@lru_cache(maxsize=None)
def _mel_filters(device, n_mels: int) -> torch.Tensor:
"""
load the mel filterbank matrix for projecting STFT into a Mel spectrogram.
Allows decoupling librosa dependency; saved using:
np.savez_compressed(
"mel_filters.npz",
mel_80=librosa.filters.mel(sr=16000, n_fft=400, n_mels=80),
mel_128=librosa.filters.mel(sr=16000, n_fft=400, n_mels=128),
)
"""
assert n_mels in {80, 128}, f"Unsupported n_mels: {n_mels}"
filters_path = os.path.join(os.path.dirname(__file__), "assets",
"mel_filters.npz")
with np.load(filters_path, allow_pickle=False) as f:
return torch.from_numpy(f[f"mel_{n_mels}"]).to(device)
def log_mel_spectrogram(
audio: Union[str, np.ndarray, torch.Tensor],
n_mels: int = 128,
padding: int = 0,
device: Optional[Union[str, torch.device]] = None,
):
"""
Compute the log-Mel spectrogram of
Parameters
----------
audio: Union[str, np.ndarray, torch.Tensor], shape = (*)
The path to audio or either a NumPy array or Tensor containing the
audio waveform in 16 kHz
n_mels: int
The number of Mel-frequency filters, only 80 is supported
padding: int
Number of zero samples to pad to the right
device: Optional[Union[str, torch.device]]
If given, the audio tensor is moved to this device before STFT
Returns
-------
torch.Tensor, shape = (128, n_frames)
A Tensor that contains the Mel spectrogram
"""
if not torch.is_tensor(audio):
if isinstance(audio, str):
audio = load_audio(audio)
if device is not None:
audio = audio.to(device)
if padding > 0:
audio = F.pad(audio, (0, padding))
window = torch.hann_window(400).to(audio.device)
stft = torch.stft(audio, 400, 160, window=window, return_complex=True)
magnitudes = stft[..., :-1].abs()**2
filters = _mel_filters(audio.device, n_mels)
mel_spec = filters @ magnitudes
log_spec = torch.clamp(mel_spec, min=1e-10).log10()
log_spec = torch.maximum(log_spec, log_spec.max() - 8.0)
log_spec = (log_spec + 4.0) / 4.0
return log_spec
def make_non_pad_mask(lengths: torch.Tensor, max_len: int = 0) -> torch.Tensor:
"""Make mask tensor containing indices of non-padded part.
The sequences in a batch may have different lengths. To enable
batch computing, padding is need to make all sequence in same
size. To avoid the padding part pass value to context dependent
block such as attention or convolution , this padding part is
masked.
1 for non-padded part and 0 for padded part.
Parameters
----------
lengths (torch.Tensor): Batch of lengths (B,).
Returns:
-------
torch.Tensor: Mask tensor containing indices of padded part (B, max_T).
Examples:
>>> import torch
>>> import s3tokenizer
>>> lengths = torch.tensor([5, 3, 2])
>>> masks = s3tokenizer.make_non_pad_mask(lengths)
masks = [[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 0, 0]]
"""
batch_size = lengths.size(0)
max_len = max_len if max_len > 0 else lengths.max().item()
seq_range = torch.arange(0,
max_len,
dtype=torch.int64,
device=lengths.device)
seq_range_expand = seq_range.unsqueeze(0).expand(batch_size, max_len)
seq_length_expand = lengths.unsqueeze(-1)
mask = seq_range_expand >= seq_length_expand
return ~mask
def mask_to_bias(mask: torch.Tensor, dtype: torch.dtype) -> torch.Tensor:
"""Convert bool-tensor to float-tensor for flash attention.
Parameters
----------
lengths (torch.Tensor): Batch of lengths (B, ?).
Returns:
-------
torch.Tensor: Mask tensor containing indices of padded part (B, ?).
Examples:
>>> import torch
>>> import s3tokenizer
>>> lengths = torch.tensor([5, 3, 2])
>>> masks = s3tokenizer.make_non_pad_mask(lengths)
masks = [[1, 1, 1, 1, 1],
[1, 1, 1, 0, 0],
[1, 1, 0, 0, 0]]
>>> new_masks = s3tokenizer.mask_to_bias(masks, torch.float32)
new_masks =
[[-0.0000e+00, -0.0000e+00, -0.0000e+00, -0.0000e+00, -0.0000e+00],
[-0.0000e+00, -0.0000e+00, -0.0000e+00, -1.0000e+10, -1.0000e+10],
[-0.0000e+00, -0.0000e+00, -1.0000e+10, -1.0000e+10, -1.0000e+10]]
"""
assert mask.dtype == torch.bool
assert dtype in [torch.float32, torch.bfloat16, torch.float16]
mask = mask.to(dtype)
# attention mask bias
# NOTE(Mddct): torch.finfo jit issues
# chunk_masks = (1.0 - chunk_masks) * torch.finfo(dtype).min
mask = (1.0 - mask) * -1.0e+10
return mask
def padding(data: List[torch.Tensor]):
""" Padding the data into batch data
Parameters
----------
data: List[Tensor], shape of Tensor (128, T)
Returns:
-------
feats [B, 128, T_max], feats lengths [B]
"""
sample = data
assert isinstance(sample, list)
feats_lengths = torch.tensor([s.size(1) for s in sample],
dtype=torch.int32)
feats = [s.t() for s in sample]
padded_feats = pad_sequence(feats, batch_first=True, padding_value=0)
return padded_feats.transpose(1, 2), feats_lengths
def merge_tokenized_segments(tokenized_segments, overlap, token_rate):
"""
Merges tokenized outputs by keeping the middle and dropping half of the overlapped tokens.
Args:
- tokenized_segments (List[List[int]]): List of tokenized sequences.
- overlap (int): Overlapping duration in seconds (default: 4s).
- token_rate (int): Number of tokens per second.
Returns:
- List[int]: A single merged token sequence.
"""
merged_tokens = []
overlap_tokens = (
overlap //
2) * token_rate # Tokens corresponding to half of the overlap duration
for i, tokens in enumerate(tokenized_segments):
l = 0 if i == 0 else overlap_tokens
r = -overlap_tokens if i != len(tokenized_segments) - 1 else len(
tokens)
# Keep only the middle part (drop overlap / 2 from both sides)
merged_tokens.extend(tokens[l:r])
return merged_tokens
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
setup.py | Python | from pathlib import Path
from setuptools import find_packages, setup
def parse_requirements(filename):
"""Load requirements from a pip requirements file."""
with open(filename, 'r') as file:
lines = (line.strip() for line in file)
return [line for line in lines if line and not line.startswith('#')]
setup(
name="s3tokenizer",
version="0.3.0",
description=\
"Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice", # noqa
long_description=open("README.md", encoding="utf-8").read(),
long_description_content_type="text/markdown",
python_requires=">=3.8",
author="xingchensong",
url="https://github.com/xingchensong/S3Tokenizer",
license="Apache2.0",
packages=find_packages(),
install_requires=parse_requirements(
Path(__file__).with_name("requirements.txt")),
entry_points={
"console_scripts": ["s3tokenizer=s3tokenizer.cli:main"],
},
include_package_data=True,
extras_require={"dev": ["pytest", "scipy", "black", "flake8", "isort"]},
classifiers=[
"Programming Language :: Python :: 3",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
],
)
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
test/test_batch_efficiency.py | Python | #!/usr/bin/env python3
"""
Batch processing efficiency test
Test the efficiency improvement of new batch processing functionality for mixed long and short audio
"""
import time
import pytest
import s3tokenizer
import torch
def create_test_audio(duration_seconds=20, sample_rate=16000):
"""Create test audio"""
length = int(duration_seconds * sample_rate)
# Create meaningful audio signal (sine wave mixture)
t = torch.linspace(0, duration_seconds, length)
audio = 0.5 * torch.sin(2 * torch.pi * 440 * t) # 440Hz fundamental
audio += 0.3 * torch.sin(2 * torch.pi * 880 * t) # 880Hz second harmonic
audio += 0.1 * torch.randn(length) # Add some noise
return audio
@pytest.fixture
def test_audios():
"""Create test audio dataset"""
return [
create_test_audio(10), # Short audio
create_test_audio(20), # Medium audio
create_test_audio(40), # Long audio
create_test_audio(60), # Long audio
create_test_audio(15), # Short audio
create_test_audio(35), # Long audio
create_test_audio(25), # Medium audio
create_test_audio(50), # Long audio
]
@pytest.fixture
def long_audios():
"""Create long audio dataset"""
return [
create_test_audio(45.5),
create_test_audio(60),
create_test_audio(91.2),
create_test_audio(120),
]
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1_25hz", "speech_tokenizer_v1",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_batch_efficiency(test_audios, model_name):
"""Test batch processing efficiency for different models"""
print(f"\n=== Batch Processing Efficiency Test for {model_name} ===")
# Load model
model = s3tokenizer.load_model(model_name)
model.eval()
# Method 1: Individual processing
print(f"\n--- Method 1: Individual Processing ({model_name}) ---")
start_time = time.time()
individual_results = []
for i, audio in enumerate(test_audios):
mel = s3tokenizer.log_mel_spectrogram(audio)
mels = mel.unsqueeze(0)
mels_lens = torch.tensor([mel.size(1)])
with torch.no_grad():
codes, codes_lens = model.quantize(mels, mels_lens)
final_codes = codes[0, :codes_lens[0].item()].tolist()
individual_results.append(final_codes)
duration = audio.shape[0] / 16000
processing_type = "Long audio" if duration > 30 else "Short audio"
print(
f"Audio {i+1}: {duration:.1f}s, {len(final_codes)} tokens, {processing_type}"
)
individual_time = time.time() - start_time
print(f"Individual processing total time: {individual_time:.2f}s")
# Method 2: Batch processing
print(f"\n--- Method 2: Batch Processing ({model_name}) ---")
start_time = time.time()
# Prepare batch input
mels = []
for audio in test_audios:
mel = s3tokenizer.log_mel_spectrogram(audio)
mels.append(mel)
# Use padding to handle different lengths of mel
mels, mels_lens = s3tokenizer.padding(mels)
# Batch processing
with torch.no_grad():
codes, codes_lens = model.quantize(mels, mels_lens)
# Process results
batch_results = []
for i in range(len(test_audios)):
final_codes = codes[i, :codes_lens[i].item()].tolist()
batch_results.append(final_codes)
duration = test_audios[i].shape[0] / 16000
processing_type = "Long audio" if duration > 30 else "Short audio"
print(
f"Audio {i+1}: {duration:.1f}s, {len(final_codes)} tokens, {processing_type}"
)
batch_time = time.time() - start_time
print(f"Batch processing total time: {batch_time:.2f}s")
# Verify result consistency
print(f"\n--- Result Verification for {model_name} ---")
all_ok = True
for i in range(len(test_audios)):
individual_tokens = individual_results[i]
batch_tokens = batch_results[i]
# Calculate miss rate
if len(individual_tokens) != len(batch_tokens):
print(
f"❌ Audio {i+1} length mismatch: individual={len(individual_tokens)}, batch={len(batch_tokens)}"
)
all_ok = False
else:
mismatches = sum(1 for a, b in zip(individual_tokens, batch_tokens)
if a != b)
miss_rate = mismatches / len(individual_tokens) * 100 if len(
individual_tokens) > 0 else 0
if miss_rate < 0.2: # Less than 0.2% is considered OK
print(f"✅ Audio {i+1} miss rate: {miss_rate:.4f}% (OK)")
else:
print(f"❌ Audio {i+1} miss rate: {miss_rate:.4f}% (Too high)")
all_ok = False
# Efficiency improvement
speedup = individual_time / batch_time
print(f"\n--- Efficiency Improvement for {model_name} ---")
print(f"Batch processing speedup: {speedup:.2f}x")
if speedup > 1:
print("✅ Batch processing indeed improves efficiency!")
else:
print("⚠️ Batch processing doesn't significantly improve efficiency")
# Assertions for pytest
assert all_ok, f"Results don't match for model {model_name}"
assert len(individual_results) == len(
batch_results), "Number of results don't match"
assert all(
len(individual_results[i]) == len(batch_results[i])
for i in range(len(test_audios))), "Token counts don't match"
# Performance assertion - batch should be at least as fast as individual (allowing for some variance)
# assert batch_time <= individual_time * 1.1, f"Batch processing should not be significantly slower than individual processing for {model_name}"
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1_25hz", "speech_tokenizer_v1",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_pure_long_audio_batch(long_audios, model_name):
"""Test pure long audio batch processing for different models"""
print(f"\n=== Pure Long Audio Batch Processing Test for {model_name} ===")
model = s3tokenizer.load_model(model_name)
model.eval()
# Prepare batch input
mels = []
for audio in long_audios:
mel = s3tokenizer.log_mel_spectrogram(audio)
mels.append(mel)
mels, mels_lens = s3tokenizer.padding(mels)
# Batch process long audio
start_time = time.time()
with torch.no_grad():
codes, codes_lens = model.quantize(mels, mels_lens)
processing_time = time.time() - start_time
print(
f"Batch processing {len(long_audios)} long audios took: {processing_time:.2f}s"
)
results = []
for i in range(len(long_audios)):
duration = long_audios[i].shape[0] / 16000
tokens_count = codes_lens[i].item()
results.append((duration, tokens_count))
print(f"Long audio {i+1}: {duration:.1f}s → {tokens_count} tokens")
print(
f"✅ Pure long audio batch processing test completed for {model_name}")
# Assertions for pytest
assert codes is not None, f"Codes should not be None for model {model_name}"
assert codes_lens is not None, f"Codes lengths should not be None for model {model_name}"
assert len(results) == len(
long_audios), "Number of results should match number of input audios"
assert all(
tokens_count > 0
for _, tokens_count in results), "All audio should produce tokens"
assert processing_time > 0, "Processing time should be positive"
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1_25hz", "speech_tokenizer_v1",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_model_loading(model_name):
"""Test that all models can be loaded successfully"""
print(f"\n=== Model Loading Test for {model_name} ===")
model = s3tokenizer.load_model(model_name)
assert model is not None, f"Model {model_name} should load successfully"
# Test model can be set to eval mode
model.eval()
print(f"✅ Model {model_name} loaded and set to eval mode successfully")
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1_25hz", "speech_tokenizer_v1",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_single_audio_processing(model_name):
"""Test single audio processing for different models"""
print(f"\n=== Single Audio Processing Test for {model_name} ===")
# Create a single test audio
audio = create_test_audio(30) # 30 second audio
model = s3tokenizer.load_model(model_name)
model.eval()
# Process the audio
mel = s3tokenizer.log_mel_spectrogram(audio)
mels = mel.unsqueeze(0)
mels_lens = torch.tensor([mel.size(1)])
with torch.no_grad():
codes, codes_lens = model.quantize(mels, mels_lens)
final_codes = codes[0, :codes_lens[0].item()].tolist()
# Assertions
assert codes is not None, f"Codes should not be None for model {model_name}"
assert codes_lens is not None, f"Codes lengths should not be None for model {model_name}"
assert len(
final_codes) > 0, f"Should produce tokens for model {model_name}"
assert codes_lens[0].item() == len(
final_codes
), f"Codes length should match actual codes for model {model_name}"
duration = audio.shape[0] / 16000
print(
f"✅ Single audio processing test completed for {model_name}: {duration:.1f}s → {len(final_codes)} tokens"
)
if __name__ == "__main__":
# Run tests with pytest
pytest.main([__file__, "-v"])
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
test/test_onnx.py | Python | #!/usr/bin/env python3
# -*- coding: utf-8 -*-
# Copyright [2024-09-27] <sxc19@mails.tsinghua.edu.cn, Xingchen Song>
import os
import time
from typing import Any, Dict
import numpy as np
import onnxruntime
import pytest
import s3tokenizer
import torch
def create_test_audio(duration_seconds: float = 20,
sample_rate: int = 16000) -> torch.Tensor:
"""Create synthetic test audio"""
length = int(duration_seconds * sample_rate)
# Create sinusoidal mixed audio
t = torch.linspace(0, duration_seconds, length)
audio = 0.5 * torch.sin(2 * torch.pi * 440 * t) # 440Hz fundamental
audio += 0.3 * torch.sin(2 * torch.pi * 880 * t) # 880Hz second harmonic
audio += 0.1 * torch.randn(length) # Add noise
return audio
@pytest.fixture
def test_audio_suite():
"""Create a suite of test audios with different lengths"""
return {
"short_audio_1": create_test_audio(5.0), # 5 seconds
"short_audio_2": create_test_audio(15.0), # 15 seconds
"medium_audio": create_test_audio(25.0), # 25 seconds
"medium_audio_2": create_test_audio(30.0), # 30 seconds
"long_audio": create_test_audio(
35.0), # 35 seconds - for torch and onnx, 2 segments with padding
"long_audio_2": create_test_audio(
56.0
), # 56 seconds - for torch and onnx, exactly 2 segments without padding
"very_long_audio": create_test_audio(
60.0), # 60 seconds - for torch and onnx, 3 segments with padding
}
def onnx_inference_short_audio(model_name: str, mel: torch.Tensor,
mel_len: torch.Tensor) -> torch.Tensor:
"""
ONNX inference for short audio (<=30s)
"""
# Load ONNX model
default = os.path.join(os.path.expanduser("~"), ".cache")
download_root = os.path.join(os.getenv("XDG_CACHE_HOME", default),
"s3tokenizer")
option = onnxruntime.SessionOptions()
option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
option.intra_op_num_threads = 1
providers = ["CPUExecutionProvider"]
ort_session = onnxruntime.InferenceSession(
f"{download_root}/{model_name}.onnx",
sess_options=option,
providers=providers)
# Direct inference for short audio
onnx_output = ort_session.run(
None, {
ort_session.get_inputs()[0].name:
mel[:, :mel_len.item()].unsqueeze(0).detach().cpu().numpy(),
ort_session.get_inputs()[1].name:
np.array([mel_len.item()], dtype=np.int32)
})[0]
# Convert to numpy array to fix linter issues
onnx_output = np.array(onnx_output)
# Handle different output formats
if onnx_output.ndim == 2:
onnx_output = onnx_output[0, :]
elif onnx_output.ndim == 3:
onnx_output = onnx_output[0, 0, :]
return torch.tensor(onnx_output, dtype=torch.long)
def onnx_inference_long_audio(model_name: str, mel: torch.Tensor,
mel_len: torch.Tensor) -> torch.Tensor:
"""
ONNX inference for long audio (>30s) using sliding window approach
Based on _quantize_mixed_batch logic
Note: This may fail due to ONNX model limitations with dynamic lengths
"""
# Load ONNX model
default = os.path.join(os.path.expanduser("~"), ".cache")
download_root = os.path.join(os.getenv("XDG_CACHE_HOME", default),
"s3tokenizer")
option = onnxruntime.SessionOptions()
option.graph_optimization_level = onnxruntime.GraphOptimizationLevel.ORT_ENABLE_ALL
option.intra_op_num_threads = 1
providers = ["CPUExecutionProvider"]
ort_session = onnxruntime.InferenceSession(
f"{download_root}/{model_name}.onnx",
sess_options=option,
providers=providers)
# Parameters for sliding window (same as _quantize_mixed_batch)
sample_rate = 16000
hop_length = 160
window_size = 30 # seconds
overlap = 4 # seconds
# Calculate frame-based parameters
frames_per_window = window_size * sample_rate // hop_length # 3000 frames
frames_per_overlap = overlap * sample_rate // hop_length # 400 frames
frames_per_stride = frames_per_window - frames_per_overlap # 2600 frames
# Split into segments
segments = []
segments_len = []
start = 0
while start < mel_len.item():
end = min(start + frames_per_window, mel_len.item())
segment = mel[:, start:end]
if segment.size(1) < frames_per_window:
break
seg_len = segment.size(1)
segments.append(segment)
segments_len.append(seg_len)
start += frames_per_stride
if not segments:
raise ValueError("No valid segments for ONNX processing")
# Process each segment with ONNX
segment_results = []
for i, (segment, seg_len) in enumerate(zip(segments, segments_len)):
try:
onnx_output = ort_session.run(
None, {
ort_session.get_inputs()[0].name:
segment.unsqueeze(0).detach().cpu().numpy(),
ort_session.get_inputs()[1].name:
np.array([seg_len], dtype=np.int32)
})[0]
# Convert to numpy array to fix linter issues
onnx_output = np.array(onnx_output)
# Handle different output formats
if onnx_output.ndim == 2:
segment_codes = onnx_output[0, :].tolist()
elif onnx_output.ndim == 3:
segment_codes = onnx_output[0, 0, :].tolist()
else:
segment_codes = onnx_output.tolist()
segment_results.append(segment_codes)
except Exception as e:
print(f" ONNX error on segment {i+1}: {str(e)[:100]}...")
raise Exception(
f"ONNX inference failed on segment {i+1}: {str(e)}")
if not segment_results:
raise ValueError("All ONNX segments failed to process")
# Merge segments using the same logic as _quantize_mixed_batch
# Determine token rate based on model name
if model_name == "speech_tokenizer_v1":
token_rate = 50
else:
token_rate = 25
merged_codes = s3tokenizer.merge_tokenized_segments(
segment_results, overlap=overlap, token_rate=token_rate
)[:-overlap * token_rate] # NOTE(xcsong): drop the last overlap part.
return torch.tensor(merged_codes, dtype=torch.long)
def onnx_inference_with_long_audio_support(
model_name: str, mel: torch.Tensor,
mel_len: torch.Tensor) -> torch.Tensor:
"""
ONNX inference with automatic long audio support
"""
max_frames = 3000 # 30s * 16000 / 160 = 3000 frames
if mel_len.item() <= max_frames:
# Short audio - use direct inference
return onnx_inference_short_audio(model_name, mel, mel_len)
else:
# Long audio - use sliding window approach
return onnx_inference_long_audio(model_name, mel, mel_len)
def compare_torch_vs_onnx_single(model_name: str, audio: torch.Tensor,
audio_name: str) -> Dict[str, Any]:
"""Test single audio with both torch and onnx versions"""
duration = audio.shape[0] / 16000
# Load torch model
tokenizer = s3tokenizer.load_model(model_name)
tokenizer.eval()
# Prepare input
mel = s3tokenizer.log_mel_spectrogram(audio)
mels = mel.unsqueeze(0)
mels_lens = torch.tensor([mel.size(1)])
# Test torch version
start_time = time.time()
with torch.no_grad():
torch_codes, torch_codes_lens = tokenizer.quantize(mels, mels_lens)
torch_time = time.time() - start_time
torch_result = torch_codes[0, :torch_codes_lens[0].item()]
# Test onnx version with long audio support
try:
start_time = time.time()
onnx_result = onnx_inference_with_long_audio_support(
model_name, mel, mels_lens[0])
onnx_time = time.time() - start_time
# Compare results
min_len = min(len(torch_result), len(onnx_result))
torch_truncated = torch_result[:min_len]
onnx_truncated = onnx_result[:min_len]
are_equal = torch.equal(torch_truncated, onnx_truncated)
miss_rate = 0.0
if not are_equal:
miss_num = torch.sum(~(torch_truncated == onnx_truncated))
miss_rate = miss_num.item() * 100.0 / min_len
return {
"audio_name": audio_name,
"model_name": model_name,
"duration": duration,
"torch_tokens": torch_truncated,
"onnx_tokens": onnx_truncated,
"torch_time": torch_time,
"onnx_time": onnx_time,
"results_match": are_equal,
"miss_rate": miss_rate
}
except Exception as e:
return {
"audio_name": audio_name,
"model_name": model_name,
"duration": duration,
"torch_tokens": torch_result,
"onnx_tokens": [],
"torch_time": torch_time,
"onnx_time": 0.0,
"results_match": False,
"miss_rate": 100.0,
"error": str(e)
}
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1", "speech_tokenizer_v1_25hz",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_torch_vs_onnx_short_audio(model_name, test_audio_suite):
"""Test torch vs onnx for short audio (<=30s)"""
print(f"\n=== Testing {model_name} on Short Audio ===")
short_audios = {
k: v
for k, v in test_audio_suite.items() if v.shape[0] / 16000 <= 30
}
results = []
for audio_name, audio in short_audios.items():
result = compare_torch_vs_onnx_single(model_name, audio, audio_name)
results.append(result)
duration = result["duration"]
torch_tokens = result["torch_tokens"]
onnx_tokens = result["onnx_tokens"]
match_status = "✅" if result["results_match"] else "❌"
print(
f"{match_status} {audio_name}: {duration:.1f}s → torch:{len(torch_tokens)}, onnx:{len(onnx_tokens)}"
)
if not result["results_match"] and "error" not in result:
print(f" Miss rate: {result['miss_rate']:.2f}%")
print(
f" torch_tokens:\n{torch_tokens}\nonnx_tokens:\n{onnx_tokens}"
)
# Assertions
successful_tests = [r for r in results if "error" not in r]
assert len(successful_tests) == len(
short_audios
), f"successful tests ({len(successful_tests)}) for {model_name} should be equal to number of short audios ({len(short_audios)})" # noqa
# For short audio, we expect reasonable match rate
for r in results:
assert r[
'miss_rate'] < 0.5, f"Miss rate too high for {model_name}: {r['miss_rate']:.2f}%"
print(f"\n{model_name} Short Audio Summary:")
print(f" Successful tests: {len(successful_tests)}/{len(results)}")
@pytest.mark.parametrize("model_name", [
"speech_tokenizer_v1", "speech_tokenizer_v1_25hz",
"speech_tokenizer_v2_25hz", "speech_tokenizer_v3_25hz"
])
def test_torch_vs_onnx_long_audio(model_name, test_audio_suite):
"""Test torch vs onnx for long audio (>30s) with ONNX sliding window implementation"""
print(
f"\n=== Testing {model_name} on Long Audio (ONNX Sliding Window) ===")
long_audios = {
k: v
for k, v in test_audio_suite.items() if v.shape[0] / 16000 > 30
}
results = []
for audio_name, audio in long_audios.items():
result = compare_torch_vs_onnx_single(model_name, audio, audio_name)
results.append(result)
duration = result["duration"]
torch_tokens = result["torch_tokens"]
onnx_tokens = result["onnx_tokens"]
match_status = "✅" if result["results_match"] else "❌"
print(
f"{match_status} {audio_name}: {duration:.1f}s → torch:{len(torch_tokens)}, onnx:{len(onnx_tokens)}"
)
if not result["results_match"] and "error" not in result:
print(f" Miss rate: {result['miss_rate']:.2f}%")
print(
f" torch_tokens:\n{torch_tokens}\nonnx_tokens:\n{onnx_tokens}"
)
elif "error" in result:
print(f" Error: {result['error'][:100]}...")
# For long audio with ONNX, we document the current limitations
successful_tests = [r for r in results if "error" not in r]
assert len(successful_tests) == len(
long_audios
), f"successful tests ({len(successful_tests)}) for {model_name} should be equal to number of long audios ({len(long_audios)})" # noqa
print(f"\n{model_name} Long Audio Results:")
print(f" Total tests: {len(results)}")
print(f" Successful ONNX tests: {len(successful_tests)}")
for r in results:
# NOTE(xcsong): 0.5% is a reasonable miss rate for long audio, since we drop the last overlap part.
assert r[
'miss_rate'] < 0.5, f"Miss rate too high for {model_name}: {r['miss_rate']}%"
# The main requirement is that Torch always works
print(" ✅ Torch processing works reliably for all long audio")
if __name__ == "__main__":
# Run tests with pytest
pytest.main([__file__, "-v"])
| xingchensong/S3Tokenizer | 505 | Reverse Engineering of Supervised Semantic Speech Tokenizer (S3Tokenizer) proposed in CosyVoice | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/pretrain/emilia/path.sh | Shell | cuda_prefix=/usr/local
cache_prefix=/mnt/user-ssd/songxingchen/share
. ./parse_options.sh || exit 1;
if [ ! -d "${cuda_prefix}/cuda" ]; then
echo "Error: CUDA_HOME directory does not exist: ${cuda_prefix}/cuda"
exit 1
fi
if [ ! -d "${cache_prefix}" ]; then
echo "Error: cache_prefix directory does not exist: ${cache_prefix}"
exit 1
fi
# cuda related
export CUDA_HOME=${cuda_prefix}/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDAToolkit_ROOT_DIR=$CUDA_HOME
export CUDAToolkit_ROOT=$CUDA_HOME
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export CUDA_TOOLKIT_ROOT=$CUDA_HOME
export CUDA_BIN_PATH=$CUDA_HOME
export CUDA_PATH=$CUDA_HOME
export CUDA_INC_PATH=$CUDA_HOME/targets/x86_64-linux
export CFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CFLAGS
export CXXFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CXXFLAGS
export LDFLAGS=-L$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LDFLAGS
export CUDAToolkit_TARGET_DIR=$CUDA_HOME/targets/x86_64-linux
# python related
export TOUCHNET_DIR=$PWD/../../../..
export PATH=$PWD:$PATH
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=../../../../:$PYTHONPATH
# export TORCH_NCCL_BLOCKING_WAIT=1
# export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_TIMEOUT=1800000000
# export NCCL_LAUNCH_TIMEOUT=6000000000000
# export NCCL_SOCKET_TIMEOUT=3000000000000
# torch related
export TORCH_NCCL_AVOID_RECORD_STREAMS=1 # see https://github.com/pytorch/torchtitan/blob/main/docs/composability.md#setting-torch_nccl_avoid_record_streams1-for-tp
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export XDG_CACHE_HOME=${cache_prefix}/xdg
# huggingface related
export HF_HOME=${cache_prefix}/huggingface
export NUMBA_CACHE_DIR=${cache_prefix}/numba
export MPLCONFIGDIR=${cache_prefix}/matplotlib
echo "$0: CUDA_HOME: ${CUDA_HOME}"
echo "$0: HF_HOME: ${HF_HOME}"
echo "$0: TOUCHNET_DIR: ${TOUCHNET_DIR}"
echo "$0: XDG_CACHE_HOME: ${XDG_CACHE_HOME}"
echo "$0: NUMBA_CACHE_DIR: ${NUMBA_CACHE_DIR}"
echo "$0: MPLCONFIGDIR: ${MPLCONFIGDIR}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/pretrain/wenetspeech/parse_options.sh | Shell | #!/bin/bash
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey);
# Arnab Ghoshal, Karel Vesely
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
# MERCHANTABLITY OR NON-INFRINGEMENT.
# See the Apache 2 License for the specific language governing permissions and
# limitations under the License.
# Parse command-line options.
# To be sourced by another script (as in ". parse_options.sh").
# Option format is: --option-name arg
# and shell variable "option_name" gets set to value "arg."
# The exception is --help, which takes no arguments, but prints the
# $help_message variable (if defined).
###
### The --config file options have lower priority to command line
### options, so we need to import them first...
###
# Now import all the configs specified by command-line, in left-to-right order
for ((argpos=1; argpos<$#; argpos++)); do
if [ "${!argpos}" == "--config" ]; then
argpos_plus1=$((argpos+1))
config=${!argpos_plus1}
[ ! -r $config ] && echo "$0: missing config '$config'" && exit 1
. $config # source the config file.
fi
done
###
### No we process the command line options
###
while true; do
[ -z "${1:-}" ] && break; # break if there are no arguments
case "$1" in
# If the enclosing script is called with --help option, print the help
# message and exit. Scripts should put help messages in $help_message
--help|-h) if [ -z "$help_message" ]; then echo "No help found." 1>&2;
else printf "$help_message\n" 1>&2 ; fi;
exit 0 ;;
--*=*) echo "$0: options to scripts must be of the form --name value, got '$1'"
exit 1 ;;
# If the first command-line argument begins with "--" (e.g. --foo-bar),
# then work out the variable name as $name, which will equal "foo_bar".
--*) name=`echo "$1" | sed s/^--// | sed s/-/_/g`;
# Next we test whether the variable in question is undefned-- if so it's
# an invalid option and we die. Note: $0 evaluates to the name of the
# enclosing script.
# The test [ -z ${foo_bar+xxx} ] will return true if the variable foo_bar
# is undefined. We then have to wrap this test inside "eval" because
# foo_bar is itself inside a variable ($name).
eval '[ -z "${'$name'+xxx}" ]' && echo "$0: invalid option $1" 1>&2 && exit 1;
oldval="`eval echo \\$$name`";
# Work out whether we seem to be expecting a Boolean argument.
if [ "$oldval" == "true" ] || [ "$oldval" == "false" ]; then
was_bool=true;
else
was_bool=false;
fi
# Set the variable to the right value-- the escaped quotes make it work if
# the option had spaces, like --cmd "queue.pl -sync y"
eval $name=\"$2\";
# Check that Boolean-valued arguments are really Boolean.
if $was_bool && [[ "$2" != "true" && "$2" != "false" ]]; then
echo "$0: expected \"true\" or \"false\": $1 $2" 1>&2
exit 1;
fi
shift 2;
;;
*) break;
esac
done
# Check for an empty argument to the --cmd option, which can easily occur as a
# result of scripting errors.
[ ! -z "${cmd+xxx}" ] && [ -z "$cmd" ] && echo "$0: empty argument to --cmd option" 1>&2 && exit 1;
true; # so this script returns exit code 0.
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/pretrain/wenetspeech/path.sh | Shell | cuda_prefix=/usr/local
cache_prefix=/mnt/user-ssd/songxingchen/share
. ./parse_options.sh || exit 1;
if [ ! -d "${cuda_prefix}/cuda" ]; then
echo "Error: CUDA_HOME directory does not exist: ${cuda_prefix}/cuda"
exit 1
fi
if [ ! -d "${cache_prefix}" ]; then
echo "Error: cache_prefix directory does not exist: ${cache_prefix}"
exit 1
fi
# cuda related
export CUDA_HOME=${cuda_prefix}/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDAToolkit_ROOT_DIR=$CUDA_HOME
export CUDAToolkit_ROOT=$CUDA_HOME
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export CUDA_TOOLKIT_ROOT=$CUDA_HOME
export CUDA_BIN_PATH=$CUDA_HOME
export CUDA_PATH=$CUDA_HOME
export CUDA_INC_PATH=$CUDA_HOME/targets/x86_64-linux
export CFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CFLAGS
export CXXFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CXXFLAGS
export LDFLAGS=-L$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LDFLAGS
export CUDAToolkit_TARGET_DIR=$CUDA_HOME/targets/x86_64-linux
# python related
export TOUCHNET_DIR=$PWD/../../../..
export PATH=$PWD:$PATH
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=../../../../:$PYTHONPATH
# export TORCH_NCCL_BLOCKING_WAIT=1
# export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_TIMEOUT=1800000000
# export NCCL_LAUNCH_TIMEOUT=6000000000000
# export NCCL_SOCKET_TIMEOUT=3000000000000
# torch related
export TORCH_NCCL_AVOID_RECORD_STREAMS=1 # see https://github.com/pytorch/torchtitan/blob/main/docs/composability.md#setting-torch_nccl_avoid_record_streams1-for-tp
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export XDG_CACHE_HOME=${cache_prefix}/xdg
# huggingface related
export HF_HOME=${cache_prefix}/huggingface
export NUMBA_CACHE_DIR=${cache_prefix}/numba
export MPLCONFIGDIR=${cache_prefix}/matplotlib
echo "$0: CUDA_HOME: ${CUDA_HOME}"
echo "$0: HF_HOME: ${HF_HOME}"
echo "$0: TOUCHNET_DIR: ${TOUCHNET_DIR}"
echo "$0: XDG_CACHE_HOME: ${XDG_CACHE_HOME}"
echo "$0: NUMBA_CACHE_DIR: ${NUMBA_CACHE_DIR}"
echo "$0: MPLCONFIGDIR: ${MPLCONFIGDIR}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/pretrain/wenetspeech/run.sh | Shell | #!/bin/bash
# NOTE(xcsong): change xx_prefix and xx_version to ur setup
cache_prefix=/mnt/user-ssd/songxingchen/share
cuda_prefix=/usr/local
pretrained_weight_dir="" # for fromscratch training
# pretrained_weight_dir="/mnt/user-ssd/songxingchen/share/modelscope/Llama-3.2-1B-Instruct" # for continue pretrain
pretrained_tokenizer_dir="/mnt/user-ssd/songxingchen/share/modelscope/Llama-3.2-1B-Instruct"
if [ "${pretrained_weight_dir}" != "" ]; then
exp_suffix="frompretrain"
else
exp_suffix="fromscratch"
fi
# Automatically detect number of gpus
if command -v nvidia-smi &> /dev/null; then
num_gpus=$(nvidia-smi -L | wc -l)
gpu_list=$(seq -s, 0 $((num_gpus-1)))
else
num_gpus=-1
gpu_list="-1"
fi
# You can also manually specify CUDA_VISIBLE_DEVICES
# if you don't want to utilize all available GPU resources.
export CUDA_VISIBLE_DEVICES="${gpu_list}"
echo "$0: CUDA_VISIBLE_DEVICES is ${CUDA_VISIBLE_DEVICES}"
stage=1
stop_stage=2
# You should change the following two parameters for multiple machine training,
# see https://pytorch.org/docs/stable/elastic/run.html
HOST_NODE_ADDR="localhost:0"
num_nodes=1
job_id=2026
hf_data_repo="wenet-e2e/wenetspeech"
hf_data_name="default"
train_set=train_l
dev_set=dev
test_sets="test_net test_meeting"
param_dtype="bfloat16"
seed=2026
model_config=Llama-3_2-1B
tensorboard_dir=tensorboard
num_workers=12
prefetch=12
num_mel_bins=80
. ./parse_options.sh || exit 1;
. ./path.sh --cache_prefix ${cache_prefix} \
--cuda_prefix ${cuda_prefix} || exit 1
exp_id="wenetspeech_1x8192_noneac_cp1_tp1_dp8_pp1_stack5_stride4_flex_packloss_lagre1B_ar_std0.02_acc_normpreproc_wp2k_addpad_cb1024_emb16_${model_config}_${exp_suffix}_640k"
cp=$(echo $exp_id | grep -oP 'cp\d+' | grep -oP '\d+')
tp=$(echo $exp_id | grep -oP 'tp\d+' | grep -oP '\d+')
dp=$(echo $exp_id | grep -oP 'dp\d+' | grep -oP '\d+')
pp=$(echo $exp_id | grep -oP 'pp\d+' | grep -oP '\d+')
stack=$(echo $exp_id | grep -oP 'stack\d+' | grep -oP '\d+')
stride=$(echo $exp_id | grep -oP 'stride\d+' | grep -oP '\d+')
bs=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | head -n 1)
max_seq_len=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | tail -n 1)
echo "$0: ${exp_id}: cp=${cp}, tp=${tp}, dp=${dp}, pp=${pp}, stack=${stack}, stride=${stride}, bs=${bs}, max_seq_len=${max_seq_len}"
if [ ${stage} -le -1 ] && [ ${stop_stage} -ge -1 ]; then
echo "$0: stage -1: Data Download"
python download_wenetspeech.py
fi
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
for x in ${train_set} ${dev_set} ${test_sets}; do
if [ ! -f "data/${x}/data.list" ]; then
echo "$0: data/${x}/data.list does not exist. generate dataset."
mkdir -p data/${x}
python touchnet/bin/make_data.py \
--save_dir "data/${x}" \
--jsonl_path "/mnt/user-ssd/songxingchen/workspace/wenet/examples/wenetspeech/s0/data/${x}/data.list" \
--num_utt_per_shard 2000 \
--num_workers 64 \
--datatypes "audio+metainfo"
fi
done
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ] && [ "${pretrained_weight_dir}" != "" ]; then
echo "$0: Stage 1: create seed checkpoint for offline initialization"
rm -rf "exp/${exp_id}"
mkdir -p "exp/${exp_id}"
python touchnet/bin/convert_hf_to_dcp.py \
--ckpt_dir "exp/${exp_id}" \
--huggingface_model "${pretrained_weight_dir}"
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
echo "$0: Stage 2: start training"
echo "$0: num_nodes is $num_nodes, proc_per_node is $num_gpus"
# export TORCH_LOGS="+dynamo"
# export TORCHDYNAMO_VERBOSE=1
# FIXME(xcsong): Where to apply specaug ??? before quantize or after quantize
torchrun --nnodes=$num_nodes --nproc_per_node=$num_gpus \
--rdzv_id=$job_id --rdzv_backend="c10d" --rdzv_endpoint=$HOST_NODE_ADDR \
--local-ranks-filter "0" \
touchnet/bin/train.py \
--tokenizer_type "BestRQTokenizer" \
--tokenizer_bestrq_vocab_size 1024 \
--tokenizer_bestrq_input_size $(expr $stack \* $num_mel_bins) \
--tokenizer_bestrq_emb_size 16 \
--tokenizer_bestrq_init_seed ${seed} \
--tokenizer_bestrq_init_method "default" \
--datapipe_type "touch_audio" \
--datalist_path "data/${train_set}/data.list" \
--datalist_dev_path "data/${dev_set}/data.list" \
--datalist_sharding true \
--datalist_epoch 10000 \
--datalist_shuffling true \
--dataset_random_cut_audio false \
--dataset_random_cut_audio_min_length_in_ms 5000 \
--dataset_random_cut_audio_max_length_in_ms 3600000 \
--dataset_shuffling true \
--dataset_mmap true \
--dataset_batchsize ${bs} \
--dataset_audio_seqlen ${max_seq_len} \
--dataset_text_seqlen ${max_seq_len} \
--audio_max_length_in_ms_for_filter $(expr $max_seq_len \* $stride \* 10 - 200) \
--audio_min_length_in_ms_for_filter 200 \
--text_max_length_in_tokens_for_filter $(expr $max_seq_len - 1) \
--text_min_length_in_tokens_for_filter 1 \
--max_text_audio_ratio 1.0 \
--min_text_audio_ratio 0.0005 \
--audio_resample_rate 16000 \
--audio_speed_perturb true \
--audio_feat_type "fbank" \
--audiofeat_spec_aug false \
--audiofeat_spec_aug_num_t_mask 2 \
--audiofeat_spec_aug_num_f_mask 2 \
--audiofeat_spec_aug_max_t 50 \
--audiofeat_spec_aug_max_f 10 \
--audiofeat_spec_sub false \
--audiofeat_spec_sub_num_t_sub 3 \
--audiofeat_spec_sub_max_t 30 \
--audiofeat_spec_trim false \
--audiofeat_spec_trim_max_t 20 \
--audiofeat_num_mel_bins ${num_mel_bins} \
--audiofeat_frame_length 25 \
--audiofeat_frame_shift 10 \
--audiofeat_dither 0.0 \
--audiofeat_stack_length ${stack} \
--audiofeat_stride_length ${stride} \
--audiofeat_normalize true \
--dataloader_num_workers ${num_workers} \
--dataloader_prefetch_factor ${prefetch} \
--training_description "wenetspeech ssl" \
--training_seed "${seed}" \
--training_model_name "touch_audio" \
--training_model_config_path "config/${model_config}.json" \
--training_print_args true \
--training_trace_dump_folder "exp/${exp_id}" \
--training_fsdp_reshard_after_forward "default" \
--training_context_parallel_degree ${cp} \
--training_context_parallel_rotate_method "allgather" \
--training_tensor_parallel_degree ${tp} \
--training_enable_loss_parallel true \
--training_pipeline_parallel_degree ${pp} \
--training_pipeline_parallel_schedule "1F1B" \
--training_enable_ckpt true \
--training_ckpt_load_step -1 \
--training_ckpt_interval 2000 \
--training_ckpt_keep_latest_k 2 \
--training_log_freq 100 \
--training_enable_tensorboard true \
--training_save_tb_folder "tensorboard" \
--training_tb_rank_0_only true \
--training_mixed_precision_param "${param_dtype}" \
--training_mixed_precision_reduce "float32" \
--training_compile true \
--training_enable_compiled_autograd false \
--training_gc_freq 1000 \
--training_deterministic false \
--training_max_norm 5.0 \
--training_activation_checkpoint_mode "none" \
--training_activation_checkpoint_selective_ac_option "op" \
--training_enable_profiling true \
--training_profiling_traces_folder "profile_traces" \
--training_profiling_freq 100 \
--training_profiling_keep_first_k 10 \
--training_enable_memory_snapshot true \
--training_memory_snapshot_folder "memory_snapshot" \
--optimizer_name "AdamW" \
--optimizer_lr 8e-4 \
--optimizer_impl "fused" \
--lr_scheduler_steps 640000 \
--lr_scheduler_warmup_steps 2000 \
--lr_scheduler_decay_type "linear" \
--lr_scheduler_lr_min 0.0
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
echo "$0: Stage 3: convert dcp to huggingface-format"
python touchnet/bin/convert_dcp_to_hf.py \
--ckpt_dir "exp/${exp_id}" \
--step 260000 \
--config "config/${model_config}.json" \
--tokenizer_model "${pretrained_tokenizer_dir}"
fi
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/sft/asr/wenetspeech/local/extract_trans_and_pred.py | Python | import argparse
import json
import os
from tqdm import tqdm
def main(jsonl_path):
out_dir = os.path.dirname(jsonl_path)
trans_path = os.path.join(out_dir, 'trans.txt')
raw_rec_path = os.path.join(out_dir, 'raw_rec.txt')
with open(jsonl_path, 'r', encoding='utf-8') as fin, \
open(trans_path, 'w', encoding='utf-8') as ftrans, \
open(raw_rec_path, 'w', encoding='utf-8') as fraw:
all_keys = []
for line in tqdm(fin.readlines()):
line = line.strip()
if not line:
continue
try:
result = json.loads(line)
label = json.loads(result['label'])
key = label['key']
txt = label['txt']
predict = result['predict']
# 有些predict可能是字符串,有些是json字符串
if isinstance(predict, str):
try:
predict = json.loads(predict)
except Exception:
pass
# 如果predict是字典,优先取transcription,否则直接str(predict)
if isinstance(predict, dict) and 'transcription' in predict:
pred_txt = predict['transcription']
else:
pred_txt = str(predict)
if key not in all_keys:
all_keys.append(key)
ftrans.write(f"{key} {txt}\n")
if len(pred_txt.replace(' ', '')) > 0:
fraw.write(f"{key} {pred_txt}\n")
except Exception as e:
print(f"[WARN] 跳过异常行: {e}")
continue
print(f"已生成: {trans_path} 和 {raw_rec_path}")
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='从jsonl文件提取trans.txt和raw_rec.txt')
parser.add_argument('--jsonl', type=str, required=True, help='输入的jsonl文件路径')
args = parser.parse_args()
main(args.jsonl)
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/sft/asr/wenetspeech/parse_options.sh | Shell | #!/bin/bash
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey);
# Arnab Ghoshal, Karel Vesely
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
# MERCHANTABLITY OR NON-INFRINGEMENT.
# See the Apache 2 License for the specific language governing permissions and
# limitations under the License.
# Parse command-line options.
# To be sourced by another script (as in ". parse_options.sh").
# Option format is: --option-name arg
# and shell variable "option_name" gets set to value "arg."
# The exception is --help, which takes no arguments, but prints the
# $help_message variable (if defined).
###
### The --config file options have lower priority to command line
### options, so we need to import them first...
###
# Now import all the configs specified by command-line, in left-to-right order
for ((argpos=1; argpos<$#; argpos++)); do
if [ "${!argpos}" == "--config" ]; then
argpos_plus1=$((argpos+1))
config=${!argpos_plus1}
[ ! -r $config ] && echo "$0: missing config '$config'" && exit 1
. $config # source the config file.
fi
done
###
### No we process the command line options
###
while true; do
[ -z "${1:-}" ] && break; # break if there are no arguments
case "$1" in
# If the enclosing script is called with --help option, print the help
# message and exit. Scripts should put help messages in $help_message
--help|-h) if [ -z "$help_message" ]; then echo "No help found." 1>&2;
else printf "$help_message\n" 1>&2 ; fi;
exit 0 ;;
--*=*) echo "$0: options to scripts must be of the form --name value, got '$1'"
exit 1 ;;
# If the first command-line argument begins with "--" (e.g. --foo-bar),
# then work out the variable name as $name, which will equal "foo_bar".
--*) name=`echo "$1" | sed s/^--// | sed s/-/_/g`;
# Next we test whether the variable in question is undefned-- if so it's
# an invalid option and we die. Note: $0 evaluates to the name of the
# enclosing script.
# The test [ -z ${foo_bar+xxx} ] will return true if the variable foo_bar
# is undefined. We then have to wrap this test inside "eval" because
# foo_bar is itself inside a variable ($name).
eval '[ -z "${'$name'+xxx}" ]' && echo "$0: invalid option $1" 1>&2 && exit 1;
oldval="`eval echo \\$$name`";
# Work out whether we seem to be expecting a Boolean argument.
if [ "$oldval" == "true" ] || [ "$oldval" == "false" ]; then
was_bool=true;
else
was_bool=false;
fi
# Set the variable to the right value-- the escaped quotes make it work if
# the option had spaces, like --cmd "queue.pl -sync y"
eval $name=\"$2\";
# Check that Boolean-valued arguments are really Boolean.
if $was_bool && [[ "$2" != "true" && "$2" != "false" ]]; then
echo "$0: expected \"true\" or \"false\": $1 $2" 1>&2
exit 1;
fi
shift 2;
;;
*) break;
esac
done
# Check for an empty argument to the --cmd option, which can easily occur as a
# result of scripting errors.
[ ! -z "${cmd+xxx}" ] && [ -z "$cmd" ] && echo "$0: empty argument to --cmd option" 1>&2 && exit 1;
true; # so this script returns exit code 0.
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/sft/asr/wenetspeech/path.sh | Shell | cuda_prefix=/usr/local
cache_prefix=/mnt/user-ssd/songxingchen/share
. ./parse_options.sh || exit 1;
if [ ! -d "${cuda_prefix}/cuda" ]; then
echo "Error: CUDA_HOME directory does not exist: ${cuda_prefix}/cuda"
exit 1
fi
if [ ! -d "${cache_prefix}" ]; then
echo "Error: cache_prefix directory does not exist: ${cache_prefix}"
exit 1
fi
# cuda related
export CUDA_HOME=${cuda_prefix}/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDAToolkit_ROOT_DIR=$CUDA_HOME
export CUDAToolkit_ROOT=$CUDA_HOME
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export CUDA_TOOLKIT_ROOT=$CUDA_HOME
export CUDA_BIN_PATH=$CUDA_HOME
export CUDA_PATH=$CUDA_HOME
export CUDA_INC_PATH=$CUDA_HOME/targets/x86_64-linux
export CFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CFLAGS
export CXXFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CXXFLAGS
export LDFLAGS=-L$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LDFLAGS
export CUDAToolkit_TARGET_DIR=$CUDA_HOME/targets/x86_64-linux
# python related
export TOUCHNET_DIR=$PWD/../../../../..
export PATH=$PWD:$PATH
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=../../../../../:$PYTHONPATH
# export TORCH_NCCL_BLOCKING_WAIT=1
# export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_TIMEOUT=1800000000
# export NCCL_LAUNCH_TIMEOUT=6000000000000
# export NCCL_SOCKET_TIMEOUT=3000000000000
# torch related
export TORCH_NCCL_AVOID_RECORD_STREAMS=1 # see https://github.com/pytorch/torchtitan/blob/main/docs/composability.md#setting-torch_nccl_avoid_record_streams1-for-tp
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export XDG_CACHE_HOME=${cache_prefix}/xdg
# huggingface related
export HF_HOME=${cache_prefix}/huggingface
export NUMBA_CACHE_DIR=${cache_prefix}/numba
export MPLCONFIGDIR=${cache_prefix}/matplotlib
echo "$0: CUDA_HOME: ${CUDA_HOME}"
echo "$0: HF_HOME: ${HF_HOME}"
echo "$0: TOUCHNET_DIR: ${TOUCHNET_DIR}"
echo "$0: XDG_CACHE_HOME: ${XDG_CACHE_HOME}"
echo "$0: NUMBA_CACHE_DIR: ${NUMBA_CACHE_DIR}"
echo "$0: MPLCONFIGDIR: ${MPLCONFIGDIR}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/audio/sft/asr/wenetspeech/run.sh | Shell | #!/bin/bash
# NOTE(xcsong): change xx_prefix and xx_version to ur setup
cache_prefix=/mnt/user-ssd/songxingchen/share
cuda_prefix=/usr/local
# NOTE(xcsong): Qwen2-Audio-7B https://modelscope.cn/models/Qwen/Qwen2-Audio-7B
# pretrained_weight_dir="${cache_prefix}/modelscope/Qwen2-Audio-7B" # for fintuning
pretrained_weight_dir="" # for fromscratch training
pretrained_tokenizer_dir="${cache_prefix}/modelscope/Qwen2-Audio-7B"
pretrained_processor_dir="${cache_prefix}/modelscope/Qwen2-Audio-7B"
# NOTE(xcsong): Kimi-Audio-7B-Instruct https://www.modelscope.cn/models/xingchensong/Kimi-Audio-7B-Instruct-with-Tokenizer-Encoder
# pretrained_weight_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-Instruct-with-Tokenizer-Encoder"
# pretrained_tokenizer_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-Instruct-with-Tokenizer-Encoder"
# pretrained_processor_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-Instruct-with-Tokenizer-Encoder"
# NOTE(xcsong): Kimi-Audio-7B https://www.modelscope.cn/models/xingchensong/Kimi-Audio-7B-with-Tokenizer-Encoder
# pretrained_weight_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-with-Tokenizer-Encoder"
# pretrained_tokenizer_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-with-Tokenizer-Encoder"
# pretrained_processor_dir="${cache_prefix}/modelscope/Kimi-Audio-7B-with-Tokenizer-Encoder"
if [ "${pretrained_weight_dir}" != "" ]; then
exp_suffix="frompretrain"
else
exp_suffix="fromscratch"
fi
# Automatically detect number of gpus
if command -v nvidia-smi &> /dev/null; then
num_gpus=$(nvidia-smi -L | wc -l)
gpu_list=$(seq -s, 0 $((num_gpus-1)))
else
num_gpus=-1
gpu_list="-1"
fi
# You can also manually specify CUDA_VISIBLE_DEVICES
# if you don't want to utilize all available GPU resources.
export CUDA_VISIBLE_DEVICES="${gpu_list}"
echo "$0: CUDA_VISIBLE_DEVICES is ${CUDA_VISIBLE_DEVICES}"
stage=1
stop_stage=2
job_id=2026
hf_data_repo="wenet-e2e/wenetspeech"
hf_data_name="default"
train_set=train_l
dev_set=dev
test_sets="test_net test_meeting"
param_dtype="bfloat16"
seed=2025
tensorboard_dir=tensorboard
num_workers=12
prefetch=12
activation_checkpoint_mode="full"
audio_max_length_in_ms_for_filter=30000 # 30s
liger=false
compile=true
if [[ "${pretrained_tokenizer_dir}" == *"Qwen2-Audio-7B"* ]]; then
bs=2
max_seq_len=8192
model_type="qwen2_audio"
model_config="Qwen2-Audio-7B"
pack=false
if [[ "${exp_suffix}" == "frompretrain" ]]; then
num_nodes=1 # NOTE(xcsong): for sft, 1 node with 8 gpus (80GB memory) is enough
HOST_NODE_ADDR="localhost:0"
lr=2e-5
lr_scheduler_steps=30000
lr_scheduler_warmup_steps=2000
elif [[ "${exp_suffix}" == "fromscratch" ]]; then
num_nodes=4 # NOTE(xcsong): for from scratch training, we need 4 nodes with 8 gpus per node (80GB memory)
HOST_NODE_ADDR="xx.xx.xx.xx:9901" # NOTE(xcsong): change to your master ip, https://pytorch.org/docs/stable/elastic/run.html
lr=2e-4
lr_scheduler_steps=30000
lr_scheduler_warmup_steps=2000
fi
elif [[ "${pretrained_tokenizer_dir}" == *"Kimi-Audio-7B"* ]]; then
bs=1
max_seq_len=8192
model_type="kimi_audio"
if [[ "${pretrained_tokenizer_dir}" == *"Kimi-Audio-7B-Instruct"* ]]; then
model_config="Kimi-Audio-7B-Instruct"
else
model_config="Kimi-Audio-7B"
fi
pack=false
if [[ "${exp_suffix}" == "frompretrain" ]]; then
num_nodes=4
HOST_NODE_ADDR="xx.xx.xx.xx:9901" # NOTE(xcsong): change to your master ip, https://pytorch.org/docs/stable/elastic/run.html
lr=2e-5
lr_scheduler_steps=30000
lr_scheduler_warmup_steps=2000
elif [[ "${exp_suffix}" == "fromscratch" ]]; then
echo "fromscratch is not supported for Kimi-Audio"
exit 1
fi
else
num_nodes=4
HOST_NODE_ADDR="xx.xx.xx.xx:9901" # NOTE(xcsong): change to your master ip, https://pytorch.org/docs/stable/elastic/run.html
bs=2
max_seq_len=8192
model_type="touch_audio"
model_config="Touch-Audio-7B"
stack=13
stride=12
pack=true
echo "TODO(xcsong): recipe for Touch-Audio-7B"
exit 1
fi
datapipe_type=${model_type}
checkpoint_step=${lr_scheduler_steps}
. ./parse_options.sh || exit 1;
. ./path.sh --cache_prefix ${cache_prefix} \
--cuda_prefix ${cuda_prefix} || exit 1
git config --global --add safe.directory $(realpath ../../../../../)
commit=$(git rev-parse HEAD | cut -c 1-7)
exp_id="nodes${num_nodes}_wenetspeech_${bs}x${max_seq_len}_cp1_tp1_dp8_pp1_lr${lr}_wp${lr_scheduler_warmup_steps}_total${lr_scheduler_steps}_${model_config}_filter${audio_max_length_in_ms_for_filter}_${exp_suffix}_${commit}_ac${activation_checkpoint_mode}_liger${liger}"
cp=$(echo $exp_id | grep -oP 'cp\d+' | grep -oP '\d+')
tp=$(echo $exp_id | grep -oP 'tp\d+' | grep -oP '\d+')
dp=$(echo $exp_id | grep -oP 'dp\d+' | grep -oP '\d+')
pp=$(echo $exp_id | grep -oP 'pp\d+' | grep -oP '\d+')
echo "================================================"
echo "$0: exp_id: ${exp_id}"
echo "$0: chosen_model=${model_config}, activation_checkpoint_mode=${activation_checkpoint_mode}"
echo "$0: num_nodes=${num_nodes}, cp=${cp}, tp=${tp}, dp=${dp}, pp=${pp}, bs=${bs}, max_seq_len=${max_seq_len}, liger=${liger}"
echo "================================================"
if [ ${stage} -le -1 ] && [ ${stop_stage} -ge -1 ]; then
echo "================================================"
echo "$0: stage -1: Data Download"
echo "================================================"
python download_wenetspeech.py
fi
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
for x in ${train_set} ${dev_set} ${test_sets}; do
if [ ! -f "data/${x}/data.list" ]; then
echo "================================================"
echo "$0: data/${x}/data.list does not exist. generate dataset."
echo "================================================"
mkdir -p data/${x}
python touchnet/bin/make_data.py \
--save_dir "data/${x}" \
--jsonl_path "/mnt/user-ssd/songxingchen/workspace/wenet/examples/wenetspeech/s0/data/${x}/data.list" \
--num_utt_per_shard 2000 \
--num_workers 64 \
--datatypes "audio+metainfo"
fi
done
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ] && [ "${pretrained_weight_dir}" != "" ]; then
echo "================================================"
echo "$0: Stage 1: create seed checkpoint for offline initialization"
echo "================================================"
mkdir -p "exp/${exp_id}"
python touchnet/bin/convert_hf_to_dcp.py \
--ckpt_dir "exp/${exp_id}" \
--model_type "${model_type}" \
--training_model_config_path "config/${model_config}.json" \
--huggingface_model "${pretrained_weight_dir}"
cp "config/${model_config}.json" "exp/${exp_id}/model_config.json"
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
echo "================================================"
echo "$0: Stage 2: start training"
echo "$0: num_nodes is $num_nodes, proc_per_node is $num_gpus"
echo "================================================"
# export TORCH_LOGS="+dynamo"
# export TORCHDYNAMO_VERBOSE=1
torchrun --nnodes=$num_nodes --nproc_per_node=$num_gpus \
--rdzv_id=$job_id --rdzv_backend="c10d" --rdzv_endpoint=$HOST_NODE_ADDR \
--local-ranks-filter "0,1,2,3,4,5,6,7" \
touchnet/bin/train.py \
--tokenizer_model "${pretrained_tokenizer_dir}" \
--tokenizer_type "HuggingFaceTokenizer" \
--processor_model "${pretrained_processor_dir}" \
--datapipe_type "${datapipe_type}" \
--datalist_path "data/${train_set}/data.list" \
--datalist_dev_path "data/${dev_set}/data.list" \
--datalist_sharding true \
--datalist_epoch 1000 \
--datalist_shuffling true \
--dataset_enable_pack ${pack} \
--dataset_shuffling true \
--dataset_mmap true \
--dataset_batchsize ${bs} \
--dataset_audio_seqlen ${max_seq_len} \
--dataset_text_seqlen ${max_seq_len} \
--audio_max_length_in_ms_for_filter ${audio_max_length_in_ms_for_filter} \
--audio_min_length_in_ms_for_filter 200 \
--text_max_length_in_tokens_for_filter $(expr $max_seq_len - 1) \
--text_min_length_in_tokens_for_filter 1 \
--dataloader_num_workers ${num_workers} \
--dataloader_prefetch_factor ${prefetch} \
--training_init_timeout_seconds 300 \
--training_description "wenetspeech asr, ${model_type}" \
--training_seed "${seed}" \
--training_model_name "${model_type}" \
--training_model_config_path "config/${model_config}.json" \
--training_print_args true \
--training_trace_dump_folder "exp/${exp_id}" \
--training_fsdp_reshard_after_forward "default" \
--training_data_parallel_replicate_degree ${num_nodes} \
--training_context_parallel_degree ${cp} \
--training_context_parallel_rotate_method "allgather" \
--training_tensor_parallel_degree ${tp} \
--training_enable_loss_parallel true \
--training_pipeline_parallel_degree ${pp} \
--training_pipeline_parallel_schedule "1F1B" \
--training_enable_ckpt true \
--training_ckpt_load_step -1 \
--training_ckpt_interval 2000 \
--training_ckpt_keep_latest_k 2 \
--training_log_freq 100 \
--training_enable_tensorboard true \
--training_save_tb_folder "tensorboard" \
--training_tb_rank_0_only true \
--training_mixed_precision_param "${param_dtype}" \
--training_mixed_precision_reduce "float32" \
--training_compile ${compile} \
--training_enable_liger_kernel ${liger} \
--training_enable_compiled_autograd false \
--training_gc_freq 1000 \
--training_deterministic false \
--training_max_norm 5.0 \
--training_activation_checkpoint_mode "${activation_checkpoint_mode}" \
--training_activation_checkpoint_selective_ac_option "op" \
--training_enable_profiling true \
--training_profiling_traces_folder "profile_traces" \
--training_profiling_freq 100 \
--training_profiling_keep_first_k 2 \
--training_enable_memory_snapshot true \
--training_memory_snapshot_folder "memory_snapshot" \
--optimizer_name "AdamW" \
--optimizer_lr ${lr} \
--optimizer_impl "fused" \
--lr_scheduler_steps ${lr_scheduler_steps} \
--lr_scheduler_warmup_steps ${lr_scheduler_warmup_steps} \
--lr_scheduler_decay_type "linear" \
$(if [ "${model_type}" = "touch_audio" ]; then
echo "--lr_scheduler_lr_min 0.0 \
--max_text_audio_ratio 1.0 \
--min_text_audio_ratio 0.0005 \
--audio_resample_rate 16000 \
--audio_speed_perturb true \
--audio_feat_type fbank \
--audiofeat_spec_aug true \
--audiofeat_spec_aug_num_t_mask 2 \
--audiofeat_spec_aug_num_f_mask 2 \
--audiofeat_spec_aug_max_t 50 \
--audiofeat_spec_aug_max_f 10 \
--audiofeat_spec_sub true \
--audiofeat_spec_sub_num_t_sub 3 \
--audiofeat_spec_sub_max_t 30 \
--audiofeat_spec_trim false \
--audiofeat_spec_trim_max_t 20 \
--audiofeat_num_mel_bins 80 \
--audiofeat_frame_length 25 \
--audiofeat_frame_shift 10 \
--audiofeat_dither 0.0 \
--audiofeat_stack_length ${stack} \
--audiofeat_stride_length ${stride} \
--audiofeat_normalize true"
else
echo "--lr_scheduler_lr_min 0.0"
fi)
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
echo "================================================"
echo "$0: Stage 3: convert dcp to huggingface-format"
echo "================================================"
python touchnet/bin/convert_dcp_to_hf.py \
--ckpt_dir "exp/${exp_id}" \
--step "${checkpoint_step}" \
--config "exp/${exp_id}/model_config.json" \
--model_type "${model_type}" \
--tokenizer_model "${pretrained_tokenizer_dir}"
fi
if [ ${stage} -le 4 ] && [ ${stop_stage} -ge 4 ]; then
if [[ "${exp_id}" == *"Kimi-Audio-7B"* ]]; then
dtypes="float32"
else
dtypes="bfloat16"
fi
for model_dtype in ${dtypes}; do
if [ "${model_dtype}" = "bfloat16" ]; then
batch_size=16
elif [ "${model_dtype}" = "float32" ]; then
batch_size=1
else
echo "Unsupported model_dtype: ${model_dtype}"
exit 1
fi
for data_type in ${test_sets}; do
if [ "${model_type}" = "touch_audio" ]; then
instruct=""
else
instruct="Generate the transcription:"
fi
model_path="exp/${exp_id}/checkpoint_hf/step-${checkpoint_step}"
output_dir="${model_path}/inference_result/${data_type}.${model_dtype}"
echo "================================================"
echo "$0: data_type: ${data_type}"
echo "$0: model_dtype: ${model_dtype}"
echo "$0: batch_size: ${batch_size}"
echo "$0: model_path: ${model_path}"
echo "$0: output_dir: ${output_dir}"
echo "$0: instruct: ${instruct}"
echo "================================================"
torchrun --nproc_per_node=8 --nnodes=1 \
--rdzv_id=2025 --rdzv_backend="c10d" --rdzv_endpoint="localhost:8899" \
--local-ranks-filter "0" \
touchnet/models/${model_type}/inference_${model_type}.py \
--model_path "${model_path}" \
--model_dtype "${model_dtype}" \
--instruct "${instruct}" \
--data_list data/${data_type}/data.list.raw \
--output_dir "${output_dir}" \
--batch_size ${batch_size} \
--inference_enable_liger_kernel ${liger} \
--num_workers 16 \
--prefetch 8
cat ${output_dir}/part* > ${output_dir}/final.jsonl
python local/extract_trans_and_pred.py --jsonl "${output_dir}/final.jsonl"
# NOTE(xcsong): we use SPEECHIO-style wer calculator
rm -f ${output_dir}/ref.txt
echo "$0 --> Normalizing REF text ..."
python touchnet/bin/textnorm_zh.py --format=ark \
--to_upper --to_banjiao --remove_fillers --remove_erhua \
${output_dir}/trans.txt ${output_dir}/ref.txt
rm -f ${output_dir}/rec.txt
echo "$0 --> Normalizing HYP text ..."
# add "--cc_mode=t2s" option if charset is traditional
# (e.g. whisper & google USM model)
python touchnet/bin/textnorm_zh.py --format=ark \
--to_upper --to_banjiao --remove_fillers --remove_erhua \
${output_dir}/raw_rec.txt ${output_dir}/rec.txt
grep -v $'\t$' ${output_dir}/rec.txt > ${output_dir}/rec_non_empty.txt
tokenizer=char
python touchnet/bin/error_rate_zh \
--tokenizer ${tokenizer} \
--ref ${output_dir}/ref.txt \
--hyp ${output_dir}/rec_non_empty.txt \
${output_dir}/DETAILS.txt | tee ${output_dir}/RESULTS.txt
done
done
fi
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/allenai_c4/download_c4.py | Python | import os
from datasets import DownloadConfig, load_dataset
hf_data_repo = "allenai/c4"
hf_data_name = "en"
download_config = DownloadConfig(
num_proc=12,
max_retries=1200,
)
# English only
signal = 1
while signal:
try:
# 305GB, 156B tokens
# ref: https://mp.weixin.qq.com/s?__biz=MjM5ODExNDA2MA==&mid=2449950449&idx=1&sn=dcafccb19ef913e905a5b6479a570fe3&chksm=b13c4092864bc9846be7a8bcb2f90d83e40ec5bd6c7e45c8c01f97d884f69213f93c586c9d6c#rd # noqa
datas = load_dataset(f"{hf_data_repo}", f"{hf_data_name}", download_config=download_config)
# multilingual (mC4): 9.7TB (108 subsets, one per language), ~6T tokens
# datas = load_dataset(f"{hf_data_repo}", "multilingual", download_config=download_config)
signal = 0
except Exception as ex:
pass
HF_HOME = os.environ.get("HF_HOME", "/bucket/output/jfs-hdfs/user/xingchen.song/share/huggingface")
prefix = f"{HF_HOME}/datasets/converted_jsonl_for_touchnet"
for key in datas.keys():
# 'train': 364868892
# 'validation': 364608
data = datas[key]
print(f"num_samples of {hf_data_repo}/{hf_data_name}[{key}]: {len(data)}")
num_bytes = data.info.splits[key].num_bytes
# shard data for every 10GB
num_shards = num_bytes // (10 * 1024 * 1024 * 1024) + 1
print(f"num_shards of {hf_data_repo}/{hf_data_name}[{key}]: {num_shards}")
for i in range(num_shards):
data.shard(num_shards, i, writer_batch_size=100000).to_json(
path_or_buf=f"{prefix}/{hf_data_repo}/{hf_data_name}/{key}-{i:05d}-of-{num_shards:05d}.jsonl",
batch_size=100000,
num_proc=32,
)
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/allenai_c4/parse_options.sh | Shell | #!/bin/bash
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey);
# Arnab Ghoshal, Karel Vesely
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
# MERCHANTABLITY OR NON-INFRINGEMENT.
# See the Apache 2 License for the specific language governing permissions and
# limitations under the License.
# Parse command-line options.
# To be sourced by another script (as in ". parse_options.sh").
# Option format is: --option-name arg
# and shell variable "option_name" gets set to value "arg."
# The exception is --help, which takes no arguments, but prints the
# $help_message variable (if defined).
###
### The --config file options have lower priority to command line
### options, so we need to import them first...
###
# Now import all the configs specified by command-line, in left-to-right order
for ((argpos=1; argpos<$#; argpos++)); do
if [ "${!argpos}" == "--config" ]; then
argpos_plus1=$((argpos+1))
config=${!argpos_plus1}
[ ! -r $config ] && echo "$0: missing config '$config'" && exit 1
. $config # source the config file.
fi
done
###
### No we process the command line options
###
while true; do
[ -z "${1:-}" ] && break; # break if there are no arguments
case "$1" in
# If the enclosing script is called with --help option, print the help
# message and exit. Scripts should put help messages in $help_message
--help|-h) if [ -z "$help_message" ]; then echo "No help found." 1>&2;
else printf "$help_message\n" 1>&2 ; fi;
exit 0 ;;
--*=*) echo "$0: options to scripts must be of the form --name value, got '$1'"
exit 1 ;;
# If the first command-line argument begins with "--" (e.g. --foo-bar),
# then work out the variable name as $name, which will equal "foo_bar".
--*) name=`echo "$1" | sed s/^--// | sed s/-/_/g`;
# Next we test whether the variable in question is undefned-- if so it's
# an invalid option and we die. Note: $0 evaluates to the name of the
# enclosing script.
# The test [ -z ${foo_bar+xxx} ] will return true if the variable foo_bar
# is undefined. We then have to wrap this test inside "eval" because
# foo_bar is itself inside a variable ($name).
eval '[ -z "${'$name'+xxx}" ]' && echo "$0: invalid option $1" 1>&2 && exit 1;
oldval="`eval echo \\$$name`";
# Work out whether we seem to be expecting a Boolean argument.
if [ "$oldval" == "true" ] || [ "$oldval" == "false" ]; then
was_bool=true;
else
was_bool=false;
fi
# Set the variable to the right value-- the escaped quotes make it work if
# the option had spaces, like --cmd "queue.pl -sync y"
eval $name=\"$2\";
# Check that Boolean-valued arguments are really Boolean.
if $was_bool && [[ "$2" != "true" && "$2" != "false" ]]; then
echo "$0: expected \"true\" or \"false\": $1 $2" 1>&2
exit 1;
fi
shift 2;
;;
*) break;
esac
done
# Check for an empty argument to the --cmd option, which can easily occur as a
# result of scripting errors.
[ ! -z "${cmd+xxx}" ] && [ -z "$cmd" ] && echo "$0: empty argument to --cmd option" 1>&2 && exit 1;
true; # so this script returns exit code 0.
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/allenai_c4/path.sh | Shell | cuda_prefix=/usr/local
cache_prefix=/mnt/user-ssd/songxingchen/share
. ./parse_options.sh || exit 1;
if [ ! -d "${cuda_prefix}/cuda" ]; then
echo "Error: CUDA_HOME directory does not exist: ${cuda_prefix}/cuda"
exit 1
fi
if [ ! -d "${cache_prefix}" ]; then
echo "Error: cache_prefix directory does not exist: ${cache_prefix}"
exit 1
fi
# cuda related
export CUDA_HOME=${cuda_prefix}/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDAToolkit_ROOT_DIR=$CUDA_HOME
export CUDAToolkit_ROOT=$CUDA_HOME
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export CUDA_TOOLKIT_ROOT=$CUDA_HOME
export CUDA_BIN_PATH=$CUDA_HOME
export CUDA_PATH=$CUDA_HOME
export CUDA_INC_PATH=$CUDA_HOME/targets/x86_64-linux
export CFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CFLAGS
export CXXFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CXXFLAGS
export LDFLAGS=-L$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LDFLAGS
export CUDAToolkit_TARGET_DIR=$CUDA_HOME/targets/x86_64-linux
# python related
export TOUCHNET_DIR=$PWD/../../../..
export PATH=$PWD:$PATH
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=../../../../:$PYTHONPATH
# export TORCH_NCCL_BLOCKING_WAIT=1
# export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_TIMEOUT=1800000000
# export NCCL_LAUNCH_TIMEOUT=6000000000000
# export NCCL_SOCKET_TIMEOUT=3000000000000
# torch related
export TORCH_NCCL_AVOID_RECORD_STREAMS=1 # see https://github.com/pytorch/torchtitan/blob/main/docs/composability.md#setting-torch_nccl_avoid_record_streams1-for-tp
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export XDG_CACHE_HOME=${cache_prefix}/xdg
# huggingface related
export HF_HOME=${cache_prefix}/huggingface
export NUMBA_CACHE_DIR=${cache_prefix}/numba
export MPLCONFIGDIR=${cache_prefix}/matplotlib
echo "$0: CUDA_HOME: ${CUDA_HOME}"
echo "$0: HF_HOME: ${HF_HOME}"
echo "$0: TOUCHNET_DIR: ${TOUCHNET_DIR}"
echo "$0: XDG_CACHE_HOME: ${XDG_CACHE_HOME}"
echo "$0: NUMBA_CACHE_DIR: ${NUMBA_CACHE_DIR}"
echo "$0: MPLCONFIGDIR: ${MPLCONFIGDIR}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/allenai_c4/run.sh | Shell | #!/bin/bash
# NOTE(xcsong): change xx_prefix and xx_version to ur setup
cache_prefix=/mnt/user-ssd/songxingchen/share
cuda_prefix=/usr/local
pretrained_weight_dir="" # for fromscratch training
# pretrained_weight_dir="/bucket/output/jfs-hdfs/user/xingchen.song/share/modelscope/Llama-3.2-1B-Instruct" # for continue pretrain
pretrained_tokenizer_dir="/bucket/output/jfs-hdfs/user/xingchen.song/share/modelscope/Llama-3.2-1B-Instruct"
if [ "${pretrained_weight_dir}" != "" ]; then
exp_suffix="frompretrain"
else
exp_suffix="fromscratch"
fi
# Automatically detect number of gpus
if command -v nvidia-smi &> /dev/null; then
num_gpus=$(nvidia-smi -L | wc -l)
gpu_list=$(seq -s, 0 $((num_gpus-1)))
else
num_gpus=-1
gpu_list="-1"
fi
# You can also manually specify CUDA_VISIBLE_DEVICES
# if you don't want to utilize all available GPU resources.
export CUDA_VISIBLE_DEVICES="${gpu_list}"
echo "$0: CUDA_VISIBLE_DEVICES is ${CUDA_VISIBLE_DEVICES}"
stage=1
stop_stage=2
# You should change the following two parameters for multiple machine training,
# see https://pytorch.org/docs/stable/elastic/run.html
HOST_NODE_ADDR="localhost:0"
num_nodes=1
job_id=2026
hf_data_repo="allenai/c4"
hf_data_name="en"
train_set=train
dev_set=validation
test_sets= # c4 has no test set
param_dtype="bfloat16"
seed=2025
model_config=Llama-3_2-1B
tensorboard_dir=tensorboard
num_workers=12
prefetch=12
. ./parse_options.sh || exit 1;
. ./path.sh --cache_prefix ${cache_prefix} \
--cuda_prefix ${cuda_prefix} || exit 1
exp_id="c4.en_1x16384_fullac_cp1_tp1_dp8_pp1_flex_packloss_tieemb_linear2K1M_${model_config}_${exp_suffix}"
cp=$(echo $exp_id | grep -oP 'cp\d+' | grep -oP '\d+')
tp=$(echo $exp_id | grep -oP 'tp\d+' | grep -oP '\d+')
dp=$(echo $exp_id | grep -oP 'dp\d+' | grep -oP '\d+')
pp=$(echo $exp_id | grep -oP 'pp\d+' | grep -oP '\d+')
bs=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | head -n 1)
max_seq_len=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | tail -n 1)
echo "$0: ${exp_id}: cp=${cp}, tp=${tp}, dp=${dp}, pp=${pp}, bs=${bs}, max_seq_len=${max_seq_len}"
if [ ${stage} -le -1 ] && [ ${stop_stage} -ge -1 ]; then
echo "$0: stage -1: Data Download"
python download_c4.py
fi
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
for x in ${train_set} ${dev_set} ${test_sets}; do
if [ ! -f "data/${x}/data.list" ]; then
echo "$0: data/${x}/data.list does not exist. generate dataset."
mkdir -p data/${x}
find "${HF_HOME}/datasets/converted_jsonl_for_touchnet/${hf_data_repo}/${hf_data_name}/" \
-maxdepth 1 \
-type f \
-name "${x}*jsonl" \
-print0 | \
while IFS= read -r -d $'\0' text; do
echo "$0: processing ${text}"
mkdir -p "data/${x}/$(basename $text)"
python touchnet/bin/make_data.py \
--save_dir "data/${x}/$(basename $text)" \
--jsonl_path "${text}" \
--tokenizer_model "${pretrained_tokenizer_dir}" \
--tokenizer_type "HuggingFaceTokenizer" \
--num_utt_per_shard 2000 \
--num_workers 16 \
--datatypes "texttoken"
done
cat data/${x}/*/data.list > data/${x}/data.list
fi
done
for x in ${dev_set}; do
# NOTE(xcsong): we only use 20 lists for dev set, this is to speed up validation.
if [ ! -f "data/${x}/data.list.head20" ]; then
echo "$0: data/${x}/data.list.head20 does not exist. generate it."
mkdir -p data/${x}
shuf data/${x}/data.list | head -20 > data/${x}/data.list.head20
fi
done
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ] && [ "${pretrained_weight_dir}" != "" ]; then
echo "$0: Stage 1: create seed checkpoint for offline initialization"
rm -rf "exp/${exp_id}"
mkdir -p "exp/${exp_id}"
python touchnet/bin/convert_hf_to_dcp.py \
--ckpt_dir "exp/${exp_id}" \
--huggingface_model "${pretrained_weight_dir}"
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
echo "$0: Stage 2: start training"
echo "$0: num_nodes is $num_nodes, proc_per_node is $num_gpus"
# export TORCH_LOGS="+dynamo"
# export TORCHDYNAMO_VERBOSE=1
torchrun --nnodes=$num_nodes --nproc_per_node=$num_gpus \
--rdzv_id=$job_id --rdzv_backend="c10d" --rdzv_endpoint=$HOST_NODE_ADDR \
--local-ranks-filter "0" \
touchnet/bin/train.py \
--tokenizer_model "${pretrained_tokenizer_dir}" \
--tokenizer_type "HuggingFaceTokenizer" \
--datalist_path "data/${train_set}/data.list" \
--datalist_dev_path "data/${dev_set}/data.list.head20" \
--datalist_sharding true \
--datalist_epoch 10000 \
--datalist_shuffling true \
--dataset_shuffling true \
--dataset_mmap true \
--dataset_batchsize ${bs} \
--dataset_text_seqlen ${max_seq_len} \
--text_max_length_in_tokens_for_filter $(expr $max_seq_len - 2) \
--text_min_length_in_tokens_for_filter 1 \
--dataloader_num_workers ${num_workers} \
--dataloader_prefetch_factor ${prefetch} \
--training_description "allenai c4.en" \
--training_seed "${seed}" \
--training_model_name "llama" \
--training_model_config_path "config/${model_config}.json" \
--training_print_args true \
--training_trace_dump_folder "exp/${exp_id}" \
--training_fsdp_reshard_after_forward "default" \
--training_context_parallel_degree ${cp} \
--training_context_parallel_rotate_method "allgather" \
--training_tensor_parallel_degree ${tp} \
--training_enable_loss_parallel true \
--training_pipeline_parallel_degree ${pp} \
--training_pipeline_parallel_schedule "1F1B" \
--training_enable_ckpt true \
--training_ckpt_load_step -1 \
--training_ckpt_interval 2000 \
--training_ckpt_keep_latest_k 2 \
--training_log_freq 100 \
--training_enable_tensorboard true \
--training_save_tb_folder "tensorboard" \
--training_tb_rank_0_only true \
--training_mixed_precision_param "${param_dtype}" \
--training_mixed_precision_reduce "float32" \
--training_compile true \
--training_enable_compiled_autograd false \
--training_gc_freq 1000 \
--training_deterministic false \
--training_max_norm 1.0 \
--training_activation_checkpoint_mode "full" \
--training_activation_checkpoint_selective_ac_option "op" \
--training_enable_profiling true \
--training_profiling_traces_folder "profile_traces" \
--training_profiling_freq 100 \
--training_profiling_keep_first_k 10 \
--training_enable_memory_snapshot true \
--training_memory_snapshot_folder "memory_snapshot" \
--optimizer_name "AdamW" \
--optimizer_lr 8e-4 \
--optimizer_impl "fused" \
--lr_scheduler_steps 1000000 \
--lr_scheduler_warmup_steps 2000 \
--lr_scheduler_decay_type "linear" \
--lr_scheduler_lr_min 0.0
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
echo "$0: Stage 3: convert dcp to huggingface-format"
python touchnet/bin/convert_dcp_to_hf.py \
--ckpt_dir "exp/${exp_id}" \
--step 1000000 \
--config "config/${model_config}.json" \
--tokenizer_model "${pretrained_tokenizer_dir}"
fi
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/fineweb-edu/download_fineweb-edu.py | Python | import os
from datasets import DownloadConfig, load_dataset
hf_data_repo = "HuggingFaceFW/fineweb-edu"
hf_data_name = "default"
download_config = DownloadConfig(
num_proc=12,
max_retries=1200,
)
# English only
signal = 1
while signal:
try:
# 9.74TB, 1.3T tokens
data = load_dataset(
f"{hf_data_repo}",
name=f"{hf_data_name}",
split="train",
download_config=download_config
)
signal = 0
except Exception as ex:
pass
HF_HOME = os.environ.get("HF_HOME", "/bucket/output/jfs-hdfs/user/xingchen.song/share/huggingface")
prefix = f"{HF_HOME}/datasets/converted_jsonl_for_touchnet"
key = "train"
# num_samples: 1426200851
print(f"num_samples of {hf_data_repo}/{hf_data_name}[{key}]: {len(data)}")
num_bytes = data.info.splits[key].num_bytes
# shard data for every 10GB
num_shards = num_bytes // (10 * 1024 * 1024 * 1024) + 1
# num_shards: 681
print(f"num_shards of {hf_data_repo}/{hf_data_name}[{key}]: {num_shards}")
for i in range(num_shards):
data.shard(num_shards, i, writer_batch_size=100000).to_json(
path_or_buf=f"{prefix}/{hf_data_repo}/{hf_data_name}/{key}-{i:05d}-of-{num_shards:05d}.jsonl",
batch_size=100000,
num_proc=16,
)
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/fineweb-edu/parse_options.sh | Shell | #!/bin/bash
# Copyright 2012 Johns Hopkins University (Author: Daniel Povey);
# Arnab Ghoshal, Karel Vesely
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# THIS CODE IS PROVIDED *AS IS* BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, EITHER EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION ANY IMPLIED
# WARRANTIES OR CONDITIONS OF TITLE, FITNESS FOR A PARTICULAR PURPOSE,
# MERCHANTABLITY OR NON-INFRINGEMENT.
# See the Apache 2 License for the specific language governing permissions and
# limitations under the License.
# Parse command-line options.
# To be sourced by another script (as in ". parse_options.sh").
# Option format is: --option-name arg
# and shell variable "option_name" gets set to value "arg."
# The exception is --help, which takes no arguments, but prints the
# $help_message variable (if defined).
###
### The --config file options have lower priority to command line
### options, so we need to import them first...
###
# Now import all the configs specified by command-line, in left-to-right order
for ((argpos=1; argpos<$#; argpos++)); do
if [ "${!argpos}" == "--config" ]; then
argpos_plus1=$((argpos+1))
config=${!argpos_plus1}
[ ! -r $config ] && echo "$0: missing config '$config'" && exit 1
. $config # source the config file.
fi
done
###
### No we process the command line options
###
while true; do
[ -z "${1:-}" ] && break; # break if there are no arguments
case "$1" in
# If the enclosing script is called with --help option, print the help
# message and exit. Scripts should put help messages in $help_message
--help|-h) if [ -z "$help_message" ]; then echo "No help found." 1>&2;
else printf "$help_message\n" 1>&2 ; fi;
exit 0 ;;
--*=*) echo "$0: options to scripts must be of the form --name value, got '$1'"
exit 1 ;;
# If the first command-line argument begins with "--" (e.g. --foo-bar),
# then work out the variable name as $name, which will equal "foo_bar".
--*) name=`echo "$1" | sed s/^--// | sed s/-/_/g`;
# Next we test whether the variable in question is undefned-- if so it's
# an invalid option and we die. Note: $0 evaluates to the name of the
# enclosing script.
# The test [ -z ${foo_bar+xxx} ] will return true if the variable foo_bar
# is undefined. We then have to wrap this test inside "eval" because
# foo_bar is itself inside a variable ($name).
eval '[ -z "${'$name'+xxx}" ]' && echo "$0: invalid option $1" 1>&2 && exit 1;
oldval="`eval echo \\$$name`";
# Work out whether we seem to be expecting a Boolean argument.
if [ "$oldval" == "true" ] || [ "$oldval" == "false" ]; then
was_bool=true;
else
was_bool=false;
fi
# Set the variable to the right value-- the escaped quotes make it work if
# the option had spaces, like --cmd "queue.pl -sync y"
eval $name=\"$2\";
# Check that Boolean-valued arguments are really Boolean.
if $was_bool && [[ "$2" != "true" && "$2" != "false" ]]; then
echo "$0: expected \"true\" or \"false\": $1 $2" 1>&2
exit 1;
fi
shift 2;
;;
*) break;
esac
done
# Check for an empty argument to the --cmd option, which can easily occur as a
# result of scripting errors.
[ ! -z "${cmd+xxx}" ] && [ -z "$cmd" ] && echo "$0: empty argument to --cmd option" 1>&2 && exit 1;
true; # so this script returns exit code 0.
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/fineweb-edu/path.sh | Shell | cuda_prefix=/usr/local
cache_prefix=/mnt/user-ssd/songxingchen/share
. ./parse_options.sh || exit 1;
if [ ! -d "${cuda_prefix}/cuda" ]; then
echo "Error: CUDA_HOME directory does not exist: ${cuda_prefix}/cuda"
exit 1
fi
if [ ! -d "${cache_prefix}" ]; then
echo "Error: cache_prefix directory does not exist: ${cache_prefix}"
exit 1
fi
# cuda related
export CUDA_HOME=${cuda_prefix}/cuda
export PATH=$CUDA_HOME/bin:$PATH
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-""}
export LD_LIBRARY_PATH=$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$CUDA_HOME/extras/CUPTI/lib64:$LD_LIBRARY_PATH
export CUDAToolkit_ROOT_DIR=$CUDA_HOME
export CUDAToolkit_ROOT=$CUDA_HOME
export CUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME
export CUDA_TOOLKIT_ROOT=$CUDA_HOME
export CUDA_BIN_PATH=$CUDA_HOME
export CUDA_PATH=$CUDA_HOME
export CUDA_INC_PATH=$CUDA_HOME/targets/x86_64-linux
export CFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CFLAGS
export CXXFLAGS=-I$CUDA_HOME/targets/x86_64-linux/include:$CXXFLAGS
export LDFLAGS=-L$CUDA_HOME/lib64:$CUDA_HOME/lib64/stubs:/usr/lib:/usr/lib64:$LDFLAGS
export CUDAToolkit_TARGET_DIR=$CUDA_HOME/targets/x86_64-linux
# python related
export TOUCHNET_DIR=$PWD/../../../..
export PATH=$PWD:$PATH
export PYTHONIOENCODING=UTF-8
export PYTHONPATH=../../../../:$PYTHONPATH
# export TORCH_NCCL_BLOCKING_WAIT=1
# export TORCH_NCCL_ASYNC_ERROR_HANDLING=1
# export NCCL_TIMEOUT=1800000000
# export NCCL_LAUNCH_TIMEOUT=6000000000000
# export NCCL_SOCKET_TIMEOUT=3000000000000
# torch related
export TORCH_NCCL_AVOID_RECORD_STREAMS=1 # see https://github.com/pytorch/torchtitan/blob/main/docs/composability.md#setting-torch_nccl_avoid_record_streams1-for-tp
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
export XDG_CACHE_HOME=${cache_prefix}/xdg
# huggingface related
export HF_HOME=${cache_prefix}/huggingface
export NUMBA_CACHE_DIR=${cache_prefix}/numba
export MPLCONFIGDIR=${cache_prefix}/matplotlib
echo "$0: CUDA_HOME: ${CUDA_HOME}"
echo "$0: HF_HOME: ${HF_HOME}"
echo "$0: TOUCHNET_DIR: ${TOUCHNET_DIR}"
echo "$0: XDG_CACHE_HOME: ${XDG_CACHE_HOME}"
echo "$0: NUMBA_CACHE_DIR: ${NUMBA_CACHE_DIR}"
echo "$0: MPLCONFIGDIR: ${MPLCONFIGDIR}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
examples/text/pretrain/fineweb-edu/run.sh | Shell | #!/bin/bash
# NOTE(xcsong): change xx_prefix and xx_version to ur setup
cache_prefix=/mnt/user-ssd/songxingchen/share
cuda_prefix=/usr/local
pretrained_weight_dir="" # for fromscratch training
# pretrained_weight_dir="/bucket/output/jfs-hdfs/user/xingchen.song/share/modelscope/Llama-3.2-1B-Instruct" # for continue pretrain
pretrained_tokenizer_dir="/bucket/output/jfs-hdfs/user/xingchen.song/share/modelscope/Llama-3.2-1B-Instruct"
if [ "${pretrained_weight_dir}" != "" ]; then
exp_suffix="frompretrain"
else
exp_suffix="fromscratch"
fi
# Automatically detect number of gpus
if command -v nvidia-smi &> /dev/null; then
num_gpus=$(nvidia-smi -L | wc -l)
gpu_list=$(seq -s, 0 $((num_gpus-1)))
else
num_gpus=-1
gpu_list="-1"
fi
# You can also manually specify CUDA_VISIBLE_DEVICES
# if you don't want to utilize all available GPU resources.
export CUDA_VISIBLE_DEVICES="${gpu_list}"
echo "$0: CUDA_VISIBLE_DEVICES is ${CUDA_VISIBLE_DEVICES}"
stage=1
stop_stage=2
# You should change the following two parameters for multiple machine training,
# see https://pytorch.org/docs/stable/elastic/run.html
HOST_NODE_ADDR="localhost:0"
num_nodes=1
job_id=2026
hf_data_repo="HuggingFaceFW/fineweb-edu"
hf_data_name="default"
train_set=train
dev_set= # fineweb-edu has no validation set, use c4.validation instead
test_sets= # fineweb-edu has no test set
param_dtype="bfloat16"
seed=2025
model_config=Llama-3_2-1B
tensorboard_dir=tensorboard
num_workers=12
prefetch=12
. ./parse_options.sh || exit 1;
. ./path.sh --cache_prefix ${cache_prefix} \
--cuda_prefix ${cuda_prefix} || exit 1
exp_id="fineweb-edu_1B_1x8192_fullac_cp1_tp1_dp8_pp1_flex_packloss_tieemb_linear2K1M_fixdev_head20_acc_${model_config}_${exp_suffix}"
cp=$(echo $exp_id | grep -oP 'cp\d+' | grep -oP '\d+')
tp=$(echo $exp_id | grep -oP 'tp\d+' | grep -oP '\d+')
dp=$(echo $exp_id | grep -oP 'dp\d+' | grep -oP '\d+')
pp=$(echo $exp_id | grep -oP 'pp\d+' | grep -oP '\d+')
bs=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | head -n 1)
max_seq_len=$(echo $exp_id | grep -oP '\d+x\d+' | grep -oP '\d+' | tail -n 1)
echo "$0: ${exp_id}: cp=${cp}, tp=${tp}, dp=${dp}, pp=${pp}, bs=${bs}, max_seq_len=${max_seq_len}"
if [ ${stage} -le -1 ] && [ ${stop_stage} -ge -1 ]; then
echo "$0: stage -1: Data Download"
python download_fineweb-edu.py
fi
if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
for x in ${train_set} ${dev_set} ${test_sets}; do
if [ ! -f "data/${x}/data.list" ]; then
echo "$0: data/${x}/data.list does not exist. generate dataset."
mkdir -p data/${x}
find "${HF_HOME}/datasets/converted_jsonl_for_touchnet/${hf_data_repo}/${hf_data_name}/" \
-maxdepth 1 \
-type f \
-name "${x}*jsonl" \
-print0 | \
while IFS= read -r -d $'\0' text; do
echo "$0: processing ${text}"
mkdir -p "data/${x}/$(basename $text)"
python touchnet/bin/make_data.py \
--save_dir "data/${x}/$(basename $text)" \
--jsonl_path "${text}" \
--tokenizer_model "${pretrained_tokenizer_dir}" \
--tokenizer_type "HuggingFaceTokenizer" \
--num_utt_per_shard 20000 \
--num_workers 16 \
--datatypes "texttoken"
done
cat data/${x}/*/data.list > data/${x}/data.list
fi
done
fi
if [ ${stage} -le 1 ] && [ ${stop_stage} -ge 1 ] && [ "${pretrained_weight_dir}" != "" ]; then
echo "$0: Stage 1: create seed checkpoint for offline initialization"
rm -rf "exp/${exp_id}"
mkdir -p "exp/${exp_id}"
python touchnet/bin/convert_hf_to_dcp.py \
--ckpt_dir "exp/${exp_id}" \
--huggingface_model "${pretrained_weight_dir}"
fi
if [ ${stage} -le 2 ] && [ ${stop_stage} -ge 2 ]; then
echo "$0: Stage 2: start training"
echo "$0: num_nodes is $num_nodes, proc_per_node is $num_gpus"
# export TORCH_LOGS="+dynamo"
# export TORCHDYNAMO_VERBOSE=1
torchrun --nnodes=$num_nodes --nproc_per_node=$num_gpus \
--rdzv_id=$job_id --rdzv_backend="c10d" --rdzv_endpoint=$HOST_NODE_ADDR \
--local-ranks-filter "0" \
touchnet/bin/train.py \
--tokenizer_model "${pretrained_tokenizer_dir}" \
--tokenizer_type "HuggingFaceTokenizer" \
--datalist_path "data/${train_set}/data.list" \
--datalist_dev_path "data/${dev_set}/data.list.head20" \
--datalist_sharding true \
--datalist_epoch 10000 \
--datalist_shuffling true \
--dataset_shuffling true \
--dataset_mmap true \
--dataset_batchsize ${bs} \
--dataset_text_seqlen ${max_seq_len} \
--text_max_length_in_tokens_for_filter $(expr $max_seq_len - 2) \
--text_min_length_in_tokens_for_filter 1 \
--dataloader_num_workers ${num_workers} \
--dataloader_prefetch_factor ${prefetch} \
--training_description "fineweb-edu" \
--training_seed "${seed}" \
--training_model_name "llama" \
--training_model_config_path "config/${model_config}.json" \
--training_print_args true \
--training_trace_dump_folder "exp/${exp_id}" \
--training_fsdp_reshard_after_forward "default" \
--training_context_parallel_degree ${cp} \
--training_context_parallel_rotate_method "allgather" \
--training_tensor_parallel_degree ${tp} \
--training_enable_loss_parallel true \
--training_pipeline_parallel_degree ${pp} \
--training_pipeline_parallel_schedule "1F1B" \
--training_enable_ckpt true \
--training_ckpt_load_step -1 \
--training_ckpt_interval 500 \
--training_ckpt_keep_latest_k 2 \
--training_log_freq 1 \
--training_enable_tensorboard true \
--training_save_tb_folder "tensorboard" \
--training_tb_rank_0_only true \
--training_mixed_precision_param "${param_dtype}" \
--training_mixed_precision_reduce "float32" \
--training_compile true \
--training_enable_compiled_autograd false \
--training_gc_freq 500 \
--training_deterministic false \
--training_max_norm 1.0 \
--training_activation_checkpoint_mode "full" \
--training_activation_checkpoint_selective_ac_option "op" \
--training_enable_profiling true \
--training_profiling_traces_folder "profile_traces" \
--training_profiling_freq 100 \
--training_profiling_keep_first_k 10 \
--training_enable_memory_snapshot true \
--training_memory_snapshot_folder "memory_snapshot" \
--optimizer_name "AdamW" \
--optimizer_lr 8e-4 \
--optimizer_impl "fused" \
--lr_scheduler_steps 1000000 \
--lr_scheduler_warmup_steps 2000 \
--lr_scheduler_decay_type "linear" \
--lr_scheduler_lr_min 0.0
fi
if [ ${stage} -le 3 ] && [ ${stop_stage} -ge 3 ]; then
echo "$0: Stage 3: convert dcp to huggingface-format"
python touchnet/bin/convert_dcp_to_hf.py \
--ckpt_dir "exp/${exp_id}" \
--step 1000000 \
--config "config/${model_config}.json" \
--tokenizer_model "${pretrained_tokenizer_dir}"
fi
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
install_cuda_cudnn.sh | Shell | #!/bin/bash
# Copyright [2024-04-09] <sxc19@mails.tsinghua.edu.cn, Xingchen Song>
cuda_version=12.6.3
driver_version=560.35.05
cudnn_version=9.5.1.17
prefix=/bucket/output/jfs-hdfs/user/xingchen.song/tools/cuda
echo "start download cuda ${cuda_version} & cudnn ${cudnn_version}"
wget https://developer.download.nvidia.com/compute/cuda/${cuda_version}/local_installers/cuda_${cuda_version}_${driver_version}_linux.run
wget https://developer.download.nvidia.com/compute/cudnn/redist/cudnn/linux-x86_64/cudnn-linux-x86_64-${cudnn_version}_cuda12-archive.tar.xz
echo "end download cuda ${cuda_version} & cudnn ${cudnn_version}"
echo "start install cuda ${cuda_version}"
tmp_dir=${prefix}/tmp
rm -rf ${tmp_dir}
mkdir -p ${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version}
mkdir -p ${tmp_dir}
./cuda_${cuda_version}_${driver_version}_linux.run \
--silent \
--toolkit \
--installpath=${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version} \
--no-opengl-libs \
--no-drm \
--no-man-page \
--tmpdir=${tmp_dir}
echo "end install cuda ${cuda_version}"
echo "start install cudnn ${cudnn_version}"
rm -rf ${tmp_dir}
mkdir -p ${tmp_dir}
tar xvf cudnn-linux-x86_64-${cudnn_version}_cuda12-archive.tar.xz --strip-components=1 -C ${tmp_dir}
cp ${tmp_dir}/include/cudnn* ${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version}/include
cp ${tmp_dir}/lib/libcudnn* ${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version}/lib64
chmod a+r ${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version}/include/cudnn*.h ${prefix}/cuda-${cuda_version}_cudnn-${cudnn_version}/lib64/libcudnn*
echo "end install cudnn ${cudnn_version}"
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/bin/test_make_data.py | Python | import json
import subprocess
import pytest
import torch
import torchaudio
from touchnet.data import DataConfig
from touchnet.data.datapipe import LowLevelTouchDatapipe
@pytest.fixture
def run_shell():
def _run(cmd, check=True):
return subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
check=check
)
return _run
@pytest.mark.parametrize("num, expected_md5", [
(1, "05fe272d67459992748bbf5720c5a92e"),
(2, "93245372eca0dce2013c1e5bd393f17f")
])
def test_make_data(run_shell, num, expected_md5):
result = run_shell(
f"""
python touchnet/bin/make_data.py \
--save_dir tests/tmp/{num}sample_per_shard \
--jsonl_path tests/assets/dataset/data.jsonl \
--num_utt_per_shard {num} \
--audio_resample 16000 \
--num_workers 1 \
--datatypes 'audio+metainfo'
"""
)
assert result.returncode == 0
md5 = run_shell(
f"""find tests/tmp/{num}sample_per_shard \\( -name "*.idx" -o -name "*.bin" \\) -type f -exec md5sum {{}} \\; | sort | cut -d ' ' -f1 | md5sum | awk '{{print $1}}'""" # noqa
)
assert md5.stdout.strip() == expected_md5
orig_data = {}
with open("tests/assets/dataset/data.jsonl", "r") as f:
for line in f.readlines():
data = json.loads(line.strip())
orig_data[data['key']] = data
data_config = DataConfig()
data_config.datalist_path = f"tests/tmp/{num}sample_per_shard/data.list"
data_config.datalist_shuffling = False
data_config.datalist_sharding = False
data_config.datalist_epoch = 1
data_config.dataset_shuffling = False
data_config.audio_speed_perturb = False
data_config.audiofeat_spec_aug = False
data_config.audiofeat_spec_sub = False
data_config.audiofeat_spec_trim = False
data_config.audiofeat_dither = 0.0
datapipe = LowLevelTouchDatapipe(data_config, 0, 1)
for data in datapipe:
key = data['key']
assert key in orig_data
assert data['wav'] == orig_data[key]['wav']
assert data['txt'] == orig_data[key]['txt']
orig_waveform = torchaudio.load(data['wav'])[0]
waveform = data['waveform']
assert torch.allclose(orig_waveform, waveform)
run_shell(f"rm -rf tests/tmp/{num}sample_per_shard")
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/data/test_dataloader.py | Python | import os
import numpy
import pytest
import torch
from touchnet.bin.make_data import DataBuilder
from touchnet.data import DataConfig
from touchnet.data.dataloader import ParallelAwareDataloader
from touchnet.data.datapipe import LowLevelTouchDatapipe
def build_fake_data(nnodes, nproc_per_node, max_epoch):
total_number = nnodes * nproc_per_node * max_epoch
shards_list = []
for i in range(0, nnodes * nproc_per_node):
path_prefix = f"tests/tmp/fake_data_{total_number}/shards_{i}"
os.makedirs(path_prefix, exist_ok=True)
builders = {
"texttoken": DataBuilder(f"{path_prefix}/texttoken.bin",
numpy.uint16)
}
for j in range(0, max_epoch):
builders["texttoken"].add_item(torch.IntTensor([i * max_epoch + j]))
# documents contain only one sentence.
builders["texttoken"].end_document()
builders["texttoken"].finalize(f"{path_prefix}/texttoken.idx")
shards_list.append(path_prefix)
with open(f"tests/tmp/fake_data_{total_number}/data.list", "w", encoding="utf8") as fout:
for name in shards_list:
fout.write(f"{name} texttoken\n")
# TODO(xcsong): support breal_point for num_workers > 1
@pytest.mark.parametrize("nnodes, nproc_per_node, max_epoch, num_workers, dp_rank, dp_worldsize, break_point", [
(4, 8, 6, 1, 3, 8, 5),
(4, 8, 6, 1, 3, 8, 12),
(4, 8, 6, 1, 3, 8, 24),
(4, 8, 6, 0, 3, 8, 12),
(4, 8, 6, 0, 3, 8, 15),
(4, 8, 6, 1, 3, 8, -1),
(1, 8, 6, 4, 1, 2, -1),
(1, 8, 6, 2, 1, 4, -1),
(4, 8, 6, 4, 3, 8, -1),
(2, 8, 6, 4, 0, 2, -1),
])
def test_dataloader(nnodes, nproc_per_node, max_epoch, num_workers, dp_rank, dp_worldsize, break_point):
if num_workers > 0:
assert (nnodes * nproc_per_node) % (dp_worldsize * num_workers) == 0
assert nnodes * nproc_per_node * max_epoch // dp_worldsize >= break_point
total_number = nnodes * nproc_per_node * max_epoch
build_fake_data(nnodes, nproc_per_node, max_epoch)
config = DataConfig(datalist_path=f"tests/tmp/fake_data_{total_number}/data.list",
datalist_sharding=True,
datalist_shuffling=False,
dataset_shuffling=False,
dataset_mmap=True)
datapipe = LowLevelTouchDatapipe(config, dp_rank, dp_worldsize)
dataloader = ParallelAwareDataloader(
dataset=datapipe,
dp_rank=dp_rank,
dp_world_size=dp_worldsize,
batch_size=None,
num_workers=num_workers,
pin_memory=True,
prefetch_factor=4 if num_workers > 0 else None,
)
state_dict = {}
loaded_data = []
for i, data in enumerate(dataloader):
if i == break_point:
state_dict = dataloader.state_dict()
break
input_ids = data["input_ids"]
assert len(input_ids) == 1
loaded_data.append(input_ids[0])
del dataloader, datapipe
# resume from mid-checkpoint
if len(state_dict.keys()) > 0:
datapipe = LowLevelTouchDatapipe(config, dp_rank, dp_worldsize)
dataloader = ParallelAwareDataloader(
dataset=datapipe,
dp_rank=dp_rank,
dp_world_size=dp_worldsize,
batch_size=None,
num_workers=num_workers,
pin_memory=True,
prefetch_factor=4 if num_workers > 0 else None,
)
print(state_dict)
for k in state_dict:
if "dp_rank" in k:
print(state_dict[k])
dataloader.load_state_dict(state_dict)
for i, data in enumerate(dataloader):
input_ids = data["input_ids"]
assert len(input_ids) == 1
loaded_data.append(input_ids[0])
loaded_data = numpy.array(loaded_data, dtype=numpy.int32)
expected_data = numpy.array([i for i in range(0, total_number)],
dtype=numpy.int32).reshape(-1, max_epoch)
expected_data = expected_data[dp_rank::dp_worldsize, :]
if num_workers > 0:
buffer = []
for i in range(num_workers):
tmp_data = expected_data[i::num_workers, :].reshape(1, -1)
if tmp_data.shape[-1] > 0:
buffer.append(tmp_data)
expected_data = numpy.concatenate(buffer, axis=0).transpose()
expected_data = expected_data.reshape(-1)
assert numpy.allclose(loaded_data, expected_data)
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/models/test_llama.py | Python | import os
import subprocess
from multiprocessing import Manager
import pytest
import torch
import torch.distributed.checkpoint as dcp
from torch import distributed as dist
from transformers import AutoConfig, AutoModelForCausalLM
from touchnet.bin import TrainConfig
from touchnet.utils.distributed import ParallelDims
from touchnet.utils.train_spec import get_train_spec
@pytest.fixture
def run_shell():
def _run(cmd, check=True):
return subprocess.run(
cmd,
shell=True,
capture_output=True,
text=True,
check=check
)
return _run
def tiny_eval(parallel_dims: ParallelDims, folder: str, shard_folder: str):
world_mesh = parallel_dims.build_mesh(device_type="cpu")
train_spec = get_train_spec("llama")
model_config = AutoConfig.from_pretrained("tests/assets/config/tiny_llama.json",
attn_implementation="eager")
model_config.return_dict = False # NOTE: for compatibility with pipeline parallel
with torch.device("meta"):
shard_model = AutoModelForCausalLM.from_config(model_config)
shard_model.apply(lambda m: setattr(m, "_is_hf_initialized", False))
job_config = TrainConfig()
job_config.training_compile = False
train_spec.parallelize_fn(shard_model, world_mesh, parallel_dims, job_config)
shard_model.to_empty(device="cpu")
with torch.no_grad():
shard_model.post_init()
train_spec.additional_post_init_fn(shard_model, "cpu")
shard_model.eval()
# Load weights from un-shard ckpt
dcp.load({"model": shard_model.state_dict()}, checkpoint_id=folder)
# Save weights to shard ckpt
dcp.save({"model": shard_model.state_dict()}, checkpoint_id=shard_folder)
return True
def run_distributed(func, world_size, *args):
with Manager() as manager:
results = manager.list([None] * world_size)
torch.multiprocessing.spawn(
_dist_worker,
args=(func, world_size, args, results),
nprocs=world_size
)
return results
def _dist_worker(rank, func, world_size, args, results):
torch.distributed.init_process_group(
backend='gloo',
init_method='tcp://127.0.0.1:29505',
world_size=world_size,
rank=rank
)
try:
result = func(*args)
results[rank] = result
finally:
dist.barrier()
dist.destroy_process_group()
# TODO(xcsong): support PP?
@pytest.mark.parametrize("world_size, dp, pp, cp, tp", [
(2, 1, 1, 1, 2),
(8, 8, 1, 1, 1),
(8, 2, 1, 4, 1),
(8, 4, 1, 2, 1),
(8, 2, 1, 2, 2),
])
def test_llama(run_shell, world_size, dp, pp, cp, tp):
# NOTE(xcsong): cpu does not support sdpa or flexatt
model_config = AutoConfig.from_pretrained("tests/assets/config/tiny_llama.json",
attn_implementation="eager")
model_config.return_dict = False # NOTE: for compatibility with pipeline parallel
model = AutoModelForCausalLM.from_config(model_config)
with torch.no_grad():
model.post_init()
model.eval()
folder = "tests/tmp/checkpoint/step-0"
run_shell(f"rm -rf {folder}")
os.makedirs(folder, exist_ok=True)
dcp.save({"model": model.state_dict()}, checkpoint_id=folder)
batch_size = 8
max_len = 8
assert max_len % cp == 0
assert batch_size % dp == 0
input_ids = torch.randint(low=0, high=model_config.vocab_size, size=(batch_size, max_len))
position_ids = torch.arange(start=0, end=max_len, step=1, dtype=torch.int64).unsqueeze(0).repeat(batch_size, 1)
with torch.no_grad():
results = model(
input_ids=input_ids,
position_ids=position_ids,
)[0].float().cpu().numpy()
parallel_dims = ParallelDims(
dp_shard=dp, dp_replicate=1, cp=cp, tp=tp, pp=pp,
world_size=world_size, enable_loss_parallel=True,
)
shard_folder = "tests/tmp/checkpoint/step-0-sharded"
run_shell(f"rm -rf {shard_folder}")
all_inputs = (parallel_dims, folder, shard_folder)
run_distributed(tiny_eval, world_size, *all_inputs)
train_spec = get_train_spec("llama")
with torch.device("meta"):
new_model = AutoModelForCausalLM.from_config(model_config)
new_model.apply(lambda m: setattr(m, "_is_hf_initialized", False))
new_model.to_empty(device="cpu")
with torch.no_grad():
new_model.post_init()
train_spec.additional_post_init_fn(new_model, "cpu")
# Load weights from shard ckpt
dcp.load({"model": new_model.state_dict()}, checkpoint_id=shard_folder)
new_model.eval()
with torch.no_grad():
new_results = new_model(
input_ids=input_ids,
position_ids=position_ids,
)[0].float().cpu().numpy()
assert results == pytest.approx(new_results, abs=1e-6)
run_shell(f"rm -rf {folder}")
run_shell(f"rm -rf {shard_folder}")
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/utils/distributed_cpu.py | Python | import os
import torch
from transformers.hf_argparser import HfArgumentParser
from touchnet.bin import TrainConfig
from touchnet.utils.distributed import ParallelDims
from touchnet.utils.logging import init_logger
init_logger()
parser = HfArgumentParser(TrainConfig)
job_config = parser.parse_args_into_dataclasses()[0]
# init distributed
world_size = int(os.environ["WORLD_SIZE"])
parallel_dims = ParallelDims(
dp_shard=job_config.training_data_parallel_shard_degree,
dp_replicate=job_config.training_data_parallel_replicate_degree,
cp=job_config.training_context_parallel_degree,
tp=job_config.training_tensor_parallel_degree,
pp=1,
world_size=world_size,
enable_loss_parallel=job_config.training_enable_loss_parallel,
)
torch.distributed.init_process_group(backend="gloo")
world_mesh = parallel_dims.build_mesh(device_type="cpu")
if parallel_dims.dp_enabled:
dp_mesh = world_mesh["dp"]
dp_degree, dp_rank = dp_mesh.size(), dp_mesh.get_local_rank()
else:
dp_degree, dp_rank = 1, 0
if parallel_dims.tp_enabled:
tp_mesh = world_mesh["tp"]
tp_degree, tp_rank = tp_mesh.size(), tp_mesh.get_local_rank()
else:
tp_degree, tp_rank = 1, 0
if parallel_dims.cp_enabled:
cp_mesh = world_mesh["cp"]
cp_degree, cp_rank = cp_mesh.size(), cp_mesh.get_local_rank()
else:
cp_degree, cp_rank = 1, 0
rank = torch.distributed.get_rank()
world_size = torch.distributed.get_world_size()
local_rank = int(os.environ["LOCAL_RANK"])
print(f"""rank={rank}, world_size={world_size}, local_rank={local_rank},
dp_degree={dp_degree}, dp_rank={dp_rank},
tp_degree={tp_degree}, tp_rank={tp_rank},
cp_degree={cp_degree}, cp_rank={cp_rank}""")
torch.distributed.destroy_process_group()
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/utils/test_distributed_cpu.py | Python | import subprocess
import time
import pytest
def is_port_open(host, port):
import socket
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
s.settimeout(2)
return s.connect_ex((host, port)) == 0
# @pytest.mark.parametrize("master_port, nnodes, nproc_per_node, dp_shard, dp_replicate, cp, tp, lp", [
# (29500, 4, 8, -1, 2, 2, 4, True),
# (29501, 4, 8, -1, 1, 4, 2, False),
# (29502, 4, 8, -1, 1, 2, 4, True),
# ])
@pytest.mark.skip(reason="太吃机器资源,离线测试就行")
def test_distributed_cpu(master_port, nnodes, nproc_per_node, dp_shard, dp_replicate, cp, tp, lp):
master_addr = "127.0.0.1"
processes = []
master_cmd = [
"torchrun",
"--nnodes", str(nnodes),
"--nproc_per_node", str(nproc_per_node),
"--node_rank", "0",
"--master_addr", master_addr,
"--master_port", str(master_port),
"--rdzv_endpoint", f"{master_addr}:{master_port}",
"--rdzv_backend", "c10d",
"tests/touchnet/utils/distributed_cpu.py",
"--training_data_parallel_shard_degree", str(dp_shard),
"--training_data_parallel_replicate_degree", str(dp_replicate),
"--training_context_parallel_degree", str(cp),
"--training_tensor_parallel_degree", str(tp),
"--training_enable_loss_parallel", str(lp),
]
processes.append(subprocess.Popen(master_cmd))
while not is_port_open(master_addr, master_port):
time.sleep(1)
for node_rank in range(1, nnodes):
cmd = [
"torchrun",
"--nnodes", str(nnodes),
"--nproc_per_node", str(nproc_per_node),
"--node_rank", str(node_rank),
"--master_addr", master_addr,
"--master_port", str(master_port),
"--rdzv_endpoint", f"{master_addr}:{master_port}",
"--rdzv_backend", "c10d",
"tests/touchnet/utils/distributed_cpu.py",
"--training_data_parallel_shard_degree", str(dp_shard),
"--training_data_parallel_replicate_degree", str(dp_replicate),
"--training_context_parallel_degree", str(cp),
"--training_tensor_parallel_degree", str(tp),
"--training_enable_loss_parallel", str(lp),
]
processes.append(subprocess.Popen(cmd))
for p in processes:
p.wait()
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
tests/touchnet/utils/test_pack_loss.py | Python | from multiprocessing import Manager
import pytest
import torch
import torch.nn as nn
from torch import distributed as dist
from torch.distributed.nn.functional import all_gather
def calc_batch_dp_loss(batch_input_ids=None, batch_labels=None):
"""
Calculate loss using data parallelism (batch splitting).
Args:
batch_input_ids: Tensor of shape [batch, length, vocab] containing logits
batch_labels: Tensor of shape [batch, length] containing target indices
Returns:
float: The average loss across all processes
"""
batch_input_ids = batch_input_ids.detach().clone() # [batch, length, vocab]
batch_labels = batch_labels.detach().clone() # [batch, length]
world_size = dist.get_world_size()
rank = dist.get_rank()
assert len(batch_input_ids) % world_size == 0
# Split data in batch-dim to simulate data parallel
batch_input_ids = torch.split(
batch_input_ids,
len(batch_input_ids) // world_size, dim=0
)[rank]
batch_labels = torch.split(
batch_labels,
len(batch_labels) // world_size, dim=0
)[rank]
vocab = batch_input_ids.size(-1)
loss_fn = nn.CrossEntropyLoss(reduction='none', ignore_index=-100)
batch_size = batch_input_ids.size(0)
loss = loss_fn(batch_input_ids.reshape(-1, vocab), batch_labels.reshape(-1))
# 1. reduce loss over sentences
loss = loss.reshape(batch_size, -1).sum(dim=1) / ((batch_labels != -100).sum(dim=1).float() + 1e-12)
# 2. reduce loss over batches
loss = loss.mean()
# 3. reduce loss over dp
dist.all_reduce(loss, op=dist.ReduceOp.SUM)
loss = loss / world_size
print(f"rank {rank}: {loss.item()}")
return loss.item()
def calc_pack_sp_loss(pack_input_ids=None, pack_labels=None, num_tokens=None):
"""
Calculate loss using (packed) sequence parallelism (sequence splitting).
Args:
pack_input_ids: Tensor of shape [length, vocab] containing logits
pack_labels: Tensor of shape [length] containing target indices
num_tokens: number of tokens for each sentence
Returns:
float: The average loss across all processes
"""
# NOTE(xcsong): In pack mode, we assume batch_size == 1 and sp == world_size
pack_input_ids = pack_input_ids.detach().clone() # [length, vocab]
pack_labels = pack_labels.detach().clone() # [length]
world_size = dist.get_world_size()
rank = dist.get_rank()
assert len(pack_input_ids) % world_size == 0
assert len(pack_input_ids) == len(pack_labels)
assert sum(num_tokens) == len(pack_labels)
orig_pack_labels = pack_labels.detach().clone()
# Split data in sequence-dim to simulate sequence parallel
pack_input_ids = torch.split(
pack_input_ids,
len(pack_input_ids) // world_size, dim=0
)[rank]
pack_labels = torch.split(
pack_labels,
len(pack_labels) // world_size, dim=0
)[rank]
loss_fc = nn.CrossEntropyLoss(reduction='none', ignore_index=-100)
loss = loss_fc(pack_input_ids, pack_labels)
all_loss = all_gather(loss)
all_loss = torch.cat(all_loss)
loss_list = all_loss.split(num_tokens)
labels_list = orig_pack_labels.split(num_tokens)
# 1. reduce loss over sentences
loss_list = [
loss.sum() / ((label != -100).sum().float() + 1e-12)
for loss, label in zip(loss_list, labels_list)
]
# 2. reduce loss over batches
loss = torch.stack(loss_list).mean()
# 3. since sp == world_size, we got dp == 1, no need for reducing over dp
print(f"rank {rank}: {loss.item()}")
return loss.item()
def run_distributed(func, world_size, *args):
with Manager() as manager:
results = manager.list([None] * world_size)
torch.multiprocessing.spawn(
_dist_worker,
args=(func, world_size, args, results),
nprocs=world_size
)
return list(results)
def _dist_worker(rank, func, world_size, args, results):
torch.distributed.init_process_group(
backend='gloo',
init_method='tcp://127.0.0.1:29505',
world_size=world_size,
rank=rank
)
try:
result = func(*args)
results[rank] = result
finally:
dist.barrier()
dist.destroy_process_group()
# NOTE(xcsong): The following references provide context for pack loss implementation:
# - Technical explanation of pack mode vs batch mode: https://zhuanlan.zhihu.com/p/721652210
# - Related implementation discussion: https://github.com/THUDM/LongAlign/issues/3
@pytest.mark.parametrize("world_size", [2, 4, 8])
def test_pack_loss(world_size):
a1 = torch.randn(5, 9).float()
b1 = torch.Tensor([-100, -100, 1, 2, 3]).long()
a2 = torch.randn(8, 9).float()
b2 = torch.Tensor([4, -100, 3, 4, 6, -100, -100, 7]).long()
a3 = torch.randn(3, 9).float()
b3 = torch.Tensor([-100, 6, 8]).long()
a4 = torch.randn(4, 9).float()
b4 = torch.Tensor([-100, 7, 8, -100]).long()
a5 = torch.randn(6, 9).float()
b5 = torch.Tensor([-100, -100, 7, 4, 2, 5]).long()
a6 = torch.randn(3, 9).float()
b6 = torch.Tensor([5, 8, -100]).long()
max_item_length = 8
batch_input_ids = torch.zeros(8, max_item_length, 9)
batch_labels = torch.ones(8, max_item_length).long() * -100
for i, (a, b) in enumerate(
[(a1, b1), (a2, b2), (a3, b3), (a2, b2),
(a6, b6), (a4, b4), (a5, b5), (a6, b6)]
):
batch_input_ids[i, :a.size(0)] = a
batch_labels[i, :b.size(0)] = b
# NOTE(xcsong): In pack mode, we assume batch_size == 1
pack_input_ids = torch.cat([a1, a2, a3, a2, a6, a4, a5, a6], dim=0)
pack_labels = torch.cat([b1, b2, b3, b2, b6, b4, b5, b6], dim=0)
num_tokens = [5, 8, 3, 8, 3, 4, 6, 3]
data_batch = (batch_input_ids, batch_labels)
data_pack = (pack_input_ids, pack_labels, num_tokens)
results_batch = run_distributed(calc_batch_dp_loss, world_size, *data_batch)
results_pack = run_distributed(calc_pack_sp_loss, world_size, *data_pack)
assert len(set(results_batch)) == 1, f"The results of each child process are inconsistent: {results_batch}"
assert len(set(results_pack)) == 1, f"The results of each child process are inconsistent: {results_pack}"
assert results_batch[0] == pytest.approx(results_pack[0], abs=1e-6)
| xingchensong/TouchNet | 224 | A native-PyTorch library for large scale M-LLM (text/audio) training with tp/cp/dp. | Python | xingchensong | Xingchen Song(宋星辰) | Tsinghua University (2019-2022), WeNet Community (2021-now) |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.