+
+ This program is free software: you can redistribute it and/or modify it under the terms of the GNU Affero General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Affero General Public License for more details.
+
+ You should have received a copy of the GNU Affero General Public License along with this program. If not, see <https://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+ If your software can interact with users remotely through a computer network, you should also make sure that it provides a way for users to get its source. For example, if your program is a web application, its interface could display a "Source" link that leads users to an archive of the code. There are many ways you could offer source, and different solutions will be better for different programs; see section 13 for the specific requirements.
+
+ You should also get your employer (if you work as a programmer) or school,
+if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU AGPL, see <https://www.gnu.org/licenses/>.
+
+
+
+------------- LICENSE FOR openai CODE --------------
+
+Copyright notice: Copyright 2025 OpenAI
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR qwen_vl_utils CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR transformers CODE --------------
+
+Copyright notice: Copyright 2018- The Hugging Face team. All rights reserved.
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR huggingface_hub CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR flash-attn CODE --------------
+
+Copyright notice: Copyright (c) 2022, the respective contributors, as shown by the AUTHORS file. All rights reserved.
+
+License: BSD-3-Clause
+
+BSD 3-Clause License
+
+Copyright (c) 2022, the respective contributors, as shown by the AUTHORS file. All rights reserved.
+
+Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
+
+* Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+
+------------- LICENSE FOR accelerate CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR MonkeyOCR CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR OmniDocbench CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR Qwen2.5-VL CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR aimv2 CODE --------------
+
+Copyright notice: Copyright (C) 2024 Apple Inc. All Rights Reserved.
+
+License:
+
+IMPORTANT: This Apple software is supplied to you by Apple Inc. ("Apple") in consideration of your agreement to the following terms, and your use, installation, modification or redistribution of this Apple software constitutes acceptance of these terms. If you do not agree with these terms, please do not use, install, modify or
+redistribute this Apple software.
+
+In consideration of your agreement to abide by the following terms, and subject to these terms, Apple grants you a personal, non-exclusive license, under Apple's copyrights in this original Apple software (the "Apple Software"), to use, reproduce, modify and redistribute the Apple Software, with or without modifications, in source and/or binary forms; provided that if you redistribute the Apple Software in its entirety and without modifications, you must retain this notice and the following text and disclaimers in all such redistributions of the Apple Software. Neither the name, trademarks, service marks or logos of Apple Inc. may be used to endorse or promote products derived from the Apple Software without specific prior written permission from Apple. Except as expressly stated in this notice, no other rights or licenses, express or implied, are granted by Apple herein, including but not limited to any patent rights that may be infringed by your derivative works or by other works in which the Apple Software may be incorporated.
+
+The Apple Software is provided by Apple on an "AS IS" basis. APPLE MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
+FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS. IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
+OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION, MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE), STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+SOFTWARE DISTRIBUTED WITH AUTOREGRESSIVE IMAGE MODELS:
+
+The Autoregressive Image Models software includes a number of subcomponents with
+separate copyright notices and license terms - please see the file ACKNOWLEDGEMENTS.
+
+Acknowledgements:
+
+Portions of the Autoregressive Image Models project may utilize the following copyrighted material, the use of which is hereby acknowledged.
+
+_____________________
+
+Meta Platforms, Inc. and affiliates (Data-Efficient architectures and training for Image classification)
+
+Copyright notice: Copyright 2020 - present, Facebook, Inc
+
+License: Apache-2.0
+
+Please see above.
+
+Meta Platforms, Inc. and affiliates (Masked Autoencoders: A PyTorch Implementation)
+
+License: Attribution-NonCommercial 4.0 International
+
+Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
+
+ Using Creative Commons Public Licenses
+
+ Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
+
+ Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors
+
+ Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees
+
+Creative Commons Attribution-NonCommercial 4.0 International Public License
+
+ By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
+
+ Section 1 -- Definitions.
+
+ a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
+
+ b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
+
+ c. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
+
+ d. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
+
+ e. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
+
+ f. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
+
+ g. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
+
+ h. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
+
+ i. NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
+
+ j. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
+
+ k. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
+
+ l. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
+
+ Section 2 -- Scope.
+
+ a. License grant.
+
+ 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
+
+ a. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
+
+ b. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
+
+ 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
+
+ 3. Term. The term of this Public License is specified in Section 6(a).
+
+ 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
+
+ 5. Downstream recipients.
+
+ a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
+
+ b. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
+
+ 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
+
+ b. Other rights.
+
+ 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
+
+ 2. Patent and trademark rights are not licensed under this Public License.
+
+ 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
+
+ Section 3 -- License Conditions.
+
+ Your exercise of the Licensed Rights is expressly made subject to the following conditions.
+
+ a. Attribution.
+
+ 1. If You Share the Licensed Material (including in modified form), You must: a. retain the following if it is supplied by the Licensor with the Licensed Material: i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
+
+ b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
+
+ c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
+
+ 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
+
+ 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
+
+ 4. If You Share Adapted Material You produce, the Adapter's License You apply must not prevent recipients of the Adapted Material from complying with this Public License.
+
+ Section 4 -- Sui Generis Database Rights.
+
+ Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
+
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;
+
+ b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and
+
+ c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
+
+ For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
+
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
+
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
+
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
+
+ c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
+
+ Section 6 -- Term and Termination.
+
+ a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
+
+ b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
+
+ c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
+
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
+
+ Section 7 -- Other Terms and Conditions.
+
+ a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
+
+ b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
+
+ Section 8 -- Interpretation.
+
+ a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
+
+ b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
+
+ c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
+
+ d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
+
+ Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
+
+Creative Commons may be contacted at creativecommons.org.
+
+_____________________
+
+Meta Platforms, Inc. and affiliates (ImageBind: One Embedding Space To Bind Them All)
+
+License: Attribution-NonCommercial-ShareAlike 4.0 International
+
+ Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
+
+ Using Creative Commons Public Licenses
+
+ Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
+
+ Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors
+
+ Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees
+
+
+
+Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License
+
+ By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
+
+
+ Section 1 -- Definitions.
+
+ a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
+
+ b. Adapter's License means the license You apply to Your Copyright and Similar Rights in Your contributions to Adapted Material in accordance with the terms and conditions of this Public License.
+
+ c. BY-NC-SA Compatible License means a license listed at creativecommons.org/compatiblelicenses, approved by Creative Commons as essentially the equivalent of this Public License.
+
+ d. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
+
+ e. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
+
+ f. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
+
+ g. License Elements means the license attributes listed in the name of a Creative Commons Public License. The License Elements of this Public License are Attribution, NonCommercial, and ShareAlike.
+
+ h. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
+
+ i. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
+
+ j. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
+
+ k. NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
+
+ l. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
+
+ m. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
+
+ n. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
+
+
+ Section 2 -- Scope.
+
+ a. License grant.
+
+ 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
+
+ a. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
+
+ b. produce, reproduce, and Share Adapted Material for NonCommercial purposes only.
+
+ 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
+
+ 3. Term. The term of this Public License is specified in Section 6(a).
+
+ 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a)(4) never produces Adapted Material.
+
+ 5. Downstream recipients.
+
+ a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
+
+ b. Additional offer from the Licensor -- Adapted Material. Every recipient of Adapted Material from You automatically receives an offer from the Licensor to exercise the Licensed Rights in the Adapted Material under the conditions of the Adapter's License You apply.
+
+ c. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
+
+ 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that You are, or that Your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
+
+ b. Other rights.
+
+ 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
+
+ 2. Patent and trademark rights are not licensed under this Public License.
+
+ 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
+
+
+ Section 3 -- License Conditions.
+
+ Your exercise of the Licensed Rights is expressly made subject to the following conditions.
+
+ a. Attribution.
+
+ 1. If You Share the Licensed Material (including in modified form), You must: a. retain the following if it is supplied by the Licensor with the Licensed Material:
+ i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated); ii. a copyright notice; iii. a notice that refers to this Public License; iv. a notice that refers to the disclaimer of warranties; v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
+
+ b. indicate if You modified the Licensed Material and retain an indication of any previous modifications; and
+
+ c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License.
+
+ 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which You Share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
+
+ 3. If requested by the Licensor, You must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
+
+ b. ShareAlike. In addition to the conditions in Section 3(a), if You Share Adapted Material You produce, the following conditions also apply.
+
+ 1. The Adapter's License You apply must be a Creative Commons license with the same License Elements, this version or later, or a BY-NC-SA Compatible License.
+
+ 2. You must include the text of, or the URI or hyperlink to, the Adapter's License You apply. You may satisfy this condition in any reasonable manner based on the medium, means, and context in which You Share Adapted Material.
+
+ 3. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, Adapted Material that restrict exercise of the rights granted under the Adapter's License You apply.
+
+
+ Section 4 -- Sui Generis Database Rights.
+
+ Where the Licensed Rights include Sui Generis Database Rights that apply to Your use of the Licensed Material:
+
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only;
+
+ b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material, including for purposes of Section 3(b); and
+
+ c. You must comply with the conditions in Section 3(a) if You Share all or a substantial portion of the contents of the database.
+
+ For the avoidance of doubt, this Section 4 supplements and does not replace Your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
+
+ Section 5 -- Disclaimer of Warranties and Limitation of Liability.
+
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
+
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
+
+ c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
+
+ Section 6 -- Term and Termination.
+
+ a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
+
+ b. Where Your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
+
+ c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
+
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
+
+
+ Section 7 -- Other Terms and Conditions.
+
+ a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
+
+ b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
+
+
+ Section 8 -- Interpretation.
+
+ a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
+
+ b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
+
+ c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
+
+ d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
+
+
+ Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the “Licensor.” The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
+
+ Creative Commons may be contacted at creativecommons.org.
+
+
+
+------------- LICENSE FOR Hugging Face CODE --------------
+
+Copyright notice: Copyright 2019 Ross Wightman
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR vLLM CODE --------------
+
+Copyright notice: No copyright info provided
+
+License: Apache-2.0
+
+Please see above.
+
+
+
+------------- LICENSE FOR Doclaynet --------------
+
+Copyright notice: No copyright info provided
+
+License: Community Data License Agreement – Permissive – Version 1.0
+
+Community Data License Agreement – Permissive – Version 1.0
+
+This is the Community Data License Agreement – Permissive, Version 1.0 (“Agreement”). Data is provided to You under this Agreement by each of the Data Providers. Your exercise of any of the rights and permissions granted below constitutes your acceptance and agreement to be bound by the terms and conditions of this Agreement.
+
+The benefits that each Data Provider receives from making Data available and that You receive from Data or otherwise under these terms and conditions shall be deemed sufficient consideration for the formation of this Agreement. Accordingly, Data Provider(s) and You (the "Parties") agree as follows:
+
+Section 1. Definitions
+
+1.1 "Add" means to supplement Data with Your own or someone else's Data, resulting in Your “Additions.” Additions do not include Results.
+
+1.2 "Computational Use" means Your analysis (through the use of computational devices or otherwise) or other interpretation of Data. By way of example and not limitation, "Computational Use" includes the application of any computational analytical technique, the purpose of which is the analysis of any Data in digital form to generate information about Data such as patterns, trends, correlations, inferences, insights and attributes.
+
+1.3 "Data" means the information (including copyrightable information, such as images or text), collectively or individually, whether created or gathered by a Data Provider or an Entity acting on its behalf, to which rights are granted under this Agreement.
+
+1.4 "Data Provider" means any Entity (including any employee or contractor of such Entity authorized to Publish Data on behalf of such Entity) that Publishes Data under this Agreement prior to Your Receiving it.
+
+1.5 "Enhanced Data" means the subset of Data that You Publish and that is composed of (a) Your Additions and/or (b) Modifications to Data You have received under this Agreement.
+
+1.6 "Entity" means any natural person or organization that exists under the laws of the jurisdiction in which it is organized, together with all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (a) the power, directly or indirectly, to cause the direction or management of such entity, whether by contract or otherwise, (b) the ownership of more than fifty percent (50%) of the outstanding shares or securities, (c) the beneficial ownership of such entity or, (d) the ability to appoint, whether by agreement or right, the majority of directors of an Entity.
+
+1.7 "Modify" means to delete, erase, correct or re-arrange Data, resulting in “Modifications.” Modifications do not include Results.
+
+1.8 "Publish" means to make all or a subset of Data (including Your Enhanced Data) available in any manner which enables its use, including by providing a copy on physical media or remote access. For any form of Entity, that is to make the Data available to any individual who is not employed by that Entity or engaged as a contractor or agent to perform work on that Entity's behalf. A "Publication" occurs each time you Publish Data.
+
+1.9 "Receive" or "Receives" means to have been given access to Data, locally or remotely.
+
+1.10 "Results" means the outcomes or outputs that You obtain from Your Computational Use of Data. Results shall not include more than a de minimis portion of the Data on which the Computational Use is based.
+
+1.11 "Sui Generis Database Rights" means rights, other than copyright, resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other equivalent rights anywhere in the world.
+
+1.12 "Use" means using Data (including accessing, copying, studying, reviewing, adapting, analyzing, evaluating, or making Computational Use of it), either by machines or humans, or a combination of both.
+
+1.13 "You" or "Your" means any Entity that Receives Data under this Agreement.
+
+Section 2. Right and License to Use and to Publish
+
+2.1 Subject to the conditions set forth in Section 3 of this Agreement, Data Provider(s) hereby grant(s) to You a worldwide, non-exclusive, irrevocable (except as provided in Section 5) right to: (a) Use Data; and (b) Publish Data.
+
+2.2 To the extent that the Data or the coordination, selection or arrangement of Data is protected or protectable under copyright, Sui Generis Database Rights, or other law, Data Provider(s) further agree(s) that such Data or coordination, selection or arrangement is hereby licensed to You and to anyone else who Receives Data under this Agreement for Use and Publication, subject to the conditions set forth in Section 3 of this Agreement.
+
+2.3 Except for these rights and licenses expressly granted, no other intellectual property rights are granted or should be implied.
+
+Section 3. Conditions on Rights Granted
+
+3.1 If You Publish Data You Receive or Enhanced Data:
+
+(a) You may do so under a license of your choice provided that you give anyone who receives the data from you the text of this Agreement, the name of this Agreement and/or a hyperlink or other method reasonably likely to provide a copy of the text of this Agreement; and
+
+(b) You must cause any Data files containing Enhanced Data to carry prominent notices that you have changed those files; and
+
+(c) If You Publish Data You Receive, You must preserve all credit or attribution to the Data Provider(s). Such retained credit or attribution includes any of the following to the extent they exist in the Data as You have Received it: legal notices or metadata; identification of the Data Provider(s); or hyperlinks to Data to the extent it is practical to do so.
+
+3.2 You may provide additional or different license terms and conditions for use, reproduction, or distribution of that Enhanced Data, or for any combination of Data and Enhanced Data as a whole, provided that Your Use and Publication of that combined Data otherwise complies with the conditions stated in this License.
+
+3.3 You and each Data Provider agree that Enhanced Data shall not be considered a work of joint authorship by virtue of its relationship to Data licensed under this Agreement and shall not require either any obligation of accounting to or the consent of any Data Provider.
+
+3.4 This Agreement imposes no obligations or restrictions on Your Use or Publication of Results.
+
+Section 4. Data Provider(s)' Representations
+
+4.1 Each Data Provider represents that the Data Provider has exercised reasonable care, to assure that: (a) the Data it Publishes was created or generated by it or was obtained from others with the right to Publish the Data under this Agreement; and (b) Publication of such Data does not violate any privacy or confidentiality obligation undertaken by the Data Provider.
+
+Section 5. Termination
+
+5.1 All of Your rights under this Agreement will terminate, and Your right to Receive, Use or Publish the Data will be revoked or modified if You materially fail to comply with the terms and conditions of this Agreement and You do not cure such failure in a reasonable period of time after becoming aware of such noncompliance. If your rights under this Agreement terminate, you agree to cease Receipt, Use and Publication of Data. However, your obligations and any rights and permissions granted by you under this Agreement relating to Data that you published prior to such termination will continue and survive.
+
+5.2 If you institute litigation against a Data Provider or anyone else who Receives the Data (including a cross-claim in a lawsuit) based on the Data, other than a claim asserting breach of this Agreement, then any rights previously granted to You to Receive, Use and Publish Data under this Agreement will terminate as of the date such litigation is filed.
+
+Section 6. Disclaimer of Warranties and Limitation of Liability
+
+6.1 EXCEPT AS EXPRESSLY SET FORTH IN THIS AGREEMENT, THE DATA (INCLUDING ENHANCED DATA) IS PROVIDED ON AN "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, EITHER EXPRESS OR IMPLIED INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OR CONDITIONS OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
+
+6.2 NEITHER YOU NOR ANY DATA PROVIDERS SHALL HAVE ANY LIABILITY FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING WITHOUT LIMITATION LOST PROFITS), HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OR DISTRIBUTION OF THE DATA OR THE EXERCISE OF ANY RIGHTS GRANTED HEREUNDER, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+Section 7. Miscellaneous
+
+7.1 You agree that it is solely your responsibility to comply with all applicable laws with regard to Your Use or Publication of Data, including any applicable privacy, data protection, security and export laws. You agree to take reasonable steps to assist a Data Provider fulfilling responsibilities to comply with applicable laws with regard to Use or Publication of Data Received hereunder.
+
+7.2 You and Data Provider(s), collectively and individually, waive and/or agree not to assert, to the extent permitted by law, any moral rights you or they hold in Data.
+
+7.3 This Agreement confers no rights or remedies upon any person or entity other than the Parties and their respective heirs, executors, successors and assigns.
+
+7.4 The Data Provider(s) reserve no right or expectation of privacy, data protection or confidentiality in any Data that they Publish under this Agreement. If you choose to Publish Data under this Agreement, you similarly do so with no reservation or expectation of any rights of privacy or confidentiality in that Data.
+
+7.5 The Community Data License Agreement workgroup under The Linux Foundation is the steward of this Agreement (“Steward”). No one other than the Steward has the right to modify or publish new versions of this Agreement. Each version will be given a distinguishing version number. You may Use and Publish Data Received hereunder under the terms of the version of the Agreement under which You originally Received the Data, or under the terms of any subsequent version published by the Steward.
+
+
+
+------------- LICENSE FOR M6Doc --------------
+
+Copyright notice: No copyright info provided
+
+License: Attribution-NonCommercial-NoDerivatives 4.0 International
+
+=======Attribution-NonCommercial-NoDerivatives 4.0 International==========
+
+Creative Commons Corporation ("Creative Commons") is not a law firm and does not provide legal services or legal advice. Distribution of Creative Commons public licenses does not create a lawyer-client or other relationship. Creative Commons makes its licenses and related information available on an "as-is" basis. Creative Commons gives no warranties regarding its licenses, any material licensed under their terms and conditions, or any related information. Creative Commons disclaims all liability for damages resulting from their use to the fullest extent possible.
+
+Using Creative Commons Public Licenses
+
+Creative Commons public licenses provide a standard set of terms and conditions that creators and other rights holders may use to share original works of authorship and other material subject to copyright and certain other rights specified in the public license below. The following considerations are for informational purposes only, are not exhaustive, and do not form part of our licenses.
+
+Considerations for licensors: Our public licenses are intended for use by those authorized to give the public permission to use material in ways otherwise restricted by copyright and certain other rights. Our licenses are irrevocable. Licensors should read and understand the terms and conditions of the license they choose before applying it. Licensors should also secure all rights necessary before applying our licenses so that the public can reuse the material as expected. Licensors should clearly mark any material not subject to the license. This includes other CC-licensed material, or material used under an exception or limitation to copyright. More considerations for licensors: wiki.creativecommons.org/Considerations_for_licensors
+
+Considerations for the public: By using one of our public licenses, a licensor grants the public permission to use the licensed material under specified terms and conditions. If the licensor's permission is not necessary for any reason--for example, because of any applicable exception or limitation to copyright--then that use is not regulated by the license. Our licenses grant only permissions under copyright and certain other rights that a licensor has authority to grant. Use of the licensed material may still be restricted for other reasons, including because others have copyright or other rights in the material. A licensor may make special requests, such as asking that all changes be marked or described. Although not required by our licenses, you are encouraged to respect those requests where reasonable. More considerations for the public: wiki.creativecommons.org/Considerations_for_licensees
+
+Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License
+
+By exercising the Licensed Rights (defined below), You accept and agree to be bound by the terms and conditions of this Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License ("Public License"). To the extent this Public License may be interpreted as a contract, You are granted the Licensed Rights in consideration of Your acceptance of these terms and conditions, and the Licensor grants You such rights in consideration of benefits the Licensor receives from making the Licensed Material available under these terms and conditions.
+
+
+Section 1 -- Definitions.
+
+ a. Adapted Material means material subject to Copyright and Similar Rights that is derived from or based upon the Licensed Material and in which the Licensed Material is translated, altered, arranged, transformed, or otherwise modified in a manner requiring permission under the Copyright and Similar Rights held by the Licensor. For purposes of this Public License, where the Licensed Material is a musical work, performance, or sound recording, Adapted Material is always produced where the Licensed Material is synched in timed relation with a moving image.
+
+ b. Copyright and Similar Rights means copyright and/or similar rights closely related to copyright including, without limitation, performance, broadcast, sound recording, and Sui Generis Database Rights, without regard to how the rights are labeled or categorized. For purposes of this Public License, the rights specified in Section 2(b)(1)-(2) are not Copyright and Similar Rights.
+
+ c. Effective Technological Measures means those measures that, in the absence of proper authority, may not be circumvented under laws fulfilling obligations under Article 11 of the WIPO Copyright Treaty adopted on December 20, 1996, and/or similar international agreements.
+
+ d. Exceptions and Limitations means fair use, fair dealing, and/or any other exception or limitation to Copyright and Similar Rights that applies to Your use of the Licensed Material.
+
+ e. Licensed Material means the artistic or literary work, database, or other material to which the Licensor applied this Public License.
+
+ f. Licensed Rights means the rights granted to You subject to the terms and conditions of this Public License, which are limited to all Copyright and Similar Rights that apply to Your use of the Licensed Material and that the Licensor has authority to license.
+
+ g. Licensor means the individual(s) or entity(ies) granting rights under this Public License.
+
+ h. NonCommercial means not primarily intended for or directed towards commercial advantage or monetary compensation. For purposes of this Public License, the exchange of the Licensed Material for other material subject to Copyright and Similar Rights by digital file-sharing or similar means is NonCommercial provided there is no payment of monetary compensation in connection with the exchange.
+
+ i. Share means to provide material to the public by any means or process that requires permission under the Licensed Rights, such as reproduction, public display, public performance, distribution, dissemination, communication, or importation, and to make material available to the public including in ways that members of the public may access the material from a place and at a time individually chosen by them.
+
+ j. Sui Generis Database Rights means rights other than copyright resulting from Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, as amended and/or succeeded, as well as other essentially equivalent rights anywhere in the world.
+
+ k. You means the individual or entity exercising the Licensed Rights under this Public License. Your has a corresponding meaning.
+
+
+Section 2 -- Scope.
+
+ a. License granted.
+
+ 1. Subject to the terms and conditions of this Public License, the Licensor hereby grants You a worldwide, royalty-free, non-sublicensable, non-exclusive, irrevocable license to exercise the Licensed Rights in the Licensed Material to:
+
+ a. reproduce and Share the Licensed Material, in whole or in part, for NonCommercial purposes only; and
+
+ b. produce and reproduce, but not Share, Adapted Material for NonCommercial purposes only.
+
+ 2. Exceptions and Limitations. For the avoidance of doubt, where Exceptions and Limitations apply to Your use, this Public License does not apply, and You do not need to comply with its terms and conditions.
+
+ 3. Term. The term of this Public License is specified in Section 6(a).
+
+ 4. Media and formats; technical modifications allowed. The Licensor authorizes You to exercise the Licensed Rights in all media and formats whether now known or hereafter created, and to make technical modifications necessary to do so. The Licensor waives and/or agrees not to assert any right or authority to forbid You from making technical modifications necessary to exercise the Licensed Rights, including technical modifications necessary to circumvent Effective Technological Measures. For purposes of this Public License, simply making modifications authorized by this Section 2(a) (4) never produces Adapted Material.
+
+ 5. Downstream recipients.
+
+ a. Offer from the Licensor -- Licensed Material. Every recipient of the Licensed Material automatically receives an offer from the Licensor to exercise the Licensed Rights under the terms and conditions of this Public License.
+
+ b. No downstream restrictions. You may not offer or impose any additional or different terms or conditions on, or apply any Effective Technological Measures to, the Licensed Material if doing so restricts exercise of the Licensed Rights by any recipient of the Licensed Material.
+
+ 6. No endorsement. Nothing in this Public License constitutes or may be construed as permission to assert or imply that you are, or that your use of the Licensed Material is, connected with, or sponsored, endorsed, or granted official status by, the Licensor or others designated to receive attribution as provided in Section 3(a)(1)(A)(i).
+
+ b. Other rights.
+
+ 1. Moral rights, such as the right of integrity, are not licensed under this Public License, nor are publicity, privacy, and/or other similar personality rights; however, to the extent possible, the Licensor waives and/or agrees not to assert any such rights held by the Licensor to the limited extent necessary to allow You to exercise the Licensed Rights, but not otherwise.
+
+ 2. Patent and trademark rights are not licensed under this Public License.
+
+ 3. To the extent possible, the Licensor waives any right to collect royalties from You for the exercise of the Licensed Rights, whether directly or through a collecting society under any voluntary or waivable statutory or compulsory licensing scheme. In all other cases the Licensor expressly reserves any right to collect such royalties, including when the Licensed Material is used other than for NonCommercial purposes.
+
+
+Section 3 -- License Conditions.
+
+Your exercise of the Licensed Rights is expressly made subject to the
+following conditions.
+
+ a. Attribution.
+
+ 1. If You Share the Licensed Material, You must:
+
+ a. retain the following if it is supplied by the Licensor with the Licensed Material:
+
+ i. identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);
+
+ ii. a copyright notice;
+
+ iii. a notice that refers to this Public License;
+
+ iv. a notice that refers to the disclaimer of warranties;
+
+ v. a URI or hyperlink to the Licensed Material to the extent reasonably practicable;
+
+ b. indicate if you modified the Licensed Material and retain an indication of any previous modifications; and
+
+ c. indicate the Licensed Material is licensed under this Public License, and include the text of, or the URI or hyperlink to, this Public License. For the avoidance of doubt, you do not have permission under this Public License to Share Adapted Material.
+
+ 2. You may satisfy the conditions in Section 3(a)(1) in any reasonable manner based on the medium, means, and context in which you share the Licensed Material. For example, it may be reasonable to satisfy the conditions by providing a URI or hyperlink to a resource that includes the required information.
+
+ 3. If requested by the Licensor, you must remove any of the information required by Section 3(a)(1)(A) to the extent reasonably practicable.
+
+
+Section 4 -- Sui Generis Database Rights.
+
+Where the Licensed Rights include Sui Generis Database Rights that
+apply to Your use of the Licensed Material:
+
+ a. for the avoidance of doubt, Section 2(a)(1) grants You the right to extract, reuse, reproduce, and Share all or a substantial portion of the contents of the database for NonCommercial purposes only and provided You do not Share Adapted Material;
+
+ b. if You include all or a substantial portion of the database contents in a database in which You have Sui Generis Database Rights, then the database in which You have Sui Generis Database Rights (but not its individual contents) is Adapted Material; and
+
+ c. You must comply with the conditions in Section 3(a) if you share all or a substantial portion of the contents of the database.
+
+For the avoidance of doubt, this Section 4 supplements and does not replace your obligations under this Public License where the Licensed Rights include other Copyright and Similar Rights.
+
+
+Section 5 -- Disclaimer of Warranties and Limitation of Liability.
+
+ a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS, IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
+
+ b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION, NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT, INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES, COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
+
+ c. The disclaimer of warranties and limitation of liability provided above shall be interpreted in a manner that, to the extent possible, most closely approximates an absolute disclaimer and waiver of all liability.
+
+
+Section 6 -- Term and Termination.
+
+ a. This Public License applies for the term of the Copyright and Similar Rights licensed here. However, if You fail to comply with this Public License, then Your rights under this Public License terminate automatically.
+
+ b. Where your right to use the Licensed Material has terminated under Section 6(a), it reinstates:
+
+ 1. automatically as of the date the violation is cured, provided it is cured within 30 days of Your discovery of the violation; or
+
+ 2. upon express reinstatement by the Licensor.
+
+ For the avoidance of doubt, this Section 6(b) does not affect any right the Licensor may have to seek remedies for Your violations of this Public License.
+
+ c. For the avoidance of doubt, the Licensor may also offer the Licensed Material under separate terms or conditions or stop distributing the Licensed Material at any time; however, doing so will not terminate this Public License.
+
+ d. Sections 1, 5, 6, 7, and 8 survive termination of this Public License.
+
+
+Section 7 -- Other Terms and Conditions.
+
+ a. The Licensor shall not be bound by any additional or different terms or conditions communicated by You unless expressly agreed.
+
+ b. Any arrangements, understandings, or agreements regarding the Licensed Material not stated herein are separate from and independent of the terms and conditions of this Public License.
+
+
+Section 8 -- Interpretation.
+
+ a. For the avoidance of doubt, this Public License does not, and shall not be interpreted to, reduce, limit, restrict, or impose conditions on any use of the Licensed Material that could lawfully be made without permission under this Public License.
+
+ b. To the extent possible, if any provision of this Public License is deemed unenforceable, it shall be automatically reformed to the minimum extent necessary to make it enforceable. If the provision cannot be reformed, it shall be severed from this Public License without affecting the enforceability of the remaining terms and conditions.
+
+ c. No term or condition of this Public License will be waived and no failure to comply consented to unless expressly agreed to by the Licensor.
+
+ d. Nothing in this Public License constitutes or may be interpreted as a limitation upon, or waiver of, any privileges and immunities that apply to the Licensor or You, including from the legal processes of any jurisdiction or authority.
+
+Creative Commons is not a party to its public licenses. Notwithstanding, Creative Commons may elect to apply one of its public licenses to material it publishes and in those instances will be considered the "Licensor." The text of the Creative Commons public licenses is dedicated to the public domain under the CC0 Public
+Domain Dedication. Except for the limited purpose of indicating that material is shared under a Creative Commons public license or as otherwise permitted by the Creative Commons policies published at creativecommons.org/policies, Creative Commons does not authorize the use of the trademark "Creative Commons" or any other trademark or logo of Creative Commons without its prior written consent including, without limitation, in connection with any unauthorized modifications to any of its public licenses or any other arrangements, understandings, or agreements concerning use of licensed material. For the avoidance of doubt, this paragraph does not form part of the public licenses.
+
+Creative Commons may be contacted at creativecommons.org.
+
+
+
+------------- LICENSE FOR CDLA --------------
+
+Copyright notice:No copyright info provided
+
+License:No License info provided
+
+
+
+------------- LICENSE FOR D4LA --------------
+
+Copyright notice:No copyright info provided
+
+License:No License info provided
\ No newline at end of file
diff --git a/README.md b/README.md
index eeed212611f40509b21dea1eb41231409266a1f7..ad6a4cd72da51a1d4a362921961c6cdaad9b8027 100644
--- a/README.md
+++ b/README.md
@@ -1,12 +1,1228 @@
----
-title: Dots Ocr
-emoji: 🔥
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 5.42.0
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+
+
+
+
+
+
+
+# dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
+
+
+[Blog](https://github.com/rednote-hilab/dots.ocr/blob/master/assets/blog.md)
+[Hugging Face](https://huggingface.co/rednote-hilab/dots.ocr)
+
+
+
+
+
+
+
+
+## Introduction
+
+**dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
+
+1. **Powerful Performance:** **dots.ocr** achieves SOTA performance for text, tables, and reading order on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), while delivering formula recognition results comparable to much larger models like Doubao-1.5 and Gemini2.5-Pro.
+2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing capabilities for low-resource languages, achieving decisive advantages across both layout detection and content recognition on our in-house multilingual documents benchmark.
+3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex, multi-model pipelines. Switching between tasks is accomplished simply by altering the input prompt, proving that a VLM can achieve competitive detection results compared to traditional detection models like DocLayout-YOLO.
+4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference speeds than many other high-performing models based on larger foundations.
+
+
+### Performance Comparison: dots.ocr vs. Competing Models
+
+
+> **Notes:**
+> - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench); the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.
+
+
+## News
+* ```2025.07.30 ``` 🚀 We release [dots.ocr](https://github.com/rednote-hilab/dots.ocr) — a multilingual document parsing model built on a 1.7B LLM, with SOTA performance.
+
+
+
+## Benchmark Results
+
+### 1. OmniDocBench
+
+#### The end-to-end evaluation results of different tasks.
+
+
+
+
+| Model Type | Methods | Overall Edit↓ (EN) | Overall Edit↓ (ZH) | Text Edit↓ (EN) | Text Edit↓ (ZH) | Formula Edit↓ (EN) | Formula Edit↓ (ZH) | Table TEDS↑ (EN) | Table TEDS↑ (ZH) | Table Edit↓ (EN) | Table Edit↓ (ZH) | Read Order Edit↓ (EN) | Read Order Edit↓ (ZH) |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| Pipeline Tools | MinerU | 0.150 | 0.357 | 0.061 | 0.215 | 0.278 | 0.577 | 78.6 | 62.1 | 0.180 | 0.344 | 0.079 | 0.292 |
+| | Marker | 0.336 | 0.556 | 0.080 | 0.315 | 0.530 | 0.883 | 67.6 | 49.2 | 0.619 | 0.685 | 0.114 | 0.340 |
+| | Mathpix | 0.191 | 0.365 | 0.105 | 0.384 | 0.306 | 0.454 | 77.0 | 67.1 | 0.243 | 0.320 | 0.108 | 0.304 |
+| | Docling | 0.589 | 0.909 | 0.416 | 0.987 | 0.999 | 1 | 61.3 | 25.0 | 0.627 | 0.810 | 0.313 | 0.837 |
+| | Pix2Text | 0.320 | 0.528 | 0.138 | 0.356 | 0.276 | 0.611 | 73.6 | 66.2 | 0.584 | 0.645 | 0.281 | 0.499 |
+| | Unstructured | 0.586 | 0.716 | 0.198 | 0.481 | 0.999 | 1 | 0 | 0.06 | 1 | 0.998 | 0.145 | 0.387 |
+| | OpenParse | 0.646 | 0.814 | 0.681 | 0.974 | 0.996 | 1 | 64.8 | 27.5 | 0.284 | 0.639 | 0.595 | 0.641 |
+| | PPStruct-V3 | 0.145 | 0.206 | 0.058 | 0.088 | 0.295 | 0.535 | - | - | 0.159 | 0.109 | 0.069 | 0.091 |
+| Expert VLMs | GOT-OCR | 0.287 | 0.411 | 0.189 | 0.315 | 0.360 | 0.528 | 53.2 | 47.2 | 0.459 | 0.520 | 0.141 | 0.280 |
+| | Nougat | 0.452 | 0.973 | 0.365 | 0.998 | 0.488 | 0.941 | 39.9 | 0 | 0.572 | 1.000 | 0.382 | 0.954 |
+| | Mistral OCR | 0.268 | 0.439 | 0.072 | 0.325 | 0.318 | 0.495 | 75.8 | 63.6 | 0.600 | 0.650 | 0.083 | 0.284 |
+| | OLMOCR-sglang | 0.326 | 0.469 | 0.097 | 0.293 | 0.455 | 0.655 | 68.1 | 61.3 | 0.608 | 0.652 | 0.145 | 0.277 |
+| | SmolDocling-256M | 0.493 | 0.816 | 0.262 | 0.838 | 0.753 | 0.997 | 44.9 | 16.5 | 0.729 | 0.907 | 0.227 | 0.522 |
+| | Dolphin | 0.206 | 0.306 | 0.107 | 0.197 | 0.447 | 0.580 | 77.3 | 67.2 | 0.180 | 0.285 | 0.091 | 0.162 |
+| | MinerU 2 | 0.139 | 0.240 | 0.047 | 0.109 | 0.297 | 0.536 | 82.5 | 79.0 | 0.141 | 0.195 | 0.069 | 0.118 |
+| | OCRFlux | 0.195 | 0.281 | 0.064 | 0.183 | 0.379 | 0.613 | 71.6 | 81.3 | 0.253 | 0.139 | 0.086 | 0.187 |
+| | MonkeyOCR-pro-3B | 0.138 | 0.206 | 0.067 | 0.107 | 0.246 | 0.421 | 81.5 | 87.5 | 0.139 | 0.111 | 0.100 | 0.185 |
+| General VLMs | GPT4o | 0.233 | 0.399 | 0.144 | 0.409 | 0.425 | 0.606 | 72.0 | 62.9 | 0.234 | 0.329 | 0.128 | 0.251 |
+| | Qwen2-VL-72B | 0.252 | 0.327 | 0.096 | 0.218 | 0.404 | 0.487 | 76.8 | 76.4 | 0.387 | 0.408 | 0.119 | 0.193 |
+| | Qwen2.5-VL-72B | 0.214 | 0.261 | 0.092 | 0.18 | 0.315 | 0.434 | 82.9 | 83.9 | 0.341 | 0.262 | 0.106 | 0.168 |
+| | Gemini2.5-Pro | 0.148 | 0.212 | 0.055 | 0.168 | 0.356 | 0.439 | 85.8 | 86.4 | 0.13 | 0.119 | 0.049 | 0.121 |
+| | doubao-1-5-thinking-vision-pro-250428 | 0.140 | 0.162 | 0.043 | 0.085 | 0.295 | 0.384 | 83.3 | 89.3 | 0.165 | 0.085 | 0.058 | 0.094 |
+| Expert VLMs | dots.ocr | 0.125 | 0.160 | 0.032 | 0.066 | 0.329 | 0.416 | 88.6 | 89.0 | 0.099 | 0.092 | 0.040 | 0.067 |
+
+
+
+
+
+
+#### The end-to-end text recognition performance across 9 PDF page types.
+
+
+
+
+| Model Type | Models | Book | Slides | Financial Report | Textbook | Exam Paper | Magazine | Academic Papers | Notes | Newspaper | Overall |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| Pipeline Tools | MinerU | 0.055 | 0.124 | 0.033 | 0.102 | 0.159 | 0.072 | 0.025 | 0.984 | 0.171 | 0.206 |
+| | Marker | 0.074 | 0.340 | 0.089 | 0.319 | 0.452 | 0.153 | 0.059 | 0.651 | 0.192 | 0.274 |
+| | Mathpix | 0.131 | 0.220 | 0.202 | 0.216 | 0.278 | 0.147 | 0.091 | 0.634 | 0.690 | 0.300 |
+| Expert VLMs | GOT-OCR | 0.111 | 0.222 | 0.067 | 0.132 | 0.204 | 0.198 | 0.179 | 0.388 | 0.771 | 0.267 |
+| | Nougat | 0.734 | 0.958 | 1.000 | 0.820 | 0.930 | 0.830 | 0.214 | 0.991 | 0.871 | 0.806 |
+| | Dolphin | 0.091 | 0.131 | 0.057 | 0.146 | 0.231 | 0.121 | 0.074 | 0.363 | 0.307 | 0.177 |
+| | OCRFlux | 0.068 | 0.125 | 0.092 | 0.102 | 0.119 | 0.083 | 0.047 | 0.223 | 0.536 | 0.149 |
+| | MonkeyOCR-pro-3B | 0.084 | 0.129 | 0.060 | 0.090 | 0.107 | 0.073 | 0.050 | 0.171 | 0.107 | 0.100 |
+| General VLMs | GPT4o | 0.157 | 0.163 | 0.348 | 0.187 | 0.281 | 0.173 | 0.146 | 0.607 | 0.751 | 0.316 |
+| | Qwen2.5-VL-7B | 0.148 | 0.053 | 0.111 | 0.137 | 0.189 | 0.117 | 0.134 | 0.204 | 0.706 | 0.205 |
+| | InternVL3-8B | 0.163 | 0.056 | 0.107 | 0.109 | 0.129 | 0.100 | 0.159 | 0.150 | 0.681 | 0.188 |
+| | doubao-1-5-thinking-vision-pro-250428 | 0.048 | 0.048 | 0.024 | 0.062 | 0.085 | 0.051 | 0.039 | 0.096 | 0.181 | 0.073 |
+| Expert VLMs | dots.ocr | 0.031 | 0.047 | 0.011 | 0.082 | 0.079 | 0.028 | 0.029 | 0.109 | 0.056 | 0.055 |
+
+
+
+
+
+> **Notes:**
+> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
+> - We delete the Page-header and Page-footer cells in the result markdown.
+> - We use the tikz_preprocess pipeline to upsample the images to 200 DPI.
+
+
+### 2. **dots.ocr-bench**
+
+This is an in-house benchmark containing 1,493 PDF images across 100 languages.
+
+#### The end-to-end evaluation results of different tasks.
+
+
+
+
+| Methods | Overall Edit↓ | Text Edit↓ | Formula Edit↓ | Table TEDS↑ | Table Edit↓ | Read Order Edit↓ |
+|---|---|---|---|---|---|---|
+| MonkeyOCR-3B | 0.483 | 0.445 | 0.627 | 50.93 | 0.452 | 0.409 |
+| doubao-1-5-thinking-vision-pro-250428 | 0.291 | 0.226 | 0.440 | 71.2 | 0.260 | 0.238 |
+| doubao-1-6 | 0.299 | 0.270 | 0.417 | 71.0 | 0.258 | 0.253 |
+| Gemini2.5-Pro | 0.251 | 0.163 | 0.402 | 77.1 | 0.236 | 0.202 |
+| dots.ocr | 0.177 | 0.075 | 0.297 | 79.2 | 0.186 | 0.152 |
+
+
+
+
+
+> **Notes:**
+> - We use the same metric calculation pipeline as [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
+> - We delete the Page-header and Page-footer cells in the result markdown.
+
+#### Layout Detection
+
+
+
+
+All metric columns report F1↑; the first five are at IoU=.50:.05:.95, the last five at IoU=.50.
+
+| Method | Overall (.50:.95) | Text (.50:.95) | Formula (.50:.95) | Table (.50:.95) | Picture (.50:.95) | Overall (.50) | Text (.50) | Formula (.50) | Table (.50) | Picture (.50) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| DocLayout-YOLO-DocStructBench | 0.733 | 0.694 | 0.480 | 0.803 | 0.619 | 0.806 | 0.779 | 0.620 | 0.858 | 0.678 |
+| dots.ocr-parse all | 0.831 | 0.801 | 0.654 | 0.838 | 0.748 | 0.922 | 0.909 | 0.770 | 0.888 | 0.831 |
+| dots.ocr-detection only | 0.845 | 0.816 | 0.716 | 0.875 | 0.765 | 0.930 | 0.917 | 0.832 | 0.918 | 0.843 |
+
+
+
+
+
+> **Notes:**
+> - We use prompt_layout_all_en for **parse all** and prompt_layout_only_en for **detection only**; see [prompts](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py) for details.
+
+
+### 3. olmOCR-bench
+
+
+
+
+| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi column | Long Tiny Text | Base | Overall |
+|---|---|---|---|---|---|---|---|---|---|
+| GOT OCR | 52.7 | 52.0 | 0.2 | 22.1 | 93.6 | 42.0 | 29.9 | 94.0 | 48.3 ± 1.1 |
+| Marker | 76.0 | 57.9 | 57.6 | 27.8 | 84.9 | 72.9 | 84.6 | 99.1 | 70.1 ± 1.1 |
+| MinerU | 75.4 | 47.4 | 60.9 | 17.3 | 96.6 | 59.0 | 39.1 | 96.6 | 61.5 ± 1.1 |
+| Mistral OCR | 77.2 | 67.5 | 60.6 | 29.3 | 93.6 | 71.3 | 77.1 | 99.4 | 72.0 ± 1.1 |
+| Nanonets OCR | 67.0 | 68.6 | 77.7 | 39.5 | 40.7 | 69.9 | 53.4 | 99.3 | 64.5 ± 1.1 |
+| GPT-4o (No Anchor) | 51.5 | 75.5 | 69.1 | 40.9 | 94.2 | 68.9 | 54.1 | 96.7 | 68.9 ± 1.1 |
+| GPT-4o (Anchored) | 53.5 | 74.5 | 70.0 | 40.7 | 93.8 | 69.3 | 60.6 | 96.8 | 69.9 ± 1.1 |
+| Gemini Flash 2 (No Anchor) | 32.1 | 56.3 | 61.4 | 27.8 | 48.0 | 58.7 | 84.4 | 94.0 | 57.8 ± 1.1 |
+| Gemini Flash 2 (Anchored) | 54.5 | 56.1 | 72.1 | 34.2 | 64.7 | 61.5 | 71.5 | 95.6 | 63.8 ± 1.2 |
+| Qwen 2 VL (No Anchor) | 19.7 | 31.7 | 24.2 | 17.1 | 88.9 | 8.3 | 6.8 | 55.5 | 31.5 ± 0.9 |
+| Qwen 2.5 VL (No Anchor) | 63.1 | 65.7 | 67.3 | 38.6 | 73.6 | 68.3 | 49.1 | 98.3 | 65.5 ± 1.2 |
+| olmOCR v0.1.75 (No Anchor) | 71.5 | 71.4 | 71.4 | 42.8 | 94.1 | 77.7 | 71.0 | 97.8 | 74.7 ± 1.1 |
+| olmOCR v0.1.75 (Anchored) | 74.9 | 71.2 | 71.0 | 42.2 | 94.5 | 78.3 | 73.3 | 98.3 | 75.5 ± 1.0 |
+| MonkeyOCR-pro-3B | 83.8 | 68.8 | 74.6 | 36.1 | 91.2 | 76.6 | 80.1 | 95.3 | 75.8 ± 1.0 |
+| dots.ocr | 82.1 | 64.2 | 88.3 | 40.9 | 94.1 | 82.4 | 81.2 | 99.5 | 79.1 ± 1.0 |
+
+
+
+
+
+> **Note:**
+> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
+[olmocr](https://github.com/allenai/olmocr), and our own internal evaluations.
+> - We delete the Page-header and Page-footer cells in the result markdown.
+
+
+
+# Quick Start
+## 1. Installation
+### Install dots.ocr
+```shell
+conda create -n dots_ocr python=3.12
+conda activate dots_ocr
+
+git clone https://github.com/rednote-hilab/dots.ocr.git
+cd dots.ocr
+
+# Install pytorch, see https://pytorch.org/get-started/previous-versions/ for your cuda version
+pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128
+pip install -e .
+```
+
+If you have trouble with the installation, try our [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) for an easier setup, and follow these steps:
+```shell
+git clone https://github.com/rednote-hilab/dots.ocr.git
+cd dots.ocr
+pip install -e .
+```
+
+
+### Download Model Weights
+> 💡**Note:** Please use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`) for the model save path. This is a temporary workaround pending our integration with Transformers.
+```shell
+python3 tools/download_model.py
+
+# with modelscope
+python3 tools/download_model.py --type modelscope
+```
+
+
+## 2. Deployment
+### vLLM inference
+We highly recommend using vLLM for deployment and inference. All of our evaluation results are based on vLLM version 0.9.1.
+The [Docker Image](https://hub.docker.com/r/rednotehilab/dots.ocr) is based on the official vllm image. You can also follow [Dockerfile](https://github.com/rednote-hilab/dots.ocr/blob/master/docker/Dockerfile) to build the deployment environment by yourself.
+
+```shell
+# You need to register model to vllm at first
+python3 tools/download_model.py
+export hf_model_path=./weights/DotsOCR  # Path to your downloaded model weights. Use a directory name without periods (e.g., `DotsOCR` instead of `dots.ocr`); this is a temporary workaround pending our integration with Transformers.
+export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
+sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
+from DotsOCR import modeling_dots_ocr_vllm' `which vllm`  # If you downloaded the weights yourself, replace `DotsOCR` with your model directory name (again, use a name without periods)
+
+# launch vllm server
+CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code
+
+# If you get a ModuleNotFoundError: No module named 'DotsOCR', please check the note above on the saved model directory name.
+
+# vllm api demo
+python3 ./demo/demo_vllm.py --prompt_mode prompt_layout_all_en
+```
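+
+Under the hood, `demo/demo_vllm.py` talks to the server through its OpenAI-compatible API. Below is a minimal sketch of such a request, assuming the server above listens on the default port 8000, the `openai` Python package is installed, and `dict_promptmode_to_prompt` (imported in the Hugging Face demo below) maps prompt-mode names to prompt strings:
+
+```python
+import base64
+from openai import OpenAI
+from dots_ocr.utils import dict_promptmode_to_prompt
+
+# The vllm serve command above exposes an OpenAI-compatible endpoint on port 8000
+client = OpenAI(api_key="EMPTY", base_url="http://localhost:8000/v1")
+
+with open("demo/demo_image1.jpg", "rb") as f:
+    image_b64 = base64.b64encode(f.read()).decode()
+
+response = client.chat.completions.create(
+    model="model",  # must match --served-model-name above
+    messages=[{
+        "role": "user",
+        "content": [
+            {"type": "image_url",
+             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
+            {"type": "text", "text": dict_promptmode_to_prompt["prompt_layout_all_en"]},
+        ],
+    }],
+    temperature=0.0,
+)
+print(response.choices[0].message.content)
+```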
+
+### Hugging Face inference
+```shell
+python3 demo/demo_hf.py
+```
+
+
+Hugging Face inference details
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoProcessor
+from qwen_vl_utils import process_vision_info
+from dots_ocr.utils import dict_promptmode_to_prompt
+
+model_path = "./weights/DotsOCR"
+model = AutoModelForCausalLM.from_pretrained(
+ model_path,
+ attn_implementation="flash_attention_2",
+ torch_dtype=torch.bfloat16,
+ device_map="auto",
+ trust_remote_code=True
+)
+processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
+
+image_path = "demo/demo_image1.jpg"
+prompt = """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.
+
+1. Bbox format: [x1, y1, x2, y2]
+
+2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].
+
+3. Text Extraction & Formatting Rules:
+ - Picture: For the 'Picture' category, the text field should be omitted.
+ - Formula: Format its text as LaTeX.
+ - Table: Format its text as HTML.
+ - All Others (Text, Title, etc.): Format their text as Markdown.
+
+4. Constraints:
+ - The output text must be the original text from the image, with no translation.
+ - All layout elements must be sorted according to human reading order.
+
+5. Final Output: The entire output must be a single JSON object.
+"""
+
+messages = [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image",
+ "image": image_path
+ },
+ {"type": "text", "text": prompt}
+ ]
+ }
+ ]
+
+# Preparation for inference
+text = processor.apply_chat_template(
+ messages,
+ tokenize=False,
+ add_generation_prompt=True
+)
+image_inputs, video_inputs = process_vision_info(messages)
+inputs = processor(
+ text=[text],
+ images=image_inputs,
+ videos=video_inputs,
+ padding=True,
+ return_tensors="pt",
+)
+
+inputs = inputs.to("cuda")
+
+# Inference: Generation of the output
+generated_ids = model.generate(**inputs, max_new_tokens=24000)
+generated_ids_trimmed = [
+ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
+]
+output_text = processor.batch_decode(
+ generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
+)
+print(output_text)
+
+```
+
+
+
+### Hugging Face inference with CPU
+Please refer to [CPU inference](https://github.com/rednote-hilab/dots.ocr/issues/1#issuecomment-3148962536)
+
+
+## 3. Document Parse
+**Based on the vLLM server**, you can parse an image or a PDF file using the following commands:
+```bash
+
+# Parse all layout info, both detection and recognition
+# Parse a single image
+python3 dots_ocr/parser.py demo/demo_image1.jpg
+# Parse a single PDF
+python3 dots_ocr/parser.py demo/demo_pdf1.pdf --num_thread 64  # try a larger --num_thread for PDFs with many pages
+
+# Layout detection only
+python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_layout_only_en
+
+# Parse text only, except Page-header and Page-footer
+python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_ocr
+
+# Parse layout info by bbox
+python3 dots_ocr/parser.py demo/demo_image1.jpg --prompt prompt_grounding_ocr --bbox 163 241 1536 705
+
+```
+**Based on Transformers**, you can parse an image or a PDF file with the same commands above; just add `--use_hf true`.
+
+> Note: Transformers inference is slower than vLLM. If you want to use the scripts under demo/* with Transformers, pass `use_hf=True` to the parser, i.e. `DotsOCRParser(..., use_hf=True)`; see the sketch below.
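+
+The sketch below assumes `DotsOCRParser` is importable from `dots_ocr.parser`; the method name and its arguments are illustrative assumptions, not the exact API:
+
+```python
+# Illustrative sketch: the use_hf flag comes from the note above, but
+# parse_file() and its arguments are assumptions about the parser API.
+from dots_ocr.parser import DotsOCRParser
+
+parser = DotsOCRParser(use_hf=True)  # Transformers backend instead of the vLLM server
+results = parser.parse_file("demo/demo_image1.jpg", prompt_mode="prompt_layout_all_en")
+```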
+
+
+Output Results
+
+1. **Structured Layout Data** (`demo_image1.json`): A JSON file containing the detected layout elements, including their bounding boxes, categories, and extracted text (an illustrative excerpt follows this list).
+2. **Processed Markdown File** (`demo_image1.md`): A Markdown file generated from the concatenated text of all detected cells.
+  * An additional version, `demo_image1_nohf.md`, is also provided, which excludes page headers and footers for compatibility with benchmarks like OmniDocBench and olmOCR-bench.
+3. **Layout Visualization** (`demo_image1.jpg`): The original image with the detected layout bounding boxes drawn on it.
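+
+For reference, each element in the JSON follows the schema requested by the layout prompt (bbox, category, text). A hypothetical excerpt, shown as a Python literal rather than actual model output:
+
+```python
+# Hypothetical excerpt of demo_image1.json; coordinates are made up
+[
+    {"bbox": [160, 120, 1530, 200], "category": "Section-header", "text": "1. Introduction"},
+    {"bbox": [163, 241, 1536, 705], "category": "Text", "text": "First paragraph of the page..."},
+    {"bbox": [170, 760, 900, 1400], "category": "Picture"},  # 'Picture' omits the text field
+]
+```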
+
+
+
+## 4. Demo
+You can run the demo with the following command, or try it directly at the [live demo](https://dotsocr.xiaohongshu.com/):
+```bash
+python demo/demo_gradio.py
+```
+
+We also provide a demo for grounding OCR:
+```bash
+python demo/demo_gradio_annotion.py
+```
+
+
+### Example for formula document
+
+
+
+
+### Example for table document
+
+
+
+
+### Example for multilingual document
+
+
+
+
+
+
+### Example for reading order
+
+
+### Example for grounding ocr
+
+
+
+## Acknowledgments
+We would like to thank [Qwen2.5-VL](https://github.com/QwenLM/Qwen2.5-VL), [aimv2](https://github.com/apple/ml-aim), [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
+[OmniDocBench](https://github.com/opendatalab/OmniDocBench), [PyMuPDF](https://github.com/pymupdf/PyMuPDF), for providing code and models.
+
+We also thank [DocLayNet](https://github.com/DS4SD/DocLayNet), [M6Doc](https://github.com/HCIILAB/M6Doc), [CDLA](https://github.com/buptlihang/CDLA), [D4LA](https://github.com/AlibabaResearch/AdvancedLiterateMachinery) for providing valuable datasets.
+
+## Limitation & Future Work
+
+- **Complex Document Elements:**
+  - **Table & Formula**: dots.ocr is not yet perfect at parsing high-complexity tables and formulas.
+ - **Picture**: Pictures in documents are currently not parsed.
+
+- **Parsing Failures:** The model may fail to parse under certain conditions:
+  - When the character-to-pixel ratio is excessively high. Try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended); see the rendering sketch after this list. However, please note that the model performs optimally on images with a resolution under 11,289,600 pixels.
+ - Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py)).
+
+- **Performance Bottleneck:** Despite its 1.7B parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
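+
+As a concrete illustration of the DPI advice above, here is a minimal sketch that pre-renders PDF pages with PyMuPDF at 200 DPI and downscales any page that would exceed the 11,289,600-pixel budget; this preprocessing is an assumption about your own pipeline, not part of dots.ocr:
+
+```python
+import fitz  # PyMuPDF
+
+MAX_PIXELS = 11289600  # the model performs best under this resolution
+
+doc = fitz.open("demo/demo_pdf1.pdf")
+for i, page in enumerate(doc):
+    pix = page.get_pixmap(dpi=200)  # recommended parsing DPI
+    if pix.width * pix.height > MAX_PIXELS:
+        # Shrink the zoom factor so the rendered page stays within budget
+        scale = (MAX_PIXELS / (pix.width * pix.height)) ** 0.5
+        zoom = scale * 200 / 72  # 200 DPI corresponds to a zoom of 200/72
+        pix = page.get_pixmap(matrix=fitz.Matrix(zoom, zoom))
+    pix.save(f"page_{i}.png")
+```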
+
+We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on Vision-Language Models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of the pictures in the documents** is also a key priority for our future work.
+We believe that collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and are interested in contributing to these future endeavors, we would love to hear from you. Please reach out to us via email at yanqing4@xiaohongshu.com.
diff --git a/assets/blog.md b/assets/blog.md
new file mode 100644
index 0000000000000000000000000000000000000000..f276bba5ac44c425fbe8b5a9d4735cabb874509e
--- /dev/null
+++ b/assets/blog.md
@@ -0,0 +1,1044 @@
+
+# dots.ocr: Multilingual Document Layout Parsing in a Single Vision-Language Model
+
+
+
+## Introduction
+
+**dots.ocr** is a powerful, multilingual document parser that unifies layout detection and content recognition within a single vision-language model while maintaining good reading order. Despite its compact 1.7B-parameter LLM foundation, it achieves state-of-the-art (SOTA) performance.
+
+1. **Powerful Performance:** **dots.ocr** achieves SOTA performance for text, tables, and reading order on [OmniDocBench](https://github.com/opendatalab/OmniDocBench), while delivering formula recognition results comparable to much larger models like Doubao-1.5 and Gemini2.5-Pro.
+2. **Multilingual Support:** **dots.ocr** demonstrates robust parsing capabilities for low-resource languages, achieving decisive advantages across both layout detection and content recognition on our in-house multilingual documents benchmark.
+3. **Unified and Simple Architecture:** By leveraging a single vision-language model, **dots.ocr** offers a significantly more streamlined architecture than conventional methods that rely on complex, multi-model pipelines. Switching between tasks is accomplished simply by altering the input prompt, proving that a VLM can achieve competitive detection results compared to traditional detection models like DocLayout-YOLO.
+4. **Efficient and Fast Performance:** Built upon a compact 1.7B LLM, **dots.ocr** provides faster inference speeds than many other high-performing models based on larger foundations.
+
+
+### Performance Comparison on Document Parsing Benchmarks
+
+
+> **Notes:**
+> - The EN and ZH metrics are the end-to-end evaluation results on [OmniDocBench](https://github.com/opendatalab/OmniDocBench); the Multilingual metric is the end-to-end evaluation result on dots.ocr-bench.
+
+
+## Show Case
+### Example for formula document
+
+
+
+
+### Example for table document
+
+
+
+
+### Example for multilingual document
+
+
+
+
+
+
+### Example for reading order
+
+
+### Example for grounding ocr
+
+
+
+
+## Benchmark Results
+
+### 1. OmniDocBench
+
+#### The end-to-end evaluation results of different tasks.
+
+
+
+
+| Model Type | Methods | Overall Edit↓ (EN) | Overall Edit↓ (ZH) | Text Edit↓ (EN) | Text Edit↓ (ZH) | Formula Edit↓ (EN) | Formula Edit↓ (ZH) | Table TEDS↑ (EN) | Table TEDS↑ (ZH) | Table Edit↓ (EN) | Table Edit↓ (ZH) | Read Order Edit↓ (EN) | Read Order Edit↓ (ZH) |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| Pipeline Tools | MinerU | 0.150 | 0.357 | 0.061 | 0.215 | 0.278 | 0.577 | 78.6 | 62.1 | 0.180 | 0.344 | 0.079 | 0.292 |
+| | Marker | 0.336 | 0.556 | 0.080 | 0.315 | 0.530 | 0.883 | 67.6 | 49.2 | 0.619 | 0.685 | 0.114 | 0.340 |
+| | Mathpix | 0.191 | 0.365 | 0.105 | 0.384 | 0.306 | 0.454 | 77.0 | 67.1 | 0.243 | 0.320 | 0.108 | 0.304 |
+| | Docling | 0.589 | 0.909 | 0.416 | 0.987 | 0.999 | 1 | 61.3 | 25.0 | 0.627 | 0.810 | 0.313 | 0.837 |
+| | Pix2Text | 0.320 | 0.528 | 0.138 | 0.356 | 0.276 | 0.611 | 73.6 | 66.2 | 0.584 | 0.645 | 0.281 | 0.499 |
+| | Unstructured | 0.586 | 0.716 | 0.198 | 0.481 | 0.999 | 1 | 0 | 0.06 | 1 | 0.998 | 0.145 | 0.387 |
+| | OpenParse | 0.646 | 0.814 | 0.681 | 0.974 | 0.996 | 1 | 64.8 | 27.5 | 0.284 | 0.639 | 0.595 | 0.641 |
+| | PPStruct-V3 | 0.145 | 0.206 | 0.058 | 0.088 | 0.295 | 0.535 | - | - | 0.159 | 0.109 | 0.069 | 0.091 |
+| Expert VLMs | GOT-OCR | 0.287 | 0.411 | 0.189 | 0.315 | 0.360 | 0.528 | 53.2 | 47.2 | 0.459 | 0.520 | 0.141 | 0.280 |
+| | Nougat | 0.452 | 0.973 | 0.365 | 0.998 | 0.488 | 0.941 | 39.9 | 0 | 0.572 | 1.000 | 0.382 | 0.954 |
+| | Mistral OCR | 0.268 | 0.439 | 0.072 | 0.325 | 0.318 | 0.495 | 75.8 | 63.6 | 0.600 | 0.650 | 0.083 | 0.284 |
+| | OLMOCR-sglang | 0.326 | 0.469 | 0.097 | 0.293 | 0.455 | 0.655 | 68.1 | 61.3 | 0.608 | 0.652 | 0.145 | 0.277 |
+| | SmolDocling-256M | 0.493 | 0.816 | 0.262 | 0.838 | 0.753 | 0.997 | 44.9 | 16.5 | 0.729 | 0.907 | 0.227 | 0.522 |
+| | Dolphin | 0.206 | 0.306 | 0.107 | 0.197 | 0.447 | 0.580 | 77.3 | 67.2 | 0.180 | 0.285 | 0.091 | 0.162 |
+| | MinerU 2 | 0.139 | 0.240 | 0.047 | 0.109 | 0.297 | 0.536 | 82.5 | 79.0 | 0.141 | 0.195 | 0.069 | 0.118 |
+| | OCRFlux | 0.195 | 0.281 | 0.064 | 0.183 | 0.379 | 0.613 | 71.6 | 81.3 | 0.253 | 0.139 | 0.086 | 0.187 |
+| | MonkeyOCR-pro-3B | 0.138 | 0.206 | 0.067 | 0.107 | 0.246 | 0.421 | 81.5 | 87.5 | 0.139 | 0.111 | 0.100 | 0.185 |
+| General VLMs | GPT4o | 0.233 | 0.399 | 0.144 | 0.409 | 0.425 | 0.606 | 72.0 | 62.9 | 0.234 | 0.329 | 0.128 | 0.251 |
+| | Qwen2-VL-72B | 0.252 | 0.327 | 0.096 | 0.218 | 0.404 | 0.487 | 76.8 | 76.4 | 0.387 | 0.408 | 0.119 | 0.193 |
+| | Qwen2.5-VL-72B | 0.214 | 0.261 | 0.092 | 0.18 | 0.315 | 0.434 | 82.9 | 83.9 | 0.341 | 0.262 | 0.106 | 0.168 |
+| | Gemini2.5-Pro | 0.148 | 0.212 | 0.055 | 0.168 | 0.356 | 0.439 | 85.8 | 86.4 | 0.13 | 0.119 | 0.049 | 0.121 |
+| | doubao-1-5-thinking-vision-pro-250428 | 0.140 | 0.162 | 0.043 | 0.085 | 0.295 | 0.384 | 83.3 | 89.3 | 0.165 | 0.085 | 0.058 | 0.094 |
+| Expert VLMs | dots.ocr | 0.125 | 0.160 | 0.032 | 0.066 | 0.329 | 0.416 | 88.6 | 89.0 | 0.099 | 0.092 | 0.040 | 0.067 |
+
+
+
+
+
+
+#### The end-to-end text recognition performance across 9 PDF page types.
+
+
+
+
+| Model Type | Models | Book | Slides | Financial Report | Textbook | Exam Paper | Magazine | Academic Papers | Notes | Newspaper | Overall |
+|---|---|---|---|---|---|---|---|---|---|---|---|
+| Pipeline Tools | MinerU | 0.055 | 0.124 | 0.033 | 0.102 | 0.159 | 0.072 | 0.025 | 0.984 | 0.171 | 0.206 |
+| | Marker | 0.074 | 0.340 | 0.089 | 0.319 | 0.452 | 0.153 | 0.059 | 0.651 | 0.192 | 0.274 |
+| | Mathpix | 0.131 | 0.220 | 0.202 | 0.216 | 0.278 | 0.147 | 0.091 | 0.634 | 0.690 | 0.300 |
+| Expert VLMs | GOT-OCR | 0.111 | 0.222 | 0.067 | 0.132 | 0.204 | 0.198 | 0.179 | 0.388 | 0.771 | 0.267 |
+| | Nougat | 0.734 | 0.958 | 1.000 | 0.820 | 0.930 | 0.830 | 0.214 | 0.991 | 0.871 | 0.806 |
+| | Dolphin | 0.091 | 0.131 | 0.057 | 0.146 | 0.231 | 0.121 | 0.074 | 0.363 | 0.307 | 0.177 |
+| | OCRFlux | 0.068 | 0.125 | 0.092 | 0.102 | 0.119 | 0.083 | 0.047 | 0.223 | 0.536 | 0.149 |
+| | MonkeyOCR-pro-3B | 0.084 | 0.129 | 0.060 | 0.090 | 0.107 | 0.073 | 0.050 | 0.171 | 0.107 | 0.100 |
+| General VLMs | GPT4o | 0.157 | 0.163 | 0.348 | 0.187 | 0.281 | 0.173 | 0.146 | 0.607 | 0.751 | 0.316 |
+| | Qwen2.5-VL-7B | 0.148 | 0.053 | 0.111 | 0.137 | 0.189 | 0.117 | 0.134 | 0.204 | 0.706 | 0.205 |
+| | InternVL3-8B | 0.163 | 0.056 | 0.107 | 0.109 | 0.129 | 0.100 | 0.159 | 0.150 | 0.681 | 0.188 |
+| | doubao-1-5-thinking-vision-pro-250428 | 0.048 | 0.048 | 0.024 | 0.062 | 0.085 | 0.051 | 0.039 | 0.096 | 0.181 | 0.073 |
+| Expert VLMs | dots.ocr | 0.031 | 0.047 | 0.011 | 0.082 | 0.079 | 0.028 | 0.029 | 0.109 | 0.056 | 0.055 |
+
+
+
+
+
+> **Notes:**
+> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR), [OmniDocBench](https://github.com/opendatalab/OmniDocBench), and our own internal evaluations.
+> - We delete the Page-header and Page-footer cells in the result markdown.
+> - We use the tikz_preprocess pipeline to upsample the images to 200 DPI.
+
+
+### 2. **dots.ocr-bench**
+
+This is an in-house benchmark containing 1,493 PDF images across 100 languages.
+
+#### The end-to-end evaluation results of different tasks.
+
+
+
+
+| Methods | Overall Edit↓ | Text Edit↓ | Formula Edit↓ | Table TEDS↑ | Table Edit↓ | Read Order Edit↓ |
+|---|---|---|---|---|---|---|
+| MonkeyOCR-3B | 0.483 | 0.445 | 0.627 | 50.93 | 0.452 | 0.409 |
+| doubao-1-5-thinking-vision-pro-250428 | 0.291 | 0.226 | 0.440 | 71.2 | 0.260 | 0.238 |
+| doubao-1-6 | 0.299 | 0.270 | 0.417 | 71.0 | 0.258 | 0.253 |
+| Gemini2.5-Pro | 0.251 | 0.163 | 0.402 | 77.1 | 0.236 | 0.202 |
+| dots.ocr | 0.177 | 0.075 | 0.297 | 79.2 | 0.186 | 0.152 |
+
+
+
+
+
+> **Notes:**
+> - We use the same metric calculation pipeline as [OmniDocBench](https://github.com/opendatalab/OmniDocBench).
+> - We delete the Page-header and Page-footer cells in the result markdown.
+
+#### Layout Detection
+
+
+
+
+All metric columns report F1↑; the first five are at IoU=.50:.05:.95, the last five at IoU=.50.
+
+| Method | Overall (.50:.95) | Text (.50:.95) | Formula (.50:.95) | Table (.50:.95) | Picture (.50:.95) | Overall (.50) | Text (.50) | Formula (.50) | Table (.50) | Picture (.50) |
+|---|---|---|---|---|---|---|---|---|---|---|
+| DocLayout-YOLO-DocStructBench | 0.733 | 0.694 | 0.480 | 0.803 | 0.619 | 0.806 | 0.779 | 0.620 | 0.858 | 0.678 |
+| dots.ocr-parse all | 0.831 | 0.801 | 0.654 | 0.838 | 0.748 | 0.922 | 0.909 | 0.770 | 0.888 | 0.831 |
+| dots.ocr-detection only | 0.845 | 0.816 | 0.716 | 0.875 | 0.765 | 0.930 | 0.917 | 0.832 | 0.918 | 0.843 |
+
+
+
+
+
+> **Notes:**
+> - We use prompt_layout_all_en for **parse all** and prompt_layout_only_en for **detection only**; see [prompts](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py) for details.
+
+
+### 3. olmOCR-bench
+
+
+
+
+| Model | ArXiv | Old Scans Math | Tables | Old Scans | Headers and Footers | Multi column | Long Tiny Text | Base | Overall |
+|---|---|---|---|---|---|---|---|---|---|
+| GOT OCR | 52.7 | 52.0 | 0.2 | 22.1 | 93.6 | 42.0 | 29.9 | 94.0 | 48.3 ± 1.1 |
+| Marker | 76.0 | 57.9 | 57.6 | 27.8 | 84.9 | 72.9 | 84.6 | 99.1 | 70.1 ± 1.1 |
+| MinerU | 75.4 | 47.4 | 60.9 | 17.3 | 96.6 | 59.0 | 39.1 | 96.6 | 61.5 ± 1.1 |
+| Mistral OCR | 77.2 | 67.5 | 60.6 | 29.3 | 93.6 | 71.3 | 77.1 | 99.4 | 72.0 ± 1.1 |
+| Nanonets OCR | 67.0 | 68.6 | 77.7 | 39.5 | 40.7 | 69.9 | 53.4 | 99.3 | 64.5 ± 1.1 |
+| GPT-4o (No Anchor) | 51.5 | 75.5 | 69.1 | 40.9 | 94.2 | 68.9 | 54.1 | 96.7 | 68.9 ± 1.1 |
+| GPT-4o (Anchored) | 53.5 | 74.5 | 70.0 | 40.7 | 93.8 | 69.3 | 60.6 | 96.8 | 69.9 ± 1.1 |
+| Gemini Flash 2 (No Anchor) | 32.1 | 56.3 | 61.4 | 27.8 | 48.0 | 58.7 | 84.4 | 94.0 | 57.8 ± 1.1 |
+| Gemini Flash 2 (Anchored) | 54.5 | 56.1 | 72.1 | 34.2 | 64.7 | 61.5 | 71.5 | 95.6 | 63.8 ± 1.2 |
+| Qwen 2 VL (No Anchor) | 19.7 | 31.7 | 24.2 | 17.1 | 88.9 | 8.3 | 6.8 | 55.5 | 31.5 ± 0.9 |
+| Qwen 2.5 VL (No Anchor) | 63.1 | 65.7 | 67.3 | 38.6 | 73.6 | 68.3 | 49.1 | 98.3 | 65.5 ± 1.2 |
+| olmOCR v0.1.75 (No Anchor) | 71.5 | 71.4 | 71.4 | 42.8 | 94.1 | 77.7 | 71.0 | 97.8 | 74.7 ± 1.1 |
+| olmOCR v0.1.75 (Anchored) | 74.9 | 71.2 | 71.0 | 42.2 | 94.5 | 78.3 | 73.3 | 98.3 | 75.5 ± 1.0 |
+| MonkeyOCR-pro-3B | 83.8 | 68.8 | 74.6 | 36.1 | 91.2 | 76.6 | 80.1 | 95.3 | 75.8 ± 1.0 |
+| dots.ocr | 82.1 | 64.2 | 88.3 | 40.9 | 94.1 | 82.4 | 81.2 | 99.5 | 79.1 ± 1.0 |
+
+
+
+
+
+> **Note:**
+> - The metrics are from [MonkeyOCR](https://github.com/Yuliang-Liu/MonkeyOCR),
+[olmocr](https://github.com/allenai/olmocr), and our own internal evaluations.
+> - We delete the Page-header and Page-footer cells in the result markdown.
+
+## Methods
+
+### Pretrain
+
+We developed a foundational Vision-Language Model (VLM) through a three-stage training process:
+
+* **Stage1: Vision Encoder Pre-training**
+ We trained a 1.2-billion-parameter Vision Encoder (VE) from scratch on a vast and comprehensive dataset of image-text pairs.
+* **Stage2: VE Continued Pre-training**
+  We incorporated additional visual data, including OCR, video, and grounding data. Leveraging the `NaViT` architecture, our model supports high-resolution inputs of up to 11 million pixels. The VE was then aligned with the `Qwen2.5-1.5B` language model and trained on this diverse visual data with the LLM frozen, which resulted in our general vision encoder `dots.vit`.
+* **Stage3: VLM Specialization for OCR**
+ We then used a pure OCR dataset for training. To improve training efficiency, we first trained on a certain volume of tokens with the VE parameters frozen. Subsequently, we unfroze all parameters and continued training on an additional one-fifth of that token volume, which produced our foundational OCR model, `dots.ocr.base`.
+
+### SFT
+
+The SFT stage was built on the following key strategies:
+
+* **Diverse SFT Dataset:** We constructed a dataset of nearly 300,000 samples, integrating our in-house manual annotations, synthetic data (tables, formulas, multilingual OCR), as well as open-source datasets.
+* **Iterative Data Flywheel:** We employed a feedback loop to build an in-house multilingual structured layout dataset of 15k samples. This process, repeated over three iterations, involved:
+ * Sampling "bad cases" based on model performance.
+ * Manually annotating these cases.
+ * Adding them back into the training set.
+* **Reading Order:** We corrected the sequence of all layout element boxes to establish the correct reading order. This was primarily done using larger models for sorting, supplemented by rule-based post-processing methods. We found that with sufficient data diversity and quality, training the model on a list of elements sorted in their natural reading order yields excellent results.
+* **Quality and Robustness:** We built a multi-expert system for data cleaning and distillation, and applied data augmentation (resizing, rotation, noise) to improve model robustness.
+* **Multitask training:** We leveraged a single source of structured layout data to generate the SFT data with a variety of prompts. This approach enables the model to perform different tasks, such as detection and recognition, based on the specific prompt provided; a sketch of the idea follows this list.
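+
+A minimal sketch of that multitask expansion, assuming a layout record in the same JSON schema the model emits at inference time (the generator is illustrative, not our actual pipeline; only the prompt names follow dots_ocr/utils/prompts.py):
+
+```python
+import json
+
+def make_sft_samples(image_path, cells):
+    """Expand one layout annotation into several task-specific SFT samples.
+
+    cells: list of {"bbox": [...], "category": ..., "text": ...} dicts.
+    """
+    full = json.dumps(cells, ensure_ascii=False)  # detection + recognition target
+    boxes = json.dumps([{"bbox": c["bbox"], "category": c["category"]} for c in cells],
+                       ensure_ascii=False)  # detection-only target
+    text = "\n\n".join(c.get("text", "") for c in cells
+                       if c["category"] not in ("Page-header", "Page-footer", "Picture"))
+    return [
+        {"image": image_path, "prompt": "prompt_layout_all_en", "response": full},
+        {"image": image_path, "prompt": "prompt_layout_only_en", "response": boxes},
+        {"image": image_path, "prompt": "prompt_ocr", "response": text},
+    ]
+```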
+
+The resulting `dots.ocr` model demonstrates performance on par with models possessing significantly more parameters.
+
+
+## Limitation & Future Work
+
+- **Complex Document Elements:**
+  - **Table & Formula**: dots.ocr is not yet perfect at parsing high-complexity tables and formulas.
+ - **Picture**: Pictures in documents are currently not parsed.
+
+- **Parsing Failures:** The model may fail to parse under certain conditions:
+  - When the character-to-pixel ratio is excessively high. Try enlarging the image or increasing the PDF parsing DPI (a setting of 200 is recommended). However, please note that the model performs optimally on images with a resolution under 11,289,600 pixels.
+ - Continuous special characters, such as ellipses (`...`) and underscores (`_`), may cause the prediction output to repeat endlessly. In such scenarios, consider using alternative prompts like `prompt_layout_only_en`, `prompt_ocr`, or `prompt_grounding_ocr` ([details here](https://github.com/rednote-hilab/dots.ocr/blob/master/dots_ocr/utils/prompts.py)).
+
+- **Performance Bottleneck:** Despite its 1.7B parameter LLM foundation, **dots.ocr** is not yet optimized for high-throughput processing of large PDF volumes.
+
+We are committed to achieving more accurate table and formula parsing, as well as enhancing the model's OCR capabilities for broader generalization, all while aiming for **a more powerful, more efficient model**. Furthermore, we are actively considering the development of **a more general-purpose perception model** based on Vision-Language Models (VLMs), which would integrate general detection, image captioning, and OCR tasks into a unified framework. **Parsing the content of the pictures in the documents** is also a key priority for our future work.
+We believe that collaboration is the key to tackling these exciting challenges. If you are passionate about advancing the frontiers of document intelligence and are interested in contributing to these future endeavors, we would love to hear from you. Please reach out to us via email at yanqing4@xiaohongshu.com.
+
+## Author List
+
+### Contributors
+Mi Jian, Yumeng Li, Bowen Wang, Xiaomin He, Zheyuan Gu
+
+### Project Leader
+Qing Yan
+
+### Advisor
+Colin Zhang, Lei Zhang
diff --git a/assets/chart.png b/assets/chart.png
new file mode 100644
index 0000000000000000000000000000000000000000..86ec3809460b6adba863d1aa3affcb213e430aed
--- /dev/null
+++ b/assets/chart.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0576d51813061c25f36c0fcbca837fed1a1d8e06042f2b352be4bdc7b7b5cab1
+size 64522
diff --git a/assets/logo.png b/assets/logo.png
new file mode 100644
index 0000000000000000000000000000000000000000..4d9ec0384bc366976145431b15322e28295b8a2b
--- /dev/null
+++ b/assets/logo.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad0b70b18bbf2fb7ad1a838437c1c6069eeb3fdf2df42f7299ec9abeb3427ae4
+size 67197
diff --git a/assets/showcase/Tibetan.png b/assets/showcase/Tibetan.png
new file mode 100644
index 0000000000000000000000000000000000000000..4c43b8aa2b06bd05843d36f20664f14b4f5d2829
--- /dev/null
+++ b/assets/showcase/Tibetan.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97bdb98172dc2d5c6a4668188588eb15cc33ecd042f9d9b8224ea933229741ce
+size 2885077
diff --git a/assets/showcase/formula1.png b/assets/showcase/formula1.png
new file mode 100644
index 0000000000000000000000000000000000000000..8ef95a3d5ffddcd366e610fa1f5235c78fa2427c
--- /dev/null
+++ b/assets/showcase/formula1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f7196032f7c4cc6aad9112ba4edeca6e1c3b303c34828711e107f0bb6603c44
+size 1296658
diff --git a/assets/showcase/formula2.png b/assets/showcase/formula2.png
new file mode 100644
index 0000000000000000000000000000000000000000..d2106f44d2ab6629c147b27006091506aebe8638
--- /dev/null
+++ b/assets/showcase/formula2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a6edff564ee572a17062a2356eb6d83b98fc15e8bf1544b554f62003ce3ec98b
+size 1736024
diff --git a/assets/showcase/formula3.png b/assets/showcase/formula3.png
new file mode 100644
index 0000000000000000000000000000000000000000..ebd2ac3f2a69935384bc7268c7c51d6aaa9e61ba
--- /dev/null
+++ b/assets/showcase/formula3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:45b6331b43e3b11d0af4674f021f04c9b9e4e096cf533c8f5f8a15d46261982f
+size 1082837
diff --git a/assets/showcase/grounding.png b/assets/showcase/grounding.png
new file mode 100644
index 0000000000000000000000000000000000000000..db4739ec56e93b4bbe0df2b091da8b5d56df5b1c
--- /dev/null
+++ b/assets/showcase/grounding.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a11a2b2feba8208820ec35c8036c1ee5c0588ce9c9010a4e9ce7901c7cb65e8a
+size 1036980
diff --git a/assets/showcase/kannada.png b/assets/showcase/kannada.png
new file mode 100644
index 0000000000000000000000000000000000000000..0ae79d724acb5456ed7e0dc18dcd62d6a5692c16
--- /dev/null
+++ b/assets/showcase/kannada.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96f0d36e3e0b08029903066a931defe9ddf002e515d7c63262dcbeeb6b86b32a
+size 1920354
diff --git a/assets/showcase/nl.png b/assets/showcase/nl.png
new file mode 100644
index 0000000000000000000000000000000000000000..ba0846f024cd803b1cae6c4c949367b0819a020d
--- /dev/null
+++ b/assets/showcase/nl.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53e3bd10e4a85b9dfdbb3fc3b192c47f9834101dc224d4d979c145a0a574c700
+size 3844344
diff --git a/assets/showcase/reading_order.png b/assets/showcase/reading_order.png
new file mode 100644
index 0000000000000000000000000000000000000000..d92bf9015a2f4c9421f37edb1eaca1d83ef3916e
--- /dev/null
+++ b/assets/showcase/reading_order.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:916b8cd5833ec7bbbd896771537ab66aa96a9c7f70e52685d7df533b6b0cbd2a
+size 2900854
diff --git a/assets/showcase/russian.png b/assets/showcase/russian.png
new file mode 100644
index 0000000000000000000000000000000000000000..e9c1a136eae5040d403bec47c911efc6dcbba798
--- /dev/null
+++ b/assets/showcase/russian.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:307f66b083df466e5a84b049e6d5cf8117050d6e1a612dc2b2fe7f2c0e996b9c
+size 3057091
diff --git a/assets/showcase/table1.png b/assets/showcase/table1.png
new file mode 100644
index 0000000000000000000000000000000000000000..0b1b4442f40ff9299eb6d3211ada51f4d9c35b87
--- /dev/null
+++ b/assets/showcase/table1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0f75ef4c9a995a8cd29585dc7e9714fa9cb0e98490ededc745094e6c9dfd375
+size 1453950
diff --git a/assets/showcase/table2.png b/assets/showcase/table2.png
new file mode 100644
index 0000000000000000000000000000000000000000..c38838b3546708b75755dfc2f0f2110ca2f0f846
--- /dev/null
+++ b/assets/showcase/table2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6084dac8845096749ba98191552182b98bde72806577b693d02069a1cc91b5b
+size 1771637
diff --git a/assets/showcase/table3.png b/assets/showcase/table3.png
new file mode 100644
index 0000000000000000000000000000000000000000..31585bd7ce7e30480697f5e2a8ba406d8bb46686
--- /dev/null
+++ b/assets/showcase/table3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c42c3b33230d4d00f83b41cb22a9f21511de138bbbc4ce04c62aa916eed53428
+size 1512598
diff --git a/assets/showcase/tradition_zh.png b/assets/showcase/tradition_zh.png
new file mode 100644
index 0000000000000000000000000000000000000000..10ae9e633a29b2392d947ae36fd99d6cd7e123cc
--- /dev/null
+++ b/assets/showcase/tradition_zh.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dfe7892659fdb07733ba102eeb55f2532a604194596eabb81b28d847a8127e50
+size 1873785
diff --git a/assets/showcase_origin/Tibetan.png b/assets/showcase_origin/Tibetan.png
new file mode 100644
index 0000000000000000000000000000000000000000..8aac407c730dde7505953d4e9b5cb8e7fe041221
--- /dev/null
+++ b/assets/showcase_origin/Tibetan.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a761e2eeb987ea3c08ade69c9ffe5781d7e9a06828a1abd474c63b7f27e6d278
+size 965736
diff --git a/assets/showcase_origin/formula_1.jpg b/assets/showcase_origin/formula_1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..790a4949cfe45c6a0eea47017a4d2875b8d09d83
--- /dev/null
+++ b/assets/showcase_origin/formula_1.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b01fa0b9f47e2b0de6b67e02dc869600c8d715b98a952e05868a86d958348ce
+size 677437
diff --git a/assets/showcase_origin/formula_2.jpg b/assets/showcase_origin/formula_2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ab7e23fa5be1fa4f5736cbd23a90d9a78c55a964
--- /dev/null
+++ b/assets/showcase_origin/formula_2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:322ec389bcd88e6870ffb91ccf5ca6b667b02b5c129f44e3c6e93877e7f95800
+size 299567
diff --git a/assets/showcase_origin/formula_3.jpg b/assets/showcase_origin/formula_3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..79b5c067459a6a411d9e71467155dc4cd44dac16
--- /dev/null
+++ b/assets/showcase_origin/formula_3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e47451f351abdd184f8bda270e8fba08cb1e739157584d064d9245e4fbf29247
+size 269355
diff --git a/assets/showcase_origin/kannada.jpg b/assets/showcase_origin/kannada.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..f4a027212eaeddd787bdfea180b1c822fe1a61a6
--- /dev/null
+++ b/assets/showcase_origin/kannada.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dad7aefe09cb39d7db21cd9e1c86c6fd47a2775e55b2fbe087ebdc2f44f0ab9f
+size 455554
diff --git a/assets/showcase_origin/nl.png b/assets/showcase_origin/nl.png
new file mode 100644
index 0000000000000000000000000000000000000000..e9d91e95528c50af1d596cfb6a7439b8af9b05ce
--- /dev/null
+++ b/assets/showcase_origin/nl.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aabb798d409851fb0fee59f3152354827fc633c5f9103a6ae130e6849e4c6030
+size 1151744
diff --git a/assets/showcase_origin/reading_order.png b/assets/showcase_origin/reading_order.png
new file mode 100644
index 0000000000000000000000000000000000000000..f23a7caba349e98fc7a97a2d5fe4a3d8769f3693
--- /dev/null
+++ b/assets/showcase_origin/reading_order.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ebf62f427254a527d917b2d7acb3e68f7a6881277ffa382192e584508a84ca91
+size 688864
diff --git a/assets/showcase_origin/russian.png b/assets/showcase_origin/russian.png
new file mode 100644
index 0000000000000000000000000000000000000000..77588ba287ddcd4a09acdba645ab9147113fd982
--- /dev/null
+++ b/assets/showcase_origin/russian.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:46e1e851f18e67153291b0608563eb98095975e1a9b0e23aa7a2308e229fdf49
+size 1796445
diff --git a/assets/showcase_origin/table_1.jpg b/assets/showcase_origin/table_1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..488d8d30d8239567e50e634059c50797638e5974
--- /dev/null
+++ b/assets/showcase_origin/table_1.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90345584ccc2c4a883779e5d47693276e8cf3fe752700af4f03b3142ab46cfa2
+size 772990
diff --git a/assets/showcase_origin/table_2.jpg b/assets/showcase_origin/table_2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a1971ba58da659904763a4550ae6c97ba598ab7e
--- /dev/null
+++ b/assets/showcase_origin/table_2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:308a117b9293b92ca11f2ead9d2bca58df39c435e53d50e7a78785785041acf1
+size 942140
diff --git a/assets/showcase_origin/table_3.jpg b/assets/showcase_origin/table_3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..24a7b715762854bc97c10376d834d92bfd9d6c2f
--- /dev/null
+++ b/assets/showcase_origin/table_3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4542239b141f27f85006b1ec533e671e6e338ed4e18430b5974aa7a2d1105fef
+size 2056191
diff --git a/assets/showcase_origin/tradition_zh.png b/assets/showcase_origin/tradition_zh.png
new file mode 100644
index 0000000000000000000000000000000000000000..f73ba28ffa5602b102156b8a0a0e53dedcf89eed
--- /dev/null
+++ b/assets/showcase_origin/tradition_zh.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:318d5e7b11b0569deb0021a057cd8068d5e1b16ce50dfa2e8628998b1b5a448d
+size 959717
diff --git a/assets/wechat.png b/assets/wechat.png
new file mode 100644
index 0000000000000000000000000000000000000000..68f52cc85590ca96fcdda8d06ad89e1858a3d649
--- /dev/null
+++ b/assets/wechat.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2208f35514007740f9b1efc1f738f0735095f5d6cd79b47eb7fac63bc7a0941
+size 592830
diff --git a/demo/demo_colab_remote_server.ipynb b/demo/demo_colab_remote_server.ipynb
new file mode 100644
index 0000000000000000000000000000000000000000..7412beb14a7d44ce1e9becb254204745a0250d0f
--- /dev/null
+++ b/demo/demo_colab_remote_server.ipynb
@@ -0,0 +1,1166 @@
+{
+ "nbformat": 4,
+ "nbformat_minor": 0,
+ "metadata": {
+ "colab": {
+ "provenance": [],
+ "machine_shape": "hm",
+ "gpuType": "L4",
+ "authorship_tag": "ABX9TyOkGQh7maXiQhQ6pYoY2NaU",
+ "include_colab_link": true
+ },
+ "kernelspec": {
+ "name": "python3",
+ "display_name": "Python 3"
+ },
+ "language_info": {
+ "name": "python"
+ },
+ "accelerator": "GPU"
+ },
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {
+ "id": "view-in-github",
+ "colab_type": "text"
+ },
+ "source": [
+ "
"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "# DotsOCR vLLM Openai API Compatible server"
+ ],
+ "metadata": {
+ "id": "PshK9ZarVTfM"
+ }
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!pip install pyngrok\n",
+ "!ngrok authtoken # Get this from https://dashboard.ngrok.com/"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "oyY3E3mlOXNX",
+ "outputId": "8d7ba92f-7170-4b2e-e8a0-c7f94096f7e0"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Requirement already satisfied: pyngrok in /usr/local/lib/python3.11/dist-packages (7.3.0)\n",
+ "Requirement already satisfied: PyYAML>=5.1 in /usr/local/lib/python3.11/dist-packages (from pyngrok) (6.0.2)\n",
+ "Authtoken saved to configuration file: /root/.config/ngrok/ngrok.yml\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!conda create -n dots_ocr python=3.12\n",
+ "!conda activate dots_ocr\n",
+ "\n",
+ "!git clone https://github.com/rednote-hilab/dots.ocr.git"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "BcV7hkvuRnwS",
+ "outputId": "7cb9c743-6f41-4c90-a05b-90bce2c29ced"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "/bin/bash: line 1: conda: command not found\n",
+ "/bin/bash: line 1: conda: command not found\n",
+ "Cloning into 'dots.ocr'...\n",
+ "remote: Enumerating objects: 163, done.\u001b[K\n",
+ "remote: Counting objects: 100% (51/51), done.\u001b[K\n",
+ "remote: Compressing objects: 100% (31/31), done.\u001b[K\n",
+ "remote: Total 163 (delta 30), reused 30 (delta 20), pack-reused 112 (from 1)\u001b[K\n",
+ "Receiving objects: 100% (163/163), 35.82 MiB | 13.64 MiB/s, done.\n",
+ "Resolving deltas: 100% (56/56), done.\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "cd /content/dots.ocr"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "Rsc_MkGfRpit",
+ "outputId": "5265315f-c27c-4346-cda7-aba7d4c226d6"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "/content/dots.ocr\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "# Install pytorch, see https://pytorch.org/get-started/previous-versions/ for your cuda version\n",
+ "!pip install torch==2.7.0 torchvision==0.22.0 torchaudio==2.7.0 --index-url https://download.pytorch.org/whl/cu128\n",
+ "!pip install -e ."
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "OxLaSyTJPFwk",
+ "outputId": "a073dcdd-5e5d-4f62-d3b9-be9e9cf98d2f"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Looking in indexes: https://download.pytorch.org/whl/cu128\n",
+ "Requirement already satisfied: torch==2.7.0 in /usr/local/lib/python3.11/dist-packages (2.7.0+cu128)\n",
+ "Requirement already satisfied: torchvision==0.22.0 in /usr/local/lib/python3.11/dist-packages (0.22.0+cu128)\n",
+ "Requirement already satisfied: torchaudio==2.7.0 in /usr/local/lib/python3.11/dist-packages (2.7.0+cu128)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (3.18.0)\n",
+ "Requirement already satisfied: typing-extensions>=4.10.0 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (4.14.1)\n",
+ "Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (1.13.3)\n",
+ "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (3.5)\n",
+ "Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (3.1.6)\n",
+ "Requirement already satisfied: fsspec in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (2025.3.0)\n",
+ "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.8.61 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.61)\n",
+ "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.8.57 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.57)\n",
+ "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.8.57 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.57)\n",
+ "Requirement already satisfied: nvidia-cudnn-cu12==9.7.1.26 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (9.7.1.26)\n",
+ "Requirement already satisfied: nvidia-cublas-cu12==12.8.3.14 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.3.14)\n",
+ "Requirement already satisfied: nvidia-cufft-cu12==11.3.3.41 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (11.3.3.41)\n",
+ "Requirement already satisfied: nvidia-curand-cu12==10.3.9.55 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (10.3.9.55)\n",
+ "Requirement already satisfied: nvidia-cusolver-cu12==11.7.2.55 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (11.7.2.55)\n",
+ "Requirement already satisfied: nvidia-cusparse-cu12==12.5.7.53 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.5.7.53)\n",
+ "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (0.6.3)\n",
+ "Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (2.26.2)\n",
+ "Requirement already satisfied: nvidia-nvtx-cu12==12.8.55 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.55)\n",
+ "Requirement already satisfied: nvidia-nvjitlink-cu12==12.8.61 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (12.8.61)\n",
+ "Requirement already satisfied: nvidia-cufile-cu12==1.13.0.11 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (1.13.0.11)\n",
+ "Requirement already satisfied: triton==3.3.0 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.0) (3.3.0)\n",
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.11/dist-packages (from torchvision==0.22.0) (2.0.2)\n",
+ "Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in /usr/local/lib/python3.11/dist-packages (from torchvision==0.22.0) (11.3.0)\n",
+ "Requirement already satisfied: setuptools>=40.8.0 in /usr/local/lib/python3.11/dist-packages (from triton==3.3.0->torch==2.7.0) (75.2.0)\n",
+ "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy>=1.13.3->torch==2.7.0) (1.3.0)\n",
+ "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.11/dist-packages (from jinja2->torch==2.7.0) (3.0.2)\n",
+ "Obtaining file:///content/dots.ocr\n",
+ " Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ "Requirement already satisfied: gradio in /usr/local/lib/python3.11/dist-packages (from dots_ocr==1.0) (5.39.0)\n",
+ "Collecting gradio_image_annotation (from dots_ocr==1.0)\n",
+ " Downloading gradio_image_annotation-0.4.0-py3-none-any.whl.metadata (17 kB)\n",
+ "Collecting PyMuPDF (from dots_ocr==1.0)\n",
+ " Downloading pymupdf-1.26.3-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (3.4 kB)\n",
+ "Requirement already satisfied: openai in /usr/local/lib/python3.11/dist-packages (from dots_ocr==1.0) (1.98.0)\n",
+ "Collecting qwen_vl_utils (from dots_ocr==1.0)\n",
+ " Downloading qwen_vl_utils-0.0.11-py3-none-any.whl.metadata (6.3 kB)\n",
+ "Collecting transformers==4.51.3 (from dots_ocr==1.0)\n",
+ " Downloading transformers-4.51.3-py3-none-any.whl.metadata (38 kB)\n",
+ "Requirement already satisfied: huggingface_hub in /usr/local/lib/python3.11/dist-packages (from dots_ocr==1.0) (0.34.3)\n",
+ "Collecting modelscope (from dots_ocr==1.0)\n",
+ " Downloading modelscope-1.28.2-py3-none-any.whl.metadata (39 kB)\n",
+ "Collecting flash-attn==2.8.0.post2 (from dots_ocr==1.0)\n",
+ " Downloading flash_attn-2.8.0.post2.tar.gz (7.9 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.9/7.9 MB\u001b[0m \u001b[31m124.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25h Preparing metadata (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ "Requirement already satisfied: accelerate in /usr/local/lib/python3.11/dist-packages (from dots_ocr==1.0) (1.9.0)\n",
+ "Requirement already satisfied: torch in /usr/local/lib/python3.11/dist-packages (from flash-attn==2.8.0.post2->dots_ocr==1.0) (2.7.0+cu128)\n",
+ "Requirement already satisfied: einops in /usr/local/lib/python3.11/dist-packages (from flash-attn==2.8.0.post2->dots_ocr==1.0) (0.8.1)\n",
+ "Requirement already satisfied: filelock in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (3.18.0)\n",
+ "Requirement already satisfied: numpy>=1.17 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (2.0.2)\n",
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (25.0)\n",
+ "Requirement already satisfied: pyyaml>=5.1 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (6.0.2)\n",
+ "Requirement already satisfied: regex!=2019.12.17 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (2024.11.6)\n",
+ "Requirement already satisfied: requests in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (2.32.3)\n",
+ "Requirement already satisfied: tokenizers<0.22,>=0.21 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (0.21.4)\n",
+ "Requirement already satisfied: safetensors>=0.4.3 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (0.5.3)\n",
+ "Requirement already satisfied: tqdm>=4.27 in /usr/local/lib/python3.11/dist-packages (from transformers==4.51.3->dots_ocr==1.0) (4.67.1)\n",
+ "Requirement already satisfied: fsspec>=2023.5.0 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub->dots_ocr==1.0) (2025.3.0)\n",
+ "Requirement already satisfied: typing-extensions>=3.7.4.3 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub->dots_ocr==1.0) (4.14.1)\n",
+ "Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/lib/python3.11/dist-packages (from huggingface_hub->dots_ocr==1.0) (1.1.5)\n",
+ "Requirement already satisfied: psutil in /usr/local/lib/python3.11/dist-packages (from accelerate->dots_ocr==1.0) (5.9.5)\n",
+ "Requirement already satisfied: aiofiles<25.0,>=22.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (24.1.0)\n",
+ "Requirement already satisfied: anyio<5.0,>=3.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (4.10.0)\n",
+ "Requirement already satisfied: brotli>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (1.1.0)\n",
+ "Requirement already satisfied: fastapi<1.0,>=0.115.2 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.116.1)\n",
+ "Requirement already satisfied: ffmpy in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.6.1)\n",
+ "Requirement already satisfied: gradio-client==1.11.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (1.11.0)\n",
+ "Requirement already satisfied: groovy~=0.1 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.1.2)\n",
+ "Requirement already satisfied: httpx<1.0,>=0.24.1 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.28.1)\n",
+ "Requirement already satisfied: jinja2<4.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (3.1.6)\n",
+ "Requirement already satisfied: markupsafe<4.0,>=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (3.0.2)\n",
+ "Requirement already satisfied: orjson~=3.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (3.11.1)\n",
+ "Requirement already satisfied: pandas<3.0,>=1.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (2.2.2)\n",
+ "Requirement already satisfied: pillow<12.0,>=8.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (11.3.0)\n",
+ "Requirement already satisfied: pydantic<2.12,>=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (2.11.7)\n",
+ "Requirement already satisfied: pydub in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.25.1)\n",
+ "Requirement already satisfied: python-multipart>=0.0.18 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.0.20)\n",
+ "Requirement already satisfied: ruff>=0.9.3 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.12.7)\n",
+ "Requirement already satisfied: safehttpx<0.2.0,>=0.1.6 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.1.6)\n",
+ "Requirement already satisfied: semantic-version~=2.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (2.10.0)\n",
+ "Requirement already satisfied: starlette<1.0,>=0.40.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.47.2)\n",
+ "Requirement already satisfied: tomlkit<0.14.0,>=0.12.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.13.3)\n",
+ "Requirement already satisfied: typer<1.0,>=0.12 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.16.0)\n",
+ "Requirement already satisfied: uvicorn>=0.14.0 in /usr/local/lib/python3.11/dist-packages (from gradio->dots_ocr==1.0) (0.35.0)\n",
+ "Requirement already satisfied: websockets<16.0,>=10.0 in /usr/local/lib/python3.11/dist-packages (from gradio-client==1.11.0->gradio->dots_ocr==1.0) (15.0.1)\n",
+ "Requirement already satisfied: setuptools in /usr/local/lib/python3.11/dist-packages (from modelscope->dots_ocr==1.0) (75.2.0)\n",
+ "Requirement already satisfied: urllib3>=1.26 in /usr/local/lib/python3.11/dist-packages (from modelscope->dots_ocr==1.0) (2.5.0)\n",
+ "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.11/dist-packages (from openai->dots_ocr==1.0) (1.9.0)\n",
+ "Requirement already satisfied: jiter<1,>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from openai->dots_ocr==1.0) (0.10.0)\n",
+ "Requirement already satisfied: sniffio in /usr/local/lib/python3.11/dist-packages (from openai->dots_ocr==1.0) (1.3.1)\n",
+ "Collecting av (from qwen_vl_utils->dots_ocr==1.0)\n",
+ " Downloading av-15.0.0-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (4.6 kB)\n",
+ "Requirement already satisfied: idna>=2.8 in /usr/local/lib/python3.11/dist-packages (from anyio<5.0,>=3.0->gradio->dots_ocr==1.0) (3.10)\n",
+ "Requirement already satisfied: certifi in /usr/local/lib/python3.11/dist-packages (from httpx<1.0,>=0.24.1->gradio->dots_ocr==1.0) (2025.8.3)\n",
+ "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/dist-packages (from httpx<1.0,>=0.24.1->gradio->dots_ocr==1.0) (1.0.9)\n",
+ "Requirement already satisfied: h11>=0.16 in /usr/local/lib/python3.11/dist-packages (from httpcore==1.*->httpx<1.0,>=0.24.1->gradio->dots_ocr==1.0) (0.16.0)\n",
+ "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.11/dist-packages (from pandas<3.0,>=1.0->gradio->dots_ocr==1.0) (2.9.0.post0)\n",
+ "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.11/dist-packages (from pandas<3.0,>=1.0->gradio->dots_ocr==1.0) (2025.2)\n",
+ "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.11/dist-packages (from pandas<3.0,>=1.0->gradio->dots_ocr==1.0) (2025.2)\n",
+ "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<2.12,>=2.0->gradio->dots_ocr==1.0) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.33.2 in /usr/local/lib/python3.11/dist-packages (from pydantic<2.12,>=2.0->gradio->dots_ocr==1.0) (2.33.2)\n",
+ "Requirement already satisfied: typing-inspection>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from pydantic<2.12,>=2.0->gradio->dots_ocr==1.0) (0.4.1)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests->transformers==4.51.3->dots_ocr==1.0) (3.4.2)\n",
+ "Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (1.13.3)\n",
+ "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (3.5)\n",
+ "Requirement already satisfied: nvidia-cuda-nvrtc-cu12==12.8.61 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.61)\n",
+ "Requirement already satisfied: nvidia-cuda-runtime-cu12==12.8.57 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.57)\n",
+ "Requirement already satisfied: nvidia-cuda-cupti-cu12==12.8.57 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.57)\n",
+ "Requirement already satisfied: nvidia-cudnn-cu12==9.7.1.26 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (9.7.1.26)\n",
+ "Requirement already satisfied: nvidia-cublas-cu12==12.8.3.14 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.3.14)\n",
+ "Requirement already satisfied: nvidia-cufft-cu12==11.3.3.41 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (11.3.3.41)\n",
+ "Requirement already satisfied: nvidia-curand-cu12==10.3.9.55 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (10.3.9.55)\n",
+ "Requirement already satisfied: nvidia-cusolver-cu12==11.7.2.55 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (11.7.2.55)\n",
+ "Requirement already satisfied: nvidia-cusparse-cu12==12.5.7.53 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.5.7.53)\n",
+ "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (0.6.3)\n",
+ "Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (2.26.2)\n",
+ "Requirement already satisfied: nvidia-nvtx-cu12==12.8.55 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.55)\n",
+ "Requirement already satisfied: nvidia-nvjitlink-cu12==12.8.61 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (12.8.61)\n",
+ "Requirement already satisfied: nvidia-cufile-cu12==1.13.0.11 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (1.13.0.11)\n",
+ "Requirement already satisfied: triton==3.3.0 in /usr/local/lib/python3.11/dist-packages (from torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (3.3.0)\n",
+ "Requirement already satisfied: click>=8.0.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio->dots_ocr==1.0) (8.2.1)\n",
+ "Requirement already satisfied: shellingham>=1.3.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio->dots_ocr==1.0) (1.5.4)\n",
+ "Requirement already satisfied: rich>=10.11.0 in /usr/local/lib/python3.11/dist-packages (from typer<1.0,>=0.12->gradio->dots_ocr==1.0) (13.9.4)\n",
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.11/dist-packages (from python-dateutil>=2.8.2->pandas<3.0,>=1.0->gradio->dots_ocr==1.0) (1.17.0)\n",
+ "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio->dots_ocr==1.0) (3.0.0)\n",
+ "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.11/dist-packages (from rich>=10.11.0->typer<1.0,>=0.12->gradio->dots_ocr==1.0) (2.19.2)\n",
+ "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy>=1.13.3->torch->flash-attn==2.8.0.post2->dots_ocr==1.0) (1.3.0)\n",
+ "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.11/dist-packages (from markdown-it-py>=2.2.0->rich>=10.11.0->typer<1.0,>=0.12->gradio->dots_ocr==1.0) (0.1.2)\n",
+ "Downloading transformers-4.51.3-py3-none-any.whl (10.4 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m10.4/10.4 MB\u001b[0m \u001b[31m132.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading gradio_image_annotation-0.4.0-py3-none-any.whl (91 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m91.5/91.5 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading modelscope-1.28.2-py3-none-any.whl (5.9 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m5.9/5.9 MB\u001b[0m \u001b[31m129.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading pymupdf-1.26.3-cp39-abi3-manylinux_2_28_x86_64.whl (24.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m24.1/24.1 MB\u001b[0m \u001b[31m101.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading qwen_vl_utils-0.0.11-py3-none-any.whl (7.6 kB)\n",
+ "Downloading av-15.0.0-cp311-cp311-manylinux_2_28_x86_64.whl (39.7 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m39.7/39.7 MB\u001b[0m \u001b[31m61.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hBuilding wheels for collected packages: flash-attn\n",
+ " Building wheel for flash-attn (setup.py) ... \u001b[?25l\u001b[?25hdone\n",
+ " Created wheel for flash-attn: filename=flash_attn-2.8.0.post2-cp311-cp311-linux_x86_64.whl size=255941661 sha256=8ed71ac092f80b079d2e6043b769135904d6e834916cb6da7d372b394581447b\n",
+ " Stored in directory: /root/.cache/pip/wheels/a2/75/55/57ba1e272fd7fa1a01d9ba6b5334b7adaabf79900ede22c040\n",
+ "Successfully built flash-attn\n",
+ "Installing collected packages: PyMuPDF, av, qwen_vl_utils, modelscope, transformers, flash-attn, gradio_image_annotation, dots_ocr\n",
+ " Attempting uninstall: transformers\n",
+ " Found existing installation: transformers 4.54.1\n",
+ " Uninstalling transformers-4.54.1:\n",
+ " Successfully uninstalled transformers-4.54.1\n",
+ " Running setup.py develop for dots_ocr\n",
+ "Successfully installed PyMuPDF-1.26.3 av-15.0.0 dots_ocr-1.0 flash-attn-2.8.0.post2 gradio_image_annotation-0.4.0 modelscope-1.28.2 qwen_vl_utils-0.0.11 transformers-4.51.3\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!python3 tools/download_model.py"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "z0nKSOYsRaA2",
+ "outputId": "e4d67ed5-0cb9-437a-abec-5514f7bb8ccc"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Attention: The model save dir dots.ocr should be replace by a name without `.` like DotsOCR, util we merge our code to transformers.\n",
+ "/usr/local/lib/python3.11/dist-packages/huggingface_hub/file_download.py:945: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.\n",
+ " warnings.warn(\n",
+ "/usr/local/lib/python3.11/dist-packages/huggingface_hub/file_download.py:982: UserWarning: `local_dir_use_symlinks` parameter is deprecated and will be ignored. The process to download files to a local folder has been updated and do not rely on symlinks anymore. You only need to pass a destination folder as`local_dir`.\n",
+ "For more details, check out https://huggingface.co/docs/huggingface_hub/main/en/guides/download#download-files-to-local-folder.\n",
+ " warnings.warn(\n",
+ "Fetching 19 files: 0% 0/19 [00:00, ?it/s]\n",
+ "chat_template.json: 1.11kB [00:00, 2.48MB/s]\n",
+ "\n",
+ "generation_config.json: 100% 74.0/74.0 [00:00<00:00, 727kB/s]\n",
+ "\n",
+ "configuration_dots.py: 2.93kB [00:00, 15.1MB/s]\n",
+ "\n",
+ ".gitattributes: 1.52kB [00:00, 8.96MB/s]\n",
+ "Fetching 19 files: 5% 1/19 [00:00<00:05, 3.39it/s]\n",
+ "config.json: 1.47kB [00:00, 9.48MB/s]\n",
+ "\n",
+ "README.md: 31.1kB [00:00, 76.2MB/s]\n",
+ "\n",
+ "NOTICE: 118kB [00:00, 148MB/s]\n",
+ "\n",
+ "model-00002-of-00002.safetensors: 0% 0.00/1.79G [00:00, ?B/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 0% 0.00/4.29G [00:00, ?B/s]\u001b[A\u001b[A\n",
+ "\n",
+ "\n",
+ "model.safetensors.index.json: 52.2kB [00:00, 153MB/s]\n",
+ "\n",
+ "\n",
+ "\n",
+ "modeling_dots_ocr_vllm.py: 17.5kB [00:00, 53.8MB/s]\n",
+ "\n",
+ "\n",
+ "\n",
+ "modeling_dots_ocr.py: 4.98kB [00:00, 17.7MB/s]\n",
+ "\n",
+ "\n",
+ "\n",
+ "modeling_dots_vision.py: 14.9kB [00:00, 44.7MB/s]\n",
+ "\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 0% 21.0M/4.29G [00:00<00:21, 202MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "\n",
+ "preprocessor_config.json: 100% 347/347 [00:00<00:00, 2.95MB/s]\n",
+ "\n",
+ "model-00002-of-00002.safetensors: 1% 21.0M/1.79G [00:00<00:10, 175MB/s]\u001b[A\n",
+ "\n",
+ "\n",
+ "merges.txt: 0.00B [00:00, ?B/s]\u001b[A\u001b[A\u001b[A\n",
+ "\n",
+ "merges.txt: 1.67MB [00:00, 71.2MB/s]\n",
+ "Fetching 19 files: 42% 8/19 [00:00<00:00, 12.00it/s]\n",
+ "model-00002-of-00002.safetensors: 3% 52.4M/1.79G [00:00<00:07, 227MB/s]\u001b[A\n",
+ "\n",
+ "\n",
+ "special_tokens_map.json: 100% 494/494 [00:00<00:00, 2.97MB/s]\n",
+ "\n",
+ "\n",
+ "\n",
+ "tokenizer.json: 0.00B [00:00, ?B/s]\u001b[A\u001b[A\u001b[A\n",
+ "\n",
+ "\n",
+ "\n",
+ "tokenizer_config.json: 9.31kB [00:00, 33.9MB/s]\n",
+ "\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 2% 105M/4.29G [00:00<00:11, 350MB/s] \u001b[A\u001b[A\n",
+ "\n",
+ "\n",
+ "\n",
+ "tokenizer.json: 7.04MB [00:00, 133MB/s]\n",
+ "vocab.json: 2.78MB [00:00, 102MB/s]\n",
+ "\n",
+ "model-00002-of-00002.safetensors: 5% 94.4M/1.79G [00:00<00:05, 292MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 4% 157M/4.29G [00:00<00:10, 413MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 9% 157M/1.79G [00:00<00:04, 389MB/s] \u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 5% 220M/4.29G [00:00<00:08, 461MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 12% 210M/1.79G [00:00<00:03, 432MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 6% 273M/4.29G [00:00<00:08, 479MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 15% 273M/1.79G [00:00<00:03, 466MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 8% 325M/4.29G [00:00<00:08, 489MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 18% 325M/1.79G [00:00<00:03, 477MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 9% 377M/4.29G [00:00<00:07, 496MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 21% 377M/1.79G [00:00<00:02, 484MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 10% 430M/4.29G [00:00<00:07, 484MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 24% 430M/1.79G [00:01<00:02, 473MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 11% 482M/4.29G [00:01<00:08, 469MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 27% 482M/1.79G [00:01<00:02, 464MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 12% 535M/4.29G [00:01<00:08, 444MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 30% 535M/1.79G [00:01<00:02, 469MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 14% 587M/4.29G [00:01<00:08, 439MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 33% 587M/1.79G [00:01<00:02, 462MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 15% 640M/4.29G [00:01<00:08, 437MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 36% 640M/1.79G [00:01<00:02, 459MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 16% 692M/4.29G [00:03<00:50, 71.2MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 39% 692M/1.79G [00:03<00:15, 72.2MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 17% 744M/4.29G [00:03<00:37, 95.4MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 42% 744M/1.79G [00:03<00:10, 96.8MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 19% 797M/4.29G [00:03<00:27, 125MB/s] \u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 45% 797M/1.79G [00:03<00:07, 127MB/s] \u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 20% 849M/4.29G [00:03<00:21, 159MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 48% 849M/1.79G [00:03<00:05, 161MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 21% 891M/4.29G [00:04<00:17, 189MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 51% 902M/1.79G [00:04<00:04, 201MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 22% 944M/4.29G [00:04<00:14, 229MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 53% 954M/1.79G [00:04<00:03, 242MB/s]\u001b[A\n",
+ "model-00002-of-00002.safetensors: 56% 1.01G/1.79G [00:04<00:02, 288MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 23% 996M/4.29G [00:04<00:12, 269MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 59% 1.06G/1.79G [00:04<00:02, 318MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 24% 1.05G/4.29G [00:04<00:10, 304MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 62% 1.11G/1.79G [00:04<00:01, 343MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 26% 1.10G/4.29G [00:04<00:09, 327MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 65% 1.16G/1.79G [00:04<00:01, 364MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 27% 1.15G/4.29G [00:04<00:08, 353MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 68% 1.22G/1.79G [00:04<00:01, 387MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 28% 1.21G/4.29G [00:04<00:08, 370MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 72% 1.28G/1.79G [00:04<00:01, 425MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 29% 1.26G/4.29G [00:04<00:08, 355MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 75% 1.33G/1.79G [00:05<00:01, 348MB/s]\u001b[A\n",
+ "model-00002-of-00002.safetensors: 78% 1.39G/1.79G [00:05<00:00, 400MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 30% 1.30G/4.29G [00:05<00:12, 240MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 81% 1.45G/1.79G [00:05<00:00, 415MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 31% 1.34G/4.29G [00:05<00:11, 264MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 84% 1.50G/1.79G [00:07<00:04, 68.5MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 32% 1.38G/4.29G [00:07<00:52, 55.7MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 87% 1.55G/1.79G [00:07<00:02, 91.2MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 33% 1.44G/4.29G [00:07<00:36, 78.6MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 90% 1.60G/1.79G [00:07<00:01, 120MB/s] \u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 35% 1.49G/4.29G [00:07<00:26, 106MB/s] \u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 93% 1.66G/1.79G [00:08<00:00, 154MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 36% 1.54G/4.29G [00:08<00:19, 139MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 96% 1.71G/1.79G [00:08<00:00, 194MB/s]\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 37% 1.58G/4.29G [00:08<00:16, 169MB/s]\u001b[A\u001b[A\n",
+ "model-00002-of-00002.safetensors: 99% 1.76G/1.79G [00:08<00:00, 233MB/s]\u001b[A\n",
+ "\n",
+ "model-00002-of-00002.safetensors: 100% 1.79G/1.79G [00:08<00:00, 215MB/s]\n",
+ "\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 39% 1.68G/4.29G [00:08<00:10, 247MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 40% 1.73G/4.29G [00:08<00:08, 294MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 42% 1.78G/4.29G [00:08<00:07, 331MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 43% 1.84G/4.29G [00:08<00:06, 361MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 44% 1.89G/4.29G [00:08<00:06, 385MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 45% 1.94G/4.29G [00:08<00:05, 419MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 46% 1.99G/4.29G [00:09<00:05, 445MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 48% 2.04G/4.29G [00:09<00:04, 466MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 49% 2.10G/4.29G [00:09<00:04, 478MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 50% 2.15G/4.29G [00:09<00:04, 490MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 52% 2.21G/4.29G [00:09<00:04, 508MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 53% 2.28G/4.29G [00:09<00:03, 513MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 54% 2.34G/4.29G [00:09<00:03, 518MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 56% 2.40G/4.29G [00:09<00:03, 531MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 57% 2.46G/4.29G [00:09<00:03, 554MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 59% 2.53G/4.29G [00:10<00:03, 573MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 60% 2.59G/4.29G [00:10<00:02, 585MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 62% 2.65G/4.29G [00:10<00:02, 596MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 63% 2.72G/4.29G [00:10<00:04, 365MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 65% 2.78G/4.29G [00:10<00:03, 404MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 66% 2.84G/4.29G [00:10<00:03, 439MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 68% 2.90G/4.29G [00:10<00:02, 468MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 69% 2.97G/4.29G [00:11<00:02, 489MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 71% 3.03G/4.29G [00:11<00:03, 406MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 72% 3.08G/4.29G [00:11<00:03, 350MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 73% 3.12G/4.29G [00:11<00:03, 327MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 74% 3.17G/4.29G [00:11<00:03, 306MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 75% 3.21G/4.29G [00:11<00:03, 289MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 75% 3.24G/4.29G [00:12<00:03, 281MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 76% 3.27G/4.29G [00:12<00:03, 285MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 77% 3.31G/4.29G [00:12<00:03, 293MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 78% 3.36G/4.29G [00:12<00:02, 315MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 79% 3.41G/4.29G [00:12<00:02, 366MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 81% 3.46G/4.29G [00:12<00:02, 403MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 82% 3.50G/4.29G [00:15<00:18, 42.7MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 83% 3.57G/4.29G [00:15<00:11, 65.4MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 85% 3.63G/4.29G [00:16<00:07, 94.9MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 86% 3.69G/4.29G [00:16<00:04, 132MB/s] \u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 87% 3.75G/4.29G [00:16<00:03, 177MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 89% 3.82G/4.29G [00:16<00:02, 228MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 90% 3.88G/4.29G [00:16<00:01, 284MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 92% 3.94G/4.29G [00:16<00:01, 340MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 93% 4.01G/4.29G [00:16<00:00, 394MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 95% 4.07G/4.29G [00:16<00:00, 438MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 96% 4.13G/4.29G [00:16<00:00, 474MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 98% 4.19G/4.29G [00:17<00:00, 503MB/s]\u001b[A\u001b[A\n",
+ "\n",
+ "model-00001-of-00002.safetensors: 100% 4.29G/4.29G [00:17<00:00, 250MB/s]\n",
+ "Fetching 19 files: 100% 19/19 [00:17<00:00, 1.07it/s]\n",
+ "model downloaded to /content/dots.ocr/weights/DotsOCR\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "import os\n",
+ "from pathlib import Path\n",
+ "\n",
+ "# Set up model path (using a directory in Colab's temporary storage)\n",
+ "hf_model_path = \"./weights/DotsOCR\"\n",
+ "os.environ[\"hf_model_path\"] = hf_model_path\n",
+ "\n",
+ "# Create directory if it doesn't exist\n",
+ "Path(hf_model_path).mkdir(parents=True, exist_ok=True)\n",
+ "\n",
+ "# Add to PYTHONPATH\n",
+ "os.environ[\"PYTHONPATH\"] = f\"{os.path.dirname(hf_model_path)}:{os.environ.get('PYTHONPATH', '')}\"\n",
+ "\n",
+ "# Install required packages\n",
+ "!pip install vllm transformers\n",
+ "\n",
+ "# Modify vllm import (this is a workaround - may need adjustment based on vllm version)\n",
+ "try:\n",
+ " vllm_path = !which vllm\n",
+ " if vllm_path:\n",
+ " vllm_path = vllm_path[0]\n",
+ " !sed -i '/^from vllm\\.entrypoints\\.cli\\.main import main$/a from DotsOCR import modeling_dots_ocr_vllm' {vllm_path}\n",
+ "except:\n",
+ " print(\"Could not automatically modify vllm imports. You may need to do this manually.\")"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "m1eyfkYlTGTs",
+ "outputId": "04548a02-fc8c-4891-8b95-0fad33f0f20e"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Collecting vllm\n",
+ " Downloading vllm-0.10.0-cp38-abi3-manylinux1_x86_64.whl.metadata (14 kB)\n",
+ "Requirement already satisfied: transformers in /usr/local/lib/python3.11/dist-packages (4.51.3)\n",
+ "Requirement already satisfied: regex in /usr/local/lib/python3.11/dist-packages (from vllm) (2024.11.6)\n",
+ "Requirement already satisfied: cachetools in /usr/local/lib/python3.11/dist-packages (from vllm) (5.5.2)\n",
+ "Requirement already satisfied: psutil in /usr/local/lib/python3.11/dist-packages (from vllm) (5.9.5)\n",
+ "Requirement already satisfied: sentencepiece in /usr/local/lib/python3.11/dist-packages (from vllm) (0.2.0)\n",
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.11/dist-packages (from vllm) (2.0.2)\n",
+ "Requirement already satisfied: requests>=2.26.0 in /usr/local/lib/python3.11/dist-packages (from vllm) (2.32.3)\n",
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.11/dist-packages (from vllm) (4.67.1)\n",
+ "Collecting blake3 (from vllm)\n",
+ " Downloading blake3-1.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.2 kB)\n",
+ "Requirement already satisfied: py-cpuinfo in /usr/local/lib/python3.11/dist-packages (from vllm) (9.0.0)\n",
+ "Collecting transformers\n",
+ " Downloading transformers-4.55.0-py3-none-any.whl.metadata (39 kB)\n",
+ "Requirement already satisfied: huggingface-hub>=0.33.0 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub[hf_xet]>=0.33.0->vllm) (0.34.3)\n",
+ "Requirement already satisfied: tokenizers>=0.21.1 in /usr/local/lib/python3.11/dist-packages (from vllm) (0.21.4)\n",
+ "Requirement already satisfied: protobuf in /usr/local/lib/python3.11/dist-packages (from vllm) (5.29.5)\n",
+ "Requirement already satisfied: fastapi>=0.115.0 in /usr/local/lib/python3.11/dist-packages (from fastapi[standard]>=0.115.0->vllm) (0.116.1)\n",
+ "Requirement already satisfied: aiohttp in /usr/local/lib/python3.11/dist-packages (from vllm) (3.12.15)\n",
+ "Collecting openai<=1.90.0,>=1.87.0 (from vllm)\n",
+ " Downloading openai-1.90.0-py3-none-any.whl.metadata (26 kB)\n",
+ "Requirement already satisfied: pydantic>=2.10 in /usr/local/lib/python3.11/dist-packages (from vllm) (2.11.7)\n",
+ "Requirement already satisfied: prometheus_client>=0.18.0 in /usr/local/lib/python3.11/dist-packages (from vllm) (0.22.1)\n",
+ "Requirement already satisfied: pillow in /usr/local/lib/python3.11/dist-packages (from vllm) (11.3.0)\n",
+ "Collecting prometheus-fastapi-instrumentator>=7.0.0 (from vllm)\n",
+ " Downloading prometheus_fastapi_instrumentator-7.1.0-py3-none-any.whl.metadata (13 kB)\n",
+ "Requirement already satisfied: tiktoken>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from vllm) (0.9.0)\n",
+ "Collecting lm-format-enforcer<0.11,>=0.10.11 (from vllm)\n",
+ " Downloading lm_format_enforcer-0.10.12-py3-none-any.whl.metadata (17 kB)\n",
+ "Collecting llguidance<0.8.0,>=0.7.11 (from vllm)\n",
+ " Downloading llguidance-0.7.30-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (10 kB)\n",
+ "Collecting outlines_core==0.2.10 (from vllm)\n",
+ " Downloading outlines_core-0.2.10-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (5.8 kB)\n",
+ "Collecting diskcache==5.6.3 (from vllm)\n",
+ " Downloading diskcache-5.6.3-py3-none-any.whl.metadata (20 kB)\n",
+ "Collecting lark==1.2.2 (from vllm)\n",
+ " Downloading lark-1.2.2-py3-none-any.whl.metadata (1.8 kB)\n",
+ "Collecting xgrammar==0.1.21 (from vllm)\n",
+ " Downloading xgrammar-0.1.21-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.3 kB)\n",
+ "Requirement already satisfied: typing_extensions>=4.10 in /usr/local/lib/python3.11/dist-packages (from vllm) (4.14.1)\n",
+ "Requirement already satisfied: filelock>=3.16.1 in /usr/local/lib/python3.11/dist-packages (from vllm) (3.18.0)\n",
+ "Collecting partial-json-parser (from vllm)\n",
+ " Downloading partial_json_parser-0.2.1.1.post6-py3-none-any.whl.metadata (6.1 kB)\n",
+ "Requirement already satisfied: pyzmq>=25.0.0 in /usr/local/lib/python3.11/dist-packages (from vllm) (26.2.1)\n",
+ "Collecting msgspec (from vllm)\n",
+ " Downloading msgspec-0.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.9 kB)\n",
+ "Collecting gguf>=0.13.0 (from vllm)\n",
+ " Downloading gguf-0.17.1-py3-none-any.whl.metadata (4.3 kB)\n",
+ "Collecting mistral_common>=1.8.2 (from mistral_common[audio,image]>=1.8.2->vllm)\n",
+ " Downloading mistral_common-1.8.3-py3-none-any.whl.metadata (3.8 kB)\n",
+ "Requirement already satisfied: opencv-python-headless>=4.11.0 in /usr/local/lib/python3.11/dist-packages (from vllm) (4.12.0.88)\n",
+ "Requirement already satisfied: pyyaml in /usr/local/lib/python3.11/dist-packages (from vllm) (6.0.2)\n",
+ "Requirement already satisfied: einops in /usr/local/lib/python3.11/dist-packages (from vllm) (0.8.1)\n",
+ "Collecting compressed-tensors==0.10.2 (from vllm)\n",
+ " Downloading compressed_tensors-0.10.2-py3-none-any.whl.metadata (7.0 kB)\n",
+ "Collecting depyf==0.19.0 (from vllm)\n",
+ " Downloading depyf-0.19.0-py3-none-any.whl.metadata (7.3 kB)\n",
+ "Requirement already satisfied: cloudpickle in /usr/local/lib/python3.11/dist-packages (from vllm) (3.1.1)\n",
+ "Collecting watchfiles (from vllm)\n",
+ " Downloading watchfiles-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.9 kB)\n",
+ "Collecting python-json-logger (from vllm)\n",
+ " Downloading python_json_logger-3.3.0-py3-none-any.whl.metadata (4.0 kB)\n",
+ "Requirement already satisfied: scipy in /usr/local/lib/python3.11/dist-packages (from vllm) (1.16.1)\n",
+ "Collecting ninja (from vllm)\n",
+ " Using cached ninja-1.11.1.4-py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl.metadata (5.0 kB)\n",
+ "Collecting pybase64 (from vllm)\n",
+ " Downloading pybase64-1.4.2-cp311-cp311-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl.metadata (8.7 kB)\n",
+ "Collecting cbor2 (from vllm)\n",
+ " Downloading cbor2-5.6.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (6.0 kB)\n",
+ "Collecting numba==0.61.2 (from vllm)\n",
+ " Downloading numba-0.61.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (2.8 kB)\n",
+ "Collecting ray!=2.44.*,>=2.43.0 (from ray[cgraph]!=2.44.*,>=2.43.0->vllm)\n",
+ " Downloading ray-2.48.0-cp311-cp311-manylinux2014_x86_64.whl.metadata (19 kB)\n",
+ "Collecting torch==2.7.1 (from vllm)\n",
+ " Downloading torch-2.7.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (29 kB)\n",
+ "Collecting torchaudio==2.7.1 (from vllm)\n",
+ " Downloading torchaudio-2.7.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (6.6 kB)\n",
+ "Collecting torchvision==0.22.1 (from vllm)\n",
+ " Downloading torchvision-0.22.1-cp311-cp311-manylinux_2_28_x86_64.whl.metadata (6.1 kB)\n",
+ "Collecting xformers==0.0.31 (from vllm)\n",
+ " Downloading xformers-0.0.31-cp39-abi3-manylinux_2_28_x86_64.whl.metadata (1.0 kB)\n",
+ "Collecting astor (from depyf==0.19.0->vllm)\n",
+ " Downloading astor-0.8.1-py2.py3-none-any.whl.metadata (4.2 kB)\n",
+ "Requirement already satisfied: dill in /usr/local/lib/python3.11/dist-packages (from depyf==0.19.0->vllm) (0.3.8)\n",
+ "Collecting llvmlite<0.45,>=0.44.0dev0 (from numba==0.61.2->vllm)\n",
+ " Downloading llvmlite-0.44.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.8 kB)\n",
+ "Requirement already satisfied: sympy>=1.13.3 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (1.13.3)\n",
+ "Requirement already satisfied: networkx in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (3.5)\n",
+ "Requirement already satisfied: jinja2 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (3.1.6)\n",
+ "Requirement already satisfied: fsspec in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (2025.3.0)\n",
+ "Collecting nvidia-cuda-nvrtc-cu12==12.6.77 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-cuda-runtime-cu12==12.6.77 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-cuda-cupti-cu12==12.6.80 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)\n",
+ "Collecting nvidia-cudnn-cu12==9.5.1.17 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl.metadata (1.6 kB)\n",
+ "Collecting nvidia-cublas-cu12==12.6.4.1 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-cufft-cu12==11.3.0.4 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-curand-cu12==10.3.7.77 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-cusolver-cu12==11.7.1.2 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)\n",
+ "Collecting nvidia-cusparse-cu12==12.5.4.2 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)\n",
+ "Requirement already satisfied: nvidia-cusparselt-cu12==0.6.3 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (0.6.3)\n",
+ "Requirement already satisfied: nvidia-nccl-cu12==2.26.2 in /usr/local/lib/python3.11/dist-packages (from torch==2.7.1->vllm) (2.26.2)\n",
+ "Collecting nvidia-nvtx-cu12==12.6.77 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.6 kB)\n",
+ "Collecting nvidia-nvjitlink-cu12==12.6.85 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting nvidia-cufile-cu12==1.11.1.6 (from torch==2.7.1->vllm)\n",
+ " Downloading nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl.metadata (1.5 kB)\n",
+ "Collecting triton==3.3.1 (from torch==2.7.1->vllm)\n",
+ " Downloading triton-3.3.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl.metadata (1.5 kB)\n",
+ "Requirement already satisfied: setuptools>=40.8.0 in /usr/local/lib/python3.11/dist-packages (from triton==3.3.1->torch==2.7.1->vllm) (75.2.0)\n",
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.11/dist-packages (from transformers) (25.0)\n",
+ "Requirement already satisfied: safetensors>=0.4.3 in /usr/local/lib/python3.11/dist-packages (from transformers) (0.5.3)\n",
+ "Requirement already satisfied: starlette<0.48.0,>=0.40.0 in /usr/local/lib/python3.11/dist-packages (from fastapi>=0.115.0->fastapi[standard]>=0.115.0->vllm) (0.47.2)\n",
+ "Collecting fastapi-cli>=0.0.8 (from fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading fastapi_cli-0.0.8-py3-none-any.whl.metadata (6.3 kB)\n",
+ "Requirement already satisfied: httpx>=0.23.0 in /usr/local/lib/python3.11/dist-packages (from fastapi[standard]>=0.115.0->vllm) (0.28.1)\n",
+ "Requirement already satisfied: python-multipart>=0.0.18 in /usr/local/lib/python3.11/dist-packages (from fastapi[standard]>=0.115.0->vllm) (0.0.20)\n",
+ "Collecting email-validator>=2.0.0 (from fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading email_validator-2.2.0-py3-none-any.whl.metadata (25 kB)\n",
+ "Requirement already satisfied: uvicorn>=0.12.0 in /usr/local/lib/python3.11/dist-packages (from uvicorn[standard]>=0.12.0; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (0.35.0)\n",
+ "Requirement already satisfied: hf-xet<2.0.0,>=1.1.3 in /usr/local/lib/python3.11/dist-packages (from huggingface-hub>=0.33.0->huggingface-hub[hf_xet]>=0.33.0->vllm) (1.1.5)\n",
+ "Collecting interegular>=0.3.2 (from lm-format-enforcer<0.11,>=0.10.11->vllm)\n",
+ " Downloading interegular-0.3.3-py37-none-any.whl.metadata (3.0 kB)\n",
+ "Requirement already satisfied: jsonschema>=4.21.1 in /usr/local/lib/python3.11/dist-packages (from mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (4.25.0)\n",
+ "Collecting pydantic-extra-types>=2.10.5 (from pydantic-extra-types[pycountry]>=2.10.5->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm)\n",
+ " Downloading pydantic_extra_types-2.10.5-py3-none-any.whl.metadata (3.9 kB)\n",
+ "Requirement already satisfied: anyio<5,>=3.5.0 in /usr/local/lib/python3.11/dist-packages (from openai<=1.90.0,>=1.87.0->vllm) (4.10.0)\n",
+ "Requirement already satisfied: distro<2,>=1.7.0 in /usr/local/lib/python3.11/dist-packages (from openai<=1.90.0,>=1.87.0->vllm) (1.9.0)\n",
+ "Requirement already satisfied: jiter<1,>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from openai<=1.90.0,>=1.87.0->vllm) (0.10.0)\n",
+ "Requirement already satisfied: sniffio in /usr/local/lib/python3.11/dist-packages (from openai<=1.90.0,>=1.87.0->vllm) (1.3.1)\n",
+ "Requirement already satisfied: annotated-types>=0.6.0 in /usr/local/lib/python3.11/dist-packages (from pydantic>=2.10->vllm) (0.7.0)\n",
+ "Requirement already satisfied: pydantic-core==2.33.2 in /usr/local/lib/python3.11/dist-packages (from pydantic>=2.10->vllm) (2.33.2)\n",
+ "Requirement already satisfied: typing-inspection>=0.4.0 in /usr/local/lib/python3.11/dist-packages (from pydantic>=2.10->vllm) (0.4.1)\n",
+ "Requirement already satisfied: click>=7.0 in /usr/local/lib/python3.11/dist-packages (from ray!=2.44.*,>=2.43.0->ray[cgraph]!=2.44.*,>=2.43.0->vllm) (8.2.1)\n",
+ "Requirement already satisfied: msgpack<2.0.0,>=1.0.0 in /usr/local/lib/python3.11/dist-packages (from ray!=2.44.*,>=2.43.0->ray[cgraph]!=2.44.*,>=2.43.0->vllm) (1.1.1)\n",
+ "Requirement already satisfied: cupy-cuda12x in /usr/local/lib/python3.11/dist-packages (from ray[cgraph]!=2.44.*,>=2.43.0->vllm) (13.3.0)\n",
+ "Requirement already satisfied: charset-normalizer<4,>=2 in /usr/local/lib/python3.11/dist-packages (from requests>=2.26.0->vllm) (3.4.2)\n",
+ "Requirement already satisfied: idna<4,>=2.5 in /usr/local/lib/python3.11/dist-packages (from requests>=2.26.0->vllm) (3.10)\n",
+ "Requirement already satisfied: urllib3<3,>=1.21.1 in /usr/local/lib/python3.11/dist-packages (from requests>=2.26.0->vllm) (2.5.0)\n",
+ "Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.11/dist-packages (from requests>=2.26.0->vllm) (2025.8.3)\n",
+ "Requirement already satisfied: aiohappyeyeballs>=2.5.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (2.6.1)\n",
+ "Requirement already satisfied: aiosignal>=1.4.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (1.4.0)\n",
+ "Requirement already satisfied: attrs>=17.3.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (25.3.0)\n",
+ "Requirement already satisfied: frozenlist>=1.1.1 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (1.7.0)\n",
+ "Requirement already satisfied: multidict<7.0,>=4.5 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (6.6.3)\n",
+ "Requirement already satisfied: propcache>=0.2.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (0.3.2)\n",
+ "Requirement already satisfied: yarl<2.0,>=1.17.0 in /usr/local/lib/python3.11/dist-packages (from aiohttp->vllm) (1.20.1)\n",
+ "Collecting dnspython>=2.0.0 (from email-validator>=2.0.0->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading dnspython-2.7.0-py3-none-any.whl.metadata (5.8 kB)\n",
+ "Requirement already satisfied: typer>=0.15.1 in /usr/local/lib/python3.11/dist-packages (from fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (0.16.0)\n",
+ "Collecting rich-toolkit>=0.14.8 (from fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading rich_toolkit-0.14.9-py3-none-any.whl.metadata (999 bytes)\n",
+ "Collecting fastapi-cloud-cli>=0.1.1 (from fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading fastapi_cloud_cli-0.1.5-py3-none-any.whl.metadata (3.2 kB)\n",
+ "Requirement already satisfied: httpcore==1.* in /usr/local/lib/python3.11/dist-packages (from httpx>=0.23.0->fastapi[standard]>=0.115.0->vllm) (1.0.9)\n",
+ "Requirement already satisfied: h11>=0.16 in /usr/local/lib/python3.11/dist-packages (from httpcore==1.*->httpx>=0.23.0->fastapi[standard]>=0.115.0->vllm) (0.16.0)\n",
+ "Requirement already satisfied: MarkupSafe>=2.0 in /usr/local/lib/python3.11/dist-packages (from jinja2->torch==2.7.1->vllm) (3.0.2)\n",
+ "Requirement already satisfied: jsonschema-specifications>=2023.03.6 in /usr/local/lib/python3.11/dist-packages (from jsonschema>=4.21.1->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (2025.4.1)\n",
+ "Requirement already satisfied: referencing>=0.28.4 in /usr/local/lib/python3.11/dist-packages (from jsonschema>=4.21.1->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (0.36.2)\n",
+ "Requirement already satisfied: rpds-py>=0.7.1 in /usr/local/lib/python3.11/dist-packages (from jsonschema>=4.21.1->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (0.26.0)\n",
+ "Collecting pycountry>=23 (from pydantic-extra-types[pycountry]>=2.10.5->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm)\n",
+ " Downloading pycountry-24.6.1-py3-none-any.whl.metadata (12 kB)\n",
+ "Requirement already satisfied: mpmath<1.4,>=1.1.0 in /usr/local/lib/python3.11/dist-packages (from sympy>=1.13.3->torch==2.7.1->vllm) (1.3.0)\n",
+ "Collecting httptools>=0.6.3 (from uvicorn[standard]>=0.12.0; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading httptools-0.6.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.6 kB)\n",
+ "Collecting python-dotenv>=0.13 (from uvicorn[standard]>=0.12.0; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading python_dotenv-1.1.1-py3-none-any.whl.metadata (24 kB)\n",
+ "Collecting uvloop>=0.15.1 (from uvicorn[standard]>=0.12.0; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading uvloop-0.21.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (4.9 kB)\n",
+ "Requirement already satisfied: websockets>=10.4 in /usr/local/lib/python3.11/dist-packages (from uvicorn[standard]>=0.12.0; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (15.0.1)\n",
+ "Requirement already satisfied: fastrlock>=0.5 in /usr/local/lib/python3.11/dist-packages (from cupy-cuda12x->ray[cgraph]!=2.44.*,>=2.43.0->vllm) (0.8.3)\n",
+ "Requirement already satisfied: soundfile>=0.12.1 in /usr/local/lib/python3.11/dist-packages (from mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (0.13.1)\n",
+ "Requirement already satisfied: soxr>=0.5.0 in /usr/local/lib/python3.11/dist-packages (from mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (0.5.0.post1)\n",
+ "Collecting rignore>=0.5.1 (from fastapi-cloud-cli>=0.1.1->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm)\n",
+ " Downloading rignore-0.6.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (3.8 kB)\n",
+ "Requirement already satisfied: sentry-sdk>=2.20.0 in /usr/local/lib/python3.11/dist-packages (from fastapi-cloud-cli>=0.1.1->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (2.34.1)\n",
+ "Requirement already satisfied: rich>=13.7.1 in /usr/local/lib/python3.11/dist-packages (from rich-toolkit>=0.14.8->fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (13.9.4)\n",
+ "Requirement already satisfied: cffi>=1.0 in /usr/local/lib/python3.11/dist-packages (from soundfile>=0.12.1->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (1.17.1)\n",
+ "Requirement already satisfied: shellingham>=1.3.0 in /usr/local/lib/python3.11/dist-packages (from typer>=0.15.1->fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (1.5.4)\n",
+ "Requirement already satisfied: pycparser in /usr/local/lib/python3.11/dist-packages (from cffi>=1.0->soundfile>=0.12.1->mistral_common>=1.8.2->mistral_common[audio,image]>=1.8.2->vllm) (2.22)\n",
+ "Requirement already satisfied: markdown-it-py>=2.2.0 in /usr/local/lib/python3.11/dist-packages (from rich>=13.7.1->rich-toolkit>=0.14.8->fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (3.0.0)\n",
+ "Requirement already satisfied: pygments<3.0.0,>=2.13.0 in /usr/local/lib/python3.11/dist-packages (from rich>=13.7.1->rich-toolkit>=0.14.8->fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (2.19.2)\n",
+ "Requirement already satisfied: mdurl~=0.1 in /usr/local/lib/python3.11/dist-packages (from markdown-it-py>=2.2.0->rich>=13.7.1->rich-toolkit>=0.14.8->fastapi-cli>=0.0.8->fastapi-cli[standard]>=0.0.8; extra == \"standard\"->fastapi[standard]>=0.115.0->vllm) (0.1.2)\n",
+ "Downloading vllm-0.10.0-cp38-abi3-manylinux1_x86_64.whl (386.6 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m386.6/386.6 MB\u001b[0m \u001b[31m2.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading compressed_tensors-0.10.2-py3-none-any.whl (169 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m169.0/169.0 kB\u001b[0m \u001b[31m14.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading depyf-0.19.0-py3-none-any.whl (39 kB)\n",
+ "Downloading diskcache-5.6.3-py3-none-any.whl (45 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m45.5/45.5 kB\u001b[0m \u001b[31m5.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading lark-1.2.2-py3-none-any.whl (111 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m111.0/111.0 kB\u001b[0m \u001b[31m11.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading numba-0.61.2-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (3.8 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.8/3.8 MB\u001b[0m \u001b[31m105.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading outlines_core-0.2.10-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (2.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m2.3/2.3 MB\u001b[0m \u001b[31m89.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading torch-2.7.1-cp311-cp311-manylinux_2_28_x86_64.whl (821.2 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m821.2/821.2 MB\u001b[0m \u001b[31m2.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading torchaudio-2.7.1-cp311-cp311-manylinux_2_28_x86_64.whl (3.5 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m3.5/3.5 MB\u001b[0m \u001b[31m111.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading torchvision-0.22.1-cp311-cp311-manylinux_2_28_x86_64.whl (7.5 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m7.5/7.5 MB\u001b[0m \u001b[31m133.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading xformers-0.0.31-cp39-abi3-manylinux_2_28_x86_64.whl (117.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m117.1/117.1 MB\u001b[0m \u001b[31m20.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading xgrammar-0.1.21-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.8 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m11.8/11.8 MB\u001b[0m \u001b[31m128.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cublas_cu12-12.6.4.1-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (393.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m393.1/393.1 MB\u001b[0m \u001b[31m3.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cuda_cupti_cu12-12.6.80-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (8.9 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m8.9/8.9 MB\u001b[0m \u001b[31m134.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cuda_nvrtc_cu12-12.6.77-py3-none-manylinux2014_x86_64.whl (23.7 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m23.7/23.7 MB\u001b[0m \u001b[31m99.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cuda_runtime_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (897 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m897.7/897.7 kB\u001b[0m \u001b[31m52.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cudnn_cu12-9.5.1.17-py3-none-manylinux_2_28_x86_64.whl (571.0 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m571.0/571.0 MB\u001b[0m \u001b[31m2.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cufft_cu12-11.3.0.4-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (200.2 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m200.2/200.2 MB\u001b[0m \u001b[31m6.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cufile_cu12-1.11.1.6-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (1.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m1.1/1.1 MB\u001b[0m \u001b[31m65.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_curand_cu12-10.3.7.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (56.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m56.3/56.3 MB\u001b[0m \u001b[31m43.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cusolver_cu12-11.7.1.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (158.2 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m158.2/158.2 MB\u001b[0m \u001b[31m5.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_cusparse_cu12-12.5.4.2-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (216.6 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m216.6/216.6 MB\u001b[0m \u001b[31m4.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_nvjitlink_cu12-12.6.85-py3-none-manylinux2010_x86_64.manylinux_2_12_x86_64.whl (19.7 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m19.7/19.7 MB\u001b[0m \u001b[31m109.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading nvidia_nvtx_cu12-12.6.77-py3-none-manylinux2014_x86_64.manylinux_2_17_x86_64.whl (89 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m89.3/89.3 kB\u001b[0m \u001b[31m9.5 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading triton-3.3.1-cp311-cp311-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl (155.7 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m155.7/155.7 MB\u001b[0m \u001b[31m6.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading transformers-4.55.0-py3-none-any.whl (11.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m11.3/11.3 MB\u001b[0m \u001b[31m141.6 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading gguf-0.17.1-py3-none-any.whl (96 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m96.2/96.2 kB\u001b[0m \u001b[31m9.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading llguidance-0.7.30-cp39-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (15.0 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m15.0/15.0 MB\u001b[0m \u001b[31m129.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading lm_format_enforcer-0.10.12-py3-none-any.whl (44 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m44.3/44.3 kB\u001b[0m \u001b[31m4.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading mistral_common-1.8.3-py3-none-any.whl (6.5 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.5/6.5 MB\u001b[0m \u001b[31m133.4 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading openai-1.90.0-py3-none-any.whl (734 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m734.6/734.6 kB\u001b[0m \u001b[31m53.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading prometheus_fastapi_instrumentator-7.1.0-py3-none-any.whl (19 kB)\n",
+ "Downloading ray-2.48.0-cp311-cp311-manylinux2014_x86_64.whl (70.1 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m70.1/70.1 MB\u001b[0m \u001b[31m37.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading blake3-1.0.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (385 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m385.5/385.5 kB\u001b[0m \u001b[31m35.3 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading cbor2-5.6.5-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (249 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m249.2/249.2 kB\u001b[0m \u001b[31m25.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading msgspec-0.19.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (210 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m210.7/210.7 kB\u001b[0m \u001b[31m22.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hUsing cached ninja-1.11.1.4-py3-none-manylinux_2_12_x86_64.manylinux2010_x86_64.whl (422 kB)\n",
+ "Downloading partial_json_parser-0.2.1.1.post6-py3-none-any.whl (10 kB)\n",
+ "Downloading pybase64-1.4.2-cp311-cp311-manylinux1_x86_64.manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_5_x86_64.whl (71 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m71.4/71.4 kB\u001b[0m \u001b[31m7.9 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading python_json_logger-3.3.0-py3-none-any.whl (15 kB)\n",
+ "Downloading watchfiles-1.1.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (453 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m453.1/453.1 kB\u001b[0m \u001b[31m41.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading email_validator-2.2.0-py3-none-any.whl (33 kB)\n",
+ "Downloading fastapi_cli-0.0.8-py3-none-any.whl (10 kB)\n",
+ "Downloading interegular-0.3.3-py37-none-any.whl (23 kB)\n",
+ "Downloading llvmlite-0.44.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (42.4 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m42.4/42.4 MB\u001b[0m \u001b[31m61.1 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading pydantic_extra_types-2.10.5-py3-none-any.whl (38 kB)\n",
+ "Downloading astor-0.8.1-py2.py3-none-any.whl (27 kB)\n",
+ "Downloading dnspython-2.7.0-py3-none-any.whl (313 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m313.6/313.6 kB\u001b[0m \u001b[31m29.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading fastapi_cloud_cli-0.1.5-py3-none-any.whl (18 kB)\n",
+ "Downloading httptools-0.6.4-cp311-cp311-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl (459 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m459.8/459.8 kB\u001b[0m \u001b[31m36.0 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading pycountry-24.6.1-py3-none-any.whl (6.3 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m6.3/6.3 MB\u001b[0m \u001b[31m127.7 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading python_dotenv-1.1.1-py3-none-any.whl (20 kB)\n",
+ "Downloading rich_toolkit-0.14.9-py3-none-any.whl (25 kB)\n",
+ "Downloading uvloop-0.21.0-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (4.0 MB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m4.0/4.0 MB\u001b[0m \u001b[31m114.2 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hDownloading rignore-0.6.4-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (950 kB)\n",
+ "\u001b[2K \u001b[90m━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━\u001b[0m \u001b[32m950.6/950.6 kB\u001b[0m \u001b[31m60.8 MB/s\u001b[0m eta \u001b[36m0:00:00\u001b[0m\n",
+ "\u001b[?25hInstalling collected packages: blake3, uvloop, triton, rignore, python-json-logger, python-dotenv, pycountry, pybase64, partial-json-parser, outlines_core, nvidia-nvtx-cu12, nvidia-nvjitlink-cu12, nvidia-curand-cu12, nvidia-cufile-cu12, nvidia-cuda-runtime-cu12, nvidia-cuda-nvrtc-cu12, nvidia-cuda-cupti-cu12, nvidia-cublas-cu12, ninja, msgspec, llvmlite, llguidance, lark, interegular, httptools, gguf, dnspython, diskcache, cbor2, astor, watchfiles, nvidia-cusparse-cu12, nvidia-cufft-cu12, nvidia-cudnn-cu12, numba, email-validator, depyf, rich-toolkit, pydantic-extra-types, prometheus-fastapi-instrumentator, openai, nvidia-cusolver-cu12, lm-format-enforcer, transformers, torch, ray, fastapi-cloud-cli, fastapi-cli, xgrammar, xformers, torchvision, torchaudio, mistral_common, compressed-tensors, vllm\n",
+ " Attempting uninstall: triton\n",
+ " Found existing installation: triton 3.3.0\n",
+ " Uninstalling triton-3.3.0:\n",
+ " Successfully uninstalled triton-3.3.0\n",
+ " Attempting uninstall: nvidia-nvtx-cu12\n",
+ " Found existing installation: nvidia-nvtx-cu12 12.8.55\n",
+ " Uninstalling nvidia-nvtx-cu12-12.8.55:\n",
+ " Successfully uninstalled nvidia-nvtx-cu12-12.8.55\n",
+ " Attempting uninstall: nvidia-nvjitlink-cu12\n",
+ " Found existing installation: nvidia-nvjitlink-cu12 12.8.61\n",
+ " Uninstalling nvidia-nvjitlink-cu12-12.8.61:\n",
+ " Successfully uninstalled nvidia-nvjitlink-cu12-12.8.61\n",
+ " Attempting uninstall: nvidia-curand-cu12\n",
+ " Found existing installation: nvidia-curand-cu12 10.3.9.55\n",
+ " Uninstalling nvidia-curand-cu12-10.3.9.55:\n",
+ " Successfully uninstalled nvidia-curand-cu12-10.3.9.55\n",
+ " Attempting uninstall: nvidia-cufile-cu12\n",
+ " Found existing installation: nvidia-cufile-cu12 1.13.0.11\n",
+ " Uninstalling nvidia-cufile-cu12-1.13.0.11:\n",
+ " Successfully uninstalled nvidia-cufile-cu12-1.13.0.11\n",
+ " Attempting uninstall: nvidia-cuda-runtime-cu12\n",
+ " Found existing installation: nvidia-cuda-runtime-cu12 12.8.57\n",
+ " Uninstalling nvidia-cuda-runtime-cu12-12.8.57:\n",
+ " Successfully uninstalled nvidia-cuda-runtime-cu12-12.8.57\n",
+ " Attempting uninstall: nvidia-cuda-nvrtc-cu12\n",
+ " Found existing installation: nvidia-cuda-nvrtc-cu12 12.8.61\n",
+ " Uninstalling nvidia-cuda-nvrtc-cu12-12.8.61:\n",
+ " Successfully uninstalled nvidia-cuda-nvrtc-cu12-12.8.61\n",
+ " Attempting uninstall: nvidia-cuda-cupti-cu12\n",
+ " Found existing installation: nvidia-cuda-cupti-cu12 12.8.57\n",
+ " Uninstalling nvidia-cuda-cupti-cu12-12.8.57:\n",
+ " Successfully uninstalled nvidia-cuda-cupti-cu12-12.8.57\n",
+ " Attempting uninstall: nvidia-cublas-cu12\n",
+ " Found existing installation: nvidia-cublas-cu12 12.8.3.14\n",
+ " Uninstalling nvidia-cublas-cu12-12.8.3.14:\n",
+ " Successfully uninstalled nvidia-cublas-cu12-12.8.3.14\n",
+ " Attempting uninstall: llvmlite\n",
+ " Found existing installation: llvmlite 0.43.0\n",
+ " Uninstalling llvmlite-0.43.0:\n",
+ " Successfully uninstalled llvmlite-0.43.0\n",
+ " Attempting uninstall: nvidia-cusparse-cu12\n",
+ " Found existing installation: nvidia-cusparse-cu12 12.5.7.53\n",
+ " Uninstalling nvidia-cusparse-cu12-12.5.7.53:\n",
+ " Successfully uninstalled nvidia-cusparse-cu12-12.5.7.53\n",
+ " Attempting uninstall: nvidia-cufft-cu12\n",
+ " Found existing installation: nvidia-cufft-cu12 11.3.3.41\n",
+ " Uninstalling nvidia-cufft-cu12-11.3.3.41:\n",
+ " Successfully uninstalled nvidia-cufft-cu12-11.3.3.41\n",
+ " Attempting uninstall: nvidia-cudnn-cu12\n",
+ " Found existing installation: nvidia-cudnn-cu12 9.7.1.26\n",
+ " Uninstalling nvidia-cudnn-cu12-9.7.1.26:\n",
+ " Successfully uninstalled nvidia-cudnn-cu12-9.7.1.26\n",
+ " Attempting uninstall: numba\n",
+ " Found existing installation: numba 0.60.0\n",
+ " Uninstalling numba-0.60.0:\n",
+ " Successfully uninstalled numba-0.60.0\n",
+ " Attempting uninstall: openai\n",
+ " Found existing installation: openai 1.98.0\n",
+ " Uninstalling openai-1.98.0:\n",
+ " Successfully uninstalled openai-1.98.0\n",
+ " Attempting uninstall: nvidia-cusolver-cu12\n",
+ " Found existing installation: nvidia-cusolver-cu12 11.7.2.55\n",
+ " Uninstalling nvidia-cusolver-cu12-11.7.2.55:\n",
+ " Successfully uninstalled nvidia-cusolver-cu12-11.7.2.55\n",
+ " Attempting uninstall: transformers\n",
+ " Found existing installation: transformers 4.51.3\n",
+ " Uninstalling transformers-4.51.3:\n",
+ " Successfully uninstalled transformers-4.51.3\n",
+ " Attempting uninstall: torch\n",
+ " Found existing installation: torch 2.7.0+cu128\n",
+ " Uninstalling torch-2.7.0+cu128:\n",
+ " Successfully uninstalled torch-2.7.0+cu128\n",
+ " Attempting uninstall: torchvision\n",
+ " Found existing installation: torchvision 0.22.0+cu128\n",
+ " Uninstalling torchvision-0.22.0+cu128:\n",
+ " Successfully uninstalled torchvision-0.22.0+cu128\n",
+ " Attempting uninstall: torchaudio\n",
+ " Found existing installation: torchaudio 2.7.0+cu128\n",
+ " Uninstalling torchaudio-2.7.0+cu128:\n",
+ " Successfully uninstalled torchaudio-2.7.0+cu128\n",
+ "\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n",
+ "fastai 2.7.19 requires torch<2.7,>=1.10, but you have torch 2.7.1 which is incompatible.\n",
+ "dots-ocr 1.0 requires transformers==4.51.3, but you have transformers 4.55.0 which is incompatible.\u001b[0m\u001b[31m\n",
+ "\u001b[0mSuccessfully installed astor-0.8.1 blake3-1.0.5 cbor2-5.6.5 compressed-tensors-0.10.2 depyf-0.19.0 diskcache-5.6.3 dnspython-2.7.0 email-validator-2.2.0 fastapi-cli-0.0.8 fastapi-cloud-cli-0.1.5 gguf-0.17.1 httptools-0.6.4 interegular-0.3.3 lark-1.2.2 llguidance-0.7.30 llvmlite-0.44.0 lm-format-enforcer-0.10.12 mistral_common-1.8.3 msgspec-0.19.0 ninja-1.11.1.4 numba-0.61.2 nvidia-cublas-cu12-12.6.4.1 nvidia-cuda-cupti-cu12-12.6.80 nvidia-cuda-nvrtc-cu12-12.6.77 nvidia-cuda-runtime-cu12-12.6.77 nvidia-cudnn-cu12-9.5.1.17 nvidia-cufft-cu12-11.3.0.4 nvidia-cufile-cu12-1.11.1.6 nvidia-curand-cu12-10.3.7.77 nvidia-cusolver-cu12-11.7.1.2 nvidia-cusparse-cu12-12.5.4.2 nvidia-nvjitlink-cu12-12.6.85 nvidia-nvtx-cu12-12.6.77 openai-1.90.0 outlines_core-0.2.10 partial-json-parser-0.2.1.1.post6 prometheus-fastapi-instrumentator-7.1.0 pybase64-1.4.2 pycountry-24.6.1 pydantic-extra-types-2.10.5 python-dotenv-1.1.1 python-json-logger-3.3.0 ray-2.48.0 rich-toolkit-0.14.9 rignore-0.6.4 torch-2.7.1 torchaudio-2.7.1 torchvision-0.22.1 transformers-4.55.0 triton-3.3.1 uvloop-0.21.0 vllm-0.10.0 watchfiles-1.1.0 xformers-0.0.31 xgrammar-0.1.21\n",
+ "nohup: failed to run command 'CUDA_VISIBLE_DEVICES=0': No such file or directory\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "from pyngrok import ngrok\n",
+ "public_url = ngrok.connect(8000, bind_tls=True) # Adjust port if needed\n",
+ "print(\"Public URL:\", public_url)"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "iNPRVOjmUxJb",
+ "outputId": "66388365-796e-4489-9285-17ad6ccad0ed"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "Public URL: NgrokTunnel: \"https://988ecbb0776c.ngrok-free.app\" -> \"http://localhost:8000\"\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [
+ "!CUDA_VISIBLE_DEVICES=0 vllm serve ./weights/DotsOCR --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name model --trust-remote-code"
+ ],
+ "metadata": {
+ "colab": {
+ "base_uri": "https://localhost:8080/"
+ },
+ "id": "QbYEd_foT2QY",
+ "outputId": "6c980927-042e-498a-e013-a575d4cf5132"
+ },
+ "execution_count": null,
+ "outputs": [
+ {
+ "output_type": "stream",
+ "name": "stdout",
+ "text": [
+ "2025-08-07 20:57:52.107021: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.\n",
+ "2025-08-07 20:57:52.125111: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
+ "WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
+ "E0000 00:00:1754600272.146783 10516 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
+ "E0000 00:00:1754600272.153513 10516 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
+ "W0000 00:00:1754600272.170115 10516 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600272.170145 10516 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600272.170148 10516 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600272.170151 10516 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "2025-08-07 20:57:52.174913: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\n",
+ "To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\n",
+ "INFO 08-07 20:57:57 [__init__.py:235] Automatically detected platform cuda.\n",
+ "INFO 08-07 20:58:01 [api_server.py:1755] vLLM API server version 0.10.0\n",
+ "INFO 08-07 20:58:01 [cli_args.py:261] non-default args: {'model_tag': './weights/DotsOCR', 'chat_template_content_format': 'string', 'model': './weights/DotsOCR', 'trust_remote_code': True, 'served_model_name': ['model'], 'gpu_memory_utilization': 0.95}\n",
+ "INFO 08-07 20:58:01 [config.py:1604] Using max model len 131072\n",
+ "INFO 08-07 20:58:01 [config.py:2434] Chunked prefill is enabled with max_num_batched_tokens=2048.\n",
+ "2025-08-07 20:58:05.950037: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\n",
+ "WARNING: All log messages before absl::InitializeLog() is called are written to STDERR\n",
+ "E0000 00:00:1754600285.970806 10621 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\n",
+ "E0000 00:00:1754600285.977110 10621 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\n",
+ "W0000 00:00:1754600285.992571 10621 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600285.992601 10621 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600285.992604 10621 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "W0000 00:00:1754600285.992606 10621 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.\n",
+ "INFO 08-07 20:58:11 [__init__.py:235] Automatically detected platform cuda.\n",
+ "INFO 08-07 20:58:14 [core.py:572] Waiting for init message from front-end.\n",
+ "INFO 08-07 20:58:14 [core.py:71] Initializing a V1 LLM engine (v0.10.0) with config: model='./weights/DotsOCR', speculative_config=None, tokenizer='./weights/DotsOCR', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config={}, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=131072, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(backend='auto', disable_fallback=False, disable_any_whitespace=False, disable_additional_properties=False, reasoning_backend=''), observability_config=ObservabilityConfig(show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None), seed=0, served_model_name=model, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, pooler_config=None, compilation_config={\"level\":3,\"debug_dump_path\":\"\",\"cache_dir\":\"\",\"backend\":\"\",\"custom_ops\":[],\"splitting_ops\":[\"vllm.unified_attention\",\"vllm.unified_attention_with_output\",\"vllm.mamba_mixer2\"],\"use_inductor\":true,\"compile_sizes\":[],\"inductor_compile_config\":{\"enable_auto_functionalized_v2\":false},\"inductor_passes\":{},\"use_cudagraph\":true,\"cudagraph_num_of_warmups\":1,\"cudagraph_capture_sizes\":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],\"cudagraph_copy_inputs\":false,\"full_cuda_graph\":false,\"max_capture_size\":512,\"local_cache_dir\":null}\n",
+ "INFO 08-07 20:58:15 [parallel_state.py:1102] rank 0 in world size 1 is assigned as DP rank 0, PP rank 0, TP rank 0, EP rank 0\n",
+ "WARNING 08-07 20:58:15 [topk_topp_sampler.py:59] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer.\n",
+ "INFO 08-07 20:58:15 [gpu_model_runner.py:1843] Starting to load model ./weights/DotsOCR...\n",
+ "INFO 08-07 20:58:15 [gpu_model_runner.py:1875] Loading model from scratch...\n",
+ "INFO 08-07 20:58:16 [cuda.py:290] Using Flash Attention backend on V1 engine.\n",
+ "Loading safetensors checkpoint shards: 100% 2/2 [00:01<00:00, 1.06it/s]\n",
+ "INFO 08-07 20:58:18 [default_loader.py:262] Loading weights took 1.99 seconds\n",
+ "INFO 08-07 20:58:19 [gpu_model_runner.py:1892] Model loading took 5.7174 GiB and 2.253556 seconds\n",
+ "INFO 08-07 20:58:19 [gpu_model_runner.py:2380] Encoder cache will be initialized with a budget of 14400 tokens, and profiled with 1 image items of the maximum feature size.\n",
+ "The image processor of type `Qwen2VLImageProcessor` is now loaded as a fast processor by default, even if the model checkpoint was saved with a slow processor. This is a breaking change and may produce slightly different outputs. To continue using the slow processor, instantiate this class with `use_fast=False`. Note that this behavior will be extended to all models in a future release.\n",
+ "You have video processor config saved in `preprocessor.json` file which is deprecated. Video processor configs should be saved in their own `video_preprocessor.json` file. You can rename the file or load and save the processor back which renames it automatically. Loading from `preprocessor.json` will be removed in v5.0.\n",
+ "INFO 08-07 20:58:49 [backends.py:530] Using cache directory: /root/.cache/vllm/torch_compile_cache/f40f68567f/rank_0_0/backbone for vLLM's torch.compile\n",
+ "INFO 08-07 20:58:49 [backends.py:541] Dynamo bytecode transform time: 8.76 s\n",
+ "INFO 08-07 20:58:56 [backends.py:161] Directly load the compiled graph(s) for dynamic shape from the cache, took 6.316 s\n",
+ "INFO 08-07 20:58:56 [monitor.py:34] torch.compile takes 8.76 s in total\n",
+ "INFO 08-07 20:58:58 [gpu_worker.py:255] Available KV cache memory: 12.20 GiB\n",
+ "INFO 08-07 20:58:58 [kv_cache_utils.py:833] GPU KV cache size: 456,816 tokens\n",
+ "INFO 08-07 20:58:58 [kv_cache_utils.py:837] Maximum concurrency for 131,072 tokens per request: 3.49x\n",
+ "Capturing CUDA graph shapes: 100% 67/67 [00:02<00:00, 24.17it/s]\n",
+ "INFO 08-07 20:59:01 [gpu_model_runner.py:2485] Graph capturing finished in 3 secs, took 0.44 GiB\n",
+ "INFO 08-07 20:59:01 [core.py:193] init engine (profile, create kv cache, warmup model) took 42.75 seconds\n",
+ "INFO 08-07 20:59:02 [loggers.py:141] Engine 000: vllm cache_config_info with initialization after num_gpu_blocks is: 28551\n",
+ "INFO 08-07 20:59:02 [api_server.py:1818] Starting vLLM API server 0 on http://0.0.0.0:8000\n",
+ "INFO 08-07 20:59:02 [launcher.py:29] Available routes are:\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /openapi.json, Methods: HEAD, GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /docs, Methods: HEAD, GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /docs/oauth2-redirect, Methods: HEAD, GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /redoc, Methods: HEAD, GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /health, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /load, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /ping, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /ping, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /tokenize, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /detokenize, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/models, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /version, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/responses, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/responses/{response_id}, Methods: GET\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/responses/{response_id}/cancel, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/chat/completions, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/completions, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/embeddings, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /pooling, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /classify, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /score, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/score, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/audio/transcriptions, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/audio/translations, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /rerank, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v1/rerank, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /v2/rerank, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /scale_elastic_ep, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /is_scaling_elastic_ep, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /invocations, Methods: POST\n",
+ "INFO 08-07 20:59:02 [launcher.py:37] Route: /metrics, Methods: GET\n",
+ "\u001b[32mINFO\u001b[0m: Started server process [\u001b[36m10516\u001b[0m]\n",
+ "\u001b[32mINFO\u001b[0m: Waiting for application startup.\n",
+ "\u001b[32mINFO\u001b[0m: Application startup complete.\n",
+ "\u001b[32mINFO\u001b[0m: 2001:818:c61b:b000:457e:d747:75e4:7263:0 - \"\u001b[1mGET / HTTP/1.1\u001b[0m\" \u001b[31m404 Not Found\u001b[0m\n",
+ "\u001b[32mINFO\u001b[0m: 2001:818:c61b:b000:457e:d747:75e4:7263:0 - \"\u001b[1mGET /favicon.ico HTTP/1.1\u001b[0m\" \u001b[31m404 Not Found\u001b[0m\n",
+ "INFO 08-07 21:00:28 [launcher.py:80] Shutting down FastAPI HTTP server.\n",
+ "[rank0]:[W807 21:00:29.608158947 ProcessGroupNCCL.cpp:1479] Warning: WARNING: destroy_process_group() was not called before program exit, which can leak resources. For more info, please see https://pytorch.org/docs/stable/distributed.html#shutdown (function operator())\n",
+ "\u001b[32mINFO\u001b[0m: Shutting down\n",
+ "\u001b[31mERROR\u001b[0m: Traceback (most recent call last):\n",
+ " File \"/usr/local/lib/python3.11/dist-packages/starlette/routing.py\", line 701, in lifespan\n",
+ " await receive()\n",
+ " File \"/usr/local/lib/python3.11/dist-packages/uvicorn/lifespan/on.py\", line 137, in receive\n",
+ " return await self.receive_queue.get()\n",
+ " ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n",
+ " File \"/usr/lib/python3.11/asyncio/queues.py\", line 158, in get\n",
+ " await getter\n",
+ "asyncio.exceptions.CancelledError\n",
+ "\n"
+ ]
+ }
+ ]
+ },
+ {
+ "cell_type": "code",
+ "source": [],
+ "metadata": {
+ "id": "mbOe-1sxU1-r"
+ },
+ "execution_count": null,
+ "outputs": []
+ }
+ ]
+}
\ No newline at end of file
diff --git a/demo/demo_gradio.py b/demo/demo_gradio.py
new file mode 100644
index 0000000000000000000000000000000000000000..3fe4b737e4a99b50322bca86ae1d2f26252b532b
--- /dev/null
+++ b/demo/demo_gradio.py
@@ -0,0 +1,726 @@
+"""
+Layout Inference Web Application with Gradio
+
+A Gradio-based layout inference tool that supports image and PDF uploads and multiple backend inference engines.
+The interface follows a reference-style design while preserving the original inference logic.
+"""
+
+import gradio as gr
+import json
+import os
+import io
+import tempfile
+import base64
+import zipfile
+import uuid
+import re
+from pathlib import Path
+from PIL import Image
+import requests
+import shutil  # For cleaning up temporary session directories
+
+# Local tool imports
+from dots_ocr.utils import dict_promptmode_to_prompt
+from dots_ocr.utils.consts import MIN_PIXELS, MAX_PIXELS
+from dots_ocr.utils.demo_utils.display import read_image
+from dots_ocr.utils.doc_utils import load_images_from_pdf
+
+# Add DotsOCRParser import
+from dots_ocr.parser import DotsOCRParser
+
+
+# ==================== Configuration ====================
+DEFAULT_CONFIG = {
+ 'ip': "127.0.0.1",
+ 'port_vllm': 8000,
+ 'min_pixels': MIN_PIXELS,
+ 'max_pixels': MAX_PIXELS,
+ 'test_images_dir': "./assets/showcase_origin",
+}
+
+# ==================== Global Variables ====================
+# Store current configuration
+current_config = DEFAULT_CONFIG.copy()
+
+# Create DotsOCRParser instance
+dots_parser = DotsOCRParser(
+ ip=DEFAULT_CONFIG['ip'],
+ port=DEFAULT_CONFIG['port_vllm'],
+ dpi=200,
+ min_pixels=DEFAULT_CONFIG['min_pixels'],
+ max_pixels=DEFAULT_CONFIG['max_pixels']
+)
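+
+# Optional sanity check (a sketch only, not wired into the UI): the parser expects
+# an OpenAI-compatible vLLM server at ip:port, so probing its /v1/models route
+# confirms the backend is reachable before parsing, e.g.:
+#
+#   resp = requests.get(f"http://{DEFAULT_CONFIG['ip']}:{DEFAULT_CONFIG['port_vllm']}/v1/models")
+#   resp.raise_for_status()  # the model should appear under its served name, "model"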
+
+def get_initial_session_state():
+ return {
+ 'processing_results': {
+ 'original_image': None,
+ 'processed_image': None,
+ 'layout_result': None,
+ 'markdown_content': None,
+ 'cells_data': None,
+ 'temp_dir': None,
+ 'session_id': None,
+ 'result_paths': None,
+ 'pdf_results': None
+ },
+ 'pdf_cache': {
+ "images": [],
+ "current_page": 0,
+ "total_pages": 0,
+ "file_type": None,
+ "is_parsed": False,
+ "results": []
+ }
+ }
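+
+# Note: this session dict is meant to live in a per-session gr.State, so each
+# browser session keeps its own processing results and PDF page cache instead
+# of sharing module-level globals.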
+
+def read_image_v2(img):
+ """Reads an image, supports URLs and local paths"""
+ if isinstance(img, str) and img.startswith(("http://", "https://")):
+ with requests.get(img, stream=True) as response:
+ response.raise_for_status()
+ img = Image.open(io.BytesIO(response.content))
+ elif isinstance(img, str):
+ img, _, _ = read_image(img, use_native=True)
+ elif isinstance(img, Image.Image):
+ pass
+ else:
+ raise ValueError(f"Invalid image type: {type(img)}")
+ return img
+
+def load_file_for_preview(file_path, session_state):
+ """Loads a file for preview, supports PDF and image files"""
+ pdf_cache = session_state['pdf_cache']
+
+ if not file_path or not os.path.exists(file_path):
+ return None, "0 / 0
", session_state
+
+ file_ext = os.path.splitext(file_path)[1].lower()
+
+ try:
+ if file_ext == '.pdf':
+ pages = load_images_from_pdf(file_path)
+ pdf_cache["file_type"] = "pdf"
+ elif file_ext in ['.jpg', '.jpeg', '.png']:
+ image = Image.open(file_path)
+ pages = [image]
+ pdf_cache["file_type"] = "image"
+ else:
+ return None, "Unsupported file format
", session_state
+ except Exception as e:
+ return None, f"PDF loading failed: {str(e)}
", session_state
+
+ pdf_cache["images"] = pages
+ pdf_cache["current_page"] = 0
+ pdf_cache["total_pages"] = len(pages)
+ pdf_cache["is_parsed"] = False
+ pdf_cache["results"] = []
+
+ return pages[0], f"1 / {len(pages)}
", session_state
+
+def turn_page(direction, session_state):
+ """Page turning function"""
+ pdf_cache = session_state['pdf_cache']
+
+ if not pdf_cache["images"]:
+ return None, "0 / 0
", "", session_state
+
+ if direction == "prev":
+ pdf_cache["current_page"] = max(0, pdf_cache["current_page"] - 1)
+ elif direction == "next":
+ pdf_cache["current_page"] = min(pdf_cache["total_pages"] - 1, pdf_cache["current_page"] + 1)
+
+ index = pdf_cache["current_page"]
+ current_image = pdf_cache["images"][index] # Use the original image by default
+ page_info = f"{index + 1} / {pdf_cache['total_pages']}
"
+
+ current_json = ""
+ if pdf_cache["is_parsed"] and index < len(pdf_cache["results"]):
+ result = pdf_cache["results"][index]
+ if 'cells_data' in result and result['cells_data']:
+ try:
+ current_json = json.dumps(result['cells_data'], ensure_ascii=False, indent=2)
+            except Exception:
+ current_json = str(result.get('cells_data', ''))
+ if 'layout_image' in result and result['layout_image']:
+ current_image = result['layout_image']
+
+ return current_image, page_info, current_json, session_state
+
+def get_test_images():
+ """Gets the list of test images"""
+ test_images = []
+ test_dir = current_config['test_images_dir']
+ if os.path.exists(test_dir):
+ test_images = [os.path.join(test_dir, name) for name in os.listdir(test_dir)
+ if name.lower().endswith(('.png', '.jpg', '.jpeg', '.pdf'))]
+ return test_images
+
+def create_temp_session_dir():
+ """Creates a unique temporary directory for each processing request"""
+ session_id = uuid.uuid4().hex[:8]
+ temp_dir = os.path.join(tempfile.gettempdir(), f"dots_ocr_demo_{session_id}")
+ os.makedirs(temp_dir, exist_ok=True)
+ return temp_dir, session_id
+
+def parse_image_with_high_level_api(parser, image, prompt_mode, fitz_preprocess=False):
+ """
+ Processes using the high-level API parse_image from DotsOCRParser
+ """
+ # Create a temporary session directory
+ temp_dir, session_id = create_temp_session_dir()
+
+ try:
+ # Save the PIL Image as a temporary file
+ temp_image_path = os.path.join(temp_dir, f"input_{session_id}.png")
+ image.save(temp_image_path, "PNG")
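+        # Note: parse_image below is given the PIL image directly; the PNG saved
+        # above just keeps an on-disk copy of the input in the session directory.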
+
+ # Use the high-level API parse_image
+ filename = f"demo_{session_id}"
+ results = parser.parse_image(
+ input_path=image,
+ filename=filename,
+ prompt_mode=prompt_mode,
+ save_dir=temp_dir,
+ fitz_preprocess=fitz_preprocess
+ )
+
+ # Parse the results
+ if not results:
+ raise ValueError("No results returned from parser")
+
+ result = results[0] # parse_image returns a list with a single result
+
+ layout_image = None
+ if 'layout_image_path' in result and os.path.exists(result['layout_image_path']):
+ layout_image = Image.open(result['layout_image_path'])
+
+ cells_data = None
+ if 'layout_info_path' in result and os.path.exists(result['layout_info_path']):
+ with open(result['layout_info_path'], 'r', encoding='utf-8') as f:
+ cells_data = json.load(f)
+
+ md_content = None
+ if 'md_content_path' in result and os.path.exists(result['md_content_path']):
+ with open(result['md_content_path'], 'r', encoding='utf-8') as f:
+ md_content = f.read()
+
+ return {
+ 'layout_image': layout_image,
+ 'cells_data': cells_data,
+ 'md_content': md_content,
+ 'filtered': result.get('filtered', False),
+ 'temp_dir': temp_dir,
+ 'session_id': session_id,
+ 'result_paths': result,
+ 'input_width': result.get('input_width', 0),
+ 'input_height': result.get('input_height', 0),
+ }
+ except Exception as e:
+ if os.path.exists(temp_dir):
+ shutil.rmtree(temp_dir, ignore_errors=True)
+ raise e
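+
+# Usage sketch (hypothetical file and prompt mode; adjust to your deployment):
+#   img = read_image_v2("./assets/showcase_origin/sample.png")
+#   out = parse_image_with_high_level_api(dots_parser, img, "prompt_layout_all_en")
+#   print(out['md_content'])
+#   shutil.rmtree(out['temp_dir'], ignore_errors=True)  # the caller owns cleanup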
+
+def parse_pdf_with_high_level_api(parser, pdf_path, prompt_mode):
+ """
+    Processes a PDF with DotsOCRParser's high-level parse_pdf API
+ """
+ # Create a temporary session directory
+ temp_dir, session_id = create_temp_session_dir()
+
+ try:
+ # Use the high-level API parse_pdf
+ filename = f"demo_{session_id}"
+ results = parser.parse_pdf(
+ input_path=pdf_path,
+ filename=filename,
+ prompt_mode=prompt_mode,
+ save_dir=temp_dir
+ )
+
+ # Parse the results
+ if not results:
+ raise ValueError("No results returned from parser")
+
+ # Handle multi-page results
+ parsed_results = []
+ all_md_content = []
+ all_cells_data = []
+
+ for i, result in enumerate(results):
+ page_result = {
+ 'page_no': result.get('page_no', i),
+ 'layout_image': None,
+ 'cells_data': None,
+ 'md_content': None,
+ 'filtered': False
+ }
+
+ # Read the layout image
+ if 'layout_image_path' in result and os.path.exists(result['layout_image_path']):
+ page_result['layout_image'] = Image.open(result['layout_image_path'])
+
+ # Read the JSON data
+ if 'layout_info_path' in result and os.path.exists(result['layout_info_path']):
+ with open(result['layout_info_path'], 'r', encoding='utf-8') as f:
+ page_result['cells_data'] = json.load(f)
+ all_cells_data.extend(page_result['cells_data'])
+
+ # Read the Markdown content
+ if 'md_content_path' in result and os.path.exists(result['md_content_path']):
+ with open(result['md_content_path'], 'r', encoding='utf-8') as f:
+ page_content = f.read()
+ page_result['md_content'] = page_content
+ all_md_content.append(page_content)
+ page_result['filtered'] = result.get('filtered', False)
+ parsed_results.append(page_result)
+
+ combined_md = "\n\n---\n\n".join(all_md_content) if all_md_content else ""
+ return {
+ 'parsed_results': parsed_results,
+ 'combined_md_content': combined_md,
+ 'combined_cells_data': all_cells_data,
+ 'temp_dir': temp_dir,
+ 'session_id': session_id,
+ 'total_pages': len(results)
+ }
+
+ except Exception as e:
+ if os.path.exists(temp_dir):
+ shutil.rmtree(temp_dir, ignore_errors=True)
+ raise e
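+
+# Usage sketch (hypothetical path; prompt mode assumed): parse a whole PDF and
+# inspect the combined output:
+#   pdf_out = parse_pdf_with_high_level_api(dots_parser, "./assets/sample.pdf", "prompt_layout_all_en")
+#   print(pdf_out['total_pages'])
+#   print(pdf_out['combined_md_content'][:500])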
+
+# ==================== Core Processing Function ====================
+def process_image_inference(session_state, test_image_input, file_input,
+ prompt_mode, server_ip, server_port, min_pixels, max_pixels,
+ fitz_preprocess=False
+ ):
+ """Core function to handle image/PDF inference"""
+ # Use session_state instead of global variables
+ processing_results = session_state['processing_results']
+ pdf_cache = session_state['pdf_cache']
+
+ if processing_results.get('temp_dir') and os.path.exists(processing_results['temp_dir']):
+ try:
+ shutil.rmtree(processing_results['temp_dir'], ignore_errors=True)
+ except Exception as e:
+ print(f"Failed to clean up previous temporary directory: {e}")
+
+ # Reset processing results for the current session
+ session_state['processing_results'] = get_initial_session_state()['processing_results']
+ processing_results = session_state['processing_results']
+
+ current_config.update({
+ 'ip': server_ip,
+ 'port_vllm': server_port,
+ 'min_pixels': min_pixels,
+ 'max_pixels': max_pixels
+ })
+
+ # Update parser configuration
+ dots_parser.ip = server_ip
+ dots_parser.port = server_port
+ dots_parser.min_pixels = min_pixels
+ dots_parser.max_pixels = max_pixels
+
+ input_file_path = file_input if file_input else test_image_input
+
+ if not input_file_path:
+ return None, "Please upload image/PDF file or select test image", "", "", gr.update(value=None), None, "", session_state
+
+ file_ext = os.path.splitext(input_file_path)[1].lower()
+
+ try:
+ if file_ext == '.pdf':
+ # MINIMAL CHANGE: The `process_pdf_file` function is now inlined and uses session_state.
+ preview_image, page_info, session_state = load_file_for_preview(input_file_path, session_state)
+ pdf_result = parse_pdf_with_high_level_api(dots_parser, input_file_path, prompt_mode)
+
+ session_state['pdf_cache']["is_parsed"] = True
+ session_state['pdf_cache']["results"] = pdf_result['parsed_results']
+
+ processing_results.update({
+ 'markdown_content': pdf_result['combined_md_content'],
+ 'cells_data': pdf_result['combined_cells_data'],
+ 'temp_dir': pdf_result['temp_dir'],
+ 'session_id': pdf_result['session_id'],
+ 'pdf_results': pdf_result['parsed_results']
+ })
+
+ total_elements = len(pdf_result['combined_cells_data'])
+ info_text = f"**PDF Information:**\n- Total Pages: {pdf_result['total_pages']}\n- Server: {current_config['ip']}:{current_config['port_vllm']}\n- Total Detected Elements: {total_elements}\n- Session ID: {pdf_result['session_id']}"
+
+ current_page_layout_image = preview_image
+ current_page_json = ""
+ if session_state['pdf_cache']["results"]:
+ first_result = session_state['pdf_cache']["results"][0]
+ if 'layout_image' in first_result and first_result['layout_image']:
+ current_page_layout_image = first_result['layout_image']
+ if first_result.get('cells_data'):
+ try:
+ current_page_json = json.dumps(first_result['cells_data'], ensure_ascii=False, indent=2)
+                    except Exception:
+                        current_page_json = str(first_result['cells_data'])
+
+ download_zip_path = None
+ if pdf_result['temp_dir']:
+ download_zip_path = os.path.join(pdf_result['temp_dir'], f"layout_results_{pdf_result['session_id']}.zip")
+ with zipfile.ZipFile(download_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, _, files in os.walk(pdf_result['temp_dir']):
+ for file in files:
+                            if not file.endswith('.zip'):
+                                file_path = os.path.join(root, file)
+                                zipf.write(file_path, os.path.relpath(file_path, pdf_result['temp_dir']))
+
+ return (
+ current_page_layout_image, info_text, pdf_result['combined_md_content'] or "No markdown content generated",
+ pdf_result['combined_md_content'] or "No markdown content generated",
+ gr.update(value=download_zip_path, visible=bool(download_zip_path)), page_info, current_page_json, session_state
+ )
+
+ else: # Image processing
+ image = read_image_v2(input_file_path)
+ session_state['pdf_cache'] = get_initial_session_state()['pdf_cache']
+
+ original_image = image
+ parse_result = parse_image_with_high_level_api(dots_parser, image, prompt_mode, fitz_preprocess)
+
+ if parse_result['filtered']:
+ info_text = f"**Image Information:**\n- Original Size: {original_image.width} x {original_image.height}\n- Processing: JSON parsing failed, using cleaned text output\n- Server: {current_config['ip']}:{current_config['port_vllm']}\n- Session ID: {parse_result['session_id']}"
+ processing_results.update({
+ 'original_image': original_image, 'markdown_content': parse_result['md_content'],
+ 'temp_dir': parse_result['temp_dir'], 'session_id': parse_result['session_id'],
+ 'result_paths': parse_result['result_paths']
+ })
+ return original_image, info_text, parse_result['md_content'], parse_result['md_content'], gr.update(visible=False), None, "", session_state
+
+ md_content_raw = parse_result['md_content'] or "No markdown content generated"
+ processing_results.update({
+ 'original_image': original_image, 'layout_result': parse_result['layout_image'],
+ 'markdown_content': parse_result['md_content'], 'cells_data': parse_result['cells_data'],
+ 'temp_dir': parse_result['temp_dir'], 'session_id': parse_result['session_id'],
+ 'result_paths': parse_result['result_paths']
+ })
+
+ num_elements = len(parse_result['cells_data']) if parse_result['cells_data'] else 0
+ info_text = f"**Image Information:**\n- Original Size: {original_image.width} x {original_image.height}\n- Model Input Size: {parse_result['input_width']} x {parse_result['input_height']}\n- Server: {current_config['ip']}:{current_config['port_vllm']}\n- Detected {num_elements} layout elements\n- Session ID: {parse_result['session_id']}"
+
+ current_json = json.dumps(parse_result['cells_data'], ensure_ascii=False, indent=2) if parse_result['cells_data'] else ""
+
+ download_zip_path = None
+ if parse_result['temp_dir']:
+ download_zip_path = os.path.join(parse_result['temp_dir'], f"layout_results_{parse_result['session_id']}.zip")
+ with zipfile.ZipFile(download_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, _, files in os.walk(parse_result['temp_dir']):
+ for file in files:
+                            if not file.endswith('.zip'):
+                                file_path = os.path.join(root, file)
+                                zipf.write(file_path, os.path.relpath(file_path, parse_result['temp_dir']))
+
+ return (
+ parse_result['layout_image'], info_text, parse_result['md_content'] or "No markdown content generated",
+ md_content_raw, gr.update(value=download_zip_path, visible=bool(download_zip_path)),
+ None, current_json, session_state
+ )
+ except Exception as e:
+ import traceback
+ traceback.print_exc()
+ return None, f"Error during processing: {e}", "", "", gr.update(value=None), None, "", session_state
+
+# MINIMAL CHANGE: Functions now take `session_state` as an argument.
+def clear_all_data(session_state):
+ """Clears all data"""
+ processing_results = session_state['processing_results']
+
+ if processing_results.get('temp_dir') and os.path.exists(processing_results['temp_dir']):
+ try:
+ shutil.rmtree(processing_results['temp_dir'], ignore_errors=True)
+ except Exception as e:
+ print(f"Failed to clean up temporary directory: {e}")
+
+ # Reset the session state by returning a new initial state
+ new_session_state = get_initial_session_state()
+
+ return (
+ None, # Clear file input
+ "", # Clear test image selection
+ None, # Clear result image
+ "Waiting for processing results...", # Reset info display
+ "## Waiting for processing results...", # Reset Markdown display
+ "🕐 Waiting for parsing result...", # Clear raw Markdown text
+ gr.update(visible=False), # Hide download button
+        "<div id='page_info_box'>0 / 0</div>", # Reset page info
+ "🕐 Waiting for parsing result...", # Clear current page JSON
+ new_session_state
+ )
+
+def update_prompt_display(prompt_mode):
+ """Updates the prompt display content"""
+ return dict_promptmode_to_prompt[prompt_mode]
+
+# ==================== Gradio Interface ====================
+def create_gradio_interface():
+ """Creates the Gradio interface"""
+
+ # CSS styles, matching the reference style
+ css = """
+
+ #parse_button {
+        background: #FF576D !important; /* !important ensures this overrides the theme default */
+ border-color: #FF576D !important;
+ }
+    /* Hover color */
+ #parse_button:hover {
+ background: #F72C49 !important;
+ border-color: #F72C49 !important;
+ }
+
+ #page_info_html {
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ height: 100%;
+ margin: 0 12px;
+ }
+
+ #page_info_box {
+ padding: 8px 20px;
+ font-size: 16px;
+ border: 1px solid #bbb;
+ border-radius: 8px;
+ background-color: #f8f8f8;
+ text-align: center;
+ min-width: 80px;
+ box-shadow: 0 1px 3px rgba(0,0,0,0.1);
+ }
+
+ #markdown_output {
+ min-height: 800px;
+ overflow: auto;
+ }
+
+ footer {
+ visibility: hidden;
+ }
+
+ #info_box {
+ padding: 10px;
+ background-color: #f8f9fa;
+ border-radius: 8px;
+ border: 1px solid #dee2e6;
+ margin: 10px 0;
+ font-size: 14px;
+ }
+
+ #result_image {
+ border-radius: 8px;
+ }
+
+ #markdown_tabs {
+ height: 100%;
+ }
+ """
+
+ with gr.Blocks(theme="ocean", css=css, title='dots.ocr') as demo:
+ session_state = gr.State(value=get_initial_session_state())
+
+ # Title
+ gr.HTML("""
+        <div style="text-align: center;">
+            <h1>🔍 dots.ocr</h1>
+            <p>Supports image/PDF layout analysis and structured output</p>
+        </div>
+ """)
+
+ with gr.Row():
+ # Left side: Input and Configuration
+ with gr.Column(scale=1, elem_id="left-panel"):
+ gr.Markdown("### 📥 Upload & Select")
+ file_input = gr.File(
+ label="Upload PDF/Image",
+ type="filepath",
+ file_types=[".pdf", ".jpg", ".jpeg", ".png"],
+ )
+
+ test_images = get_test_images()
+ test_image_input = gr.Dropdown(
+ label="Or Select an Example",
+ choices=[""] + test_images,
+ value="",
+ )
+
+ gr.Markdown("### ⚙️ Prompt & Actions")
+ prompt_mode = gr.Dropdown(
+ label="Select Prompt",
+ choices=["prompt_layout_all_en", "prompt_layout_only_en", "prompt_ocr"],
+ value="prompt_layout_all_en",
+ )
+
+ # Display current prompt content
+ prompt_display = gr.Textbox(
+ label="Current Prompt Content",
+ value=dict_promptmode_to_prompt[list(dict_promptmode_to_prompt.keys())[0]],
+ lines=4,
+ max_lines=8,
+ interactive=False,
+ show_copy_button=True
+ )
+
+ with gr.Row():
+ process_btn = gr.Button("🔍 Parse", variant="primary", scale=2, elem_id="parse_button")
+ clear_btn = gr.Button("🗑️ Clear", variant="secondary", scale=1)
+
+ with gr.Accordion("🛠️ Advanced Configuration", open=False):
+ fitz_preprocess = gr.Checkbox(
+ label="Enable fitz_preprocess for images",
+ value=True,
+ info="Processes image via a PDF-like pipeline (image->pdf->200dpi image). Recommended if your image DPI is low."
+ )
+ with gr.Row():
+ server_ip = gr.Textbox(label="Server IP", value=DEFAULT_CONFIG['ip'])
+ server_port = gr.Number(label="Port", value=DEFAULT_CONFIG['port_vllm'], precision=0)
+ with gr.Row():
+ min_pixels = gr.Number(label="Min Pixels", value=DEFAULT_CONFIG['min_pixels'], precision=0)
+ max_pixels = gr.Number(label="Max Pixels", value=DEFAULT_CONFIG['max_pixels'], precision=0)
+ # Right side: Result Display
+ with gr.Column(scale=6, variant="compact"):
+ with gr.Row():
+ # Result Image
+ with gr.Column(scale=3):
+ gr.Markdown("### 👁️ File Preview")
+ result_image = gr.Image(
+ label="Layout Preview",
+ visible=True,
+ height=800,
+ show_label=False
+ )
+
+ # Page navigation (shown during PDF preview)
+ with gr.Row():
+ prev_btn = gr.Button("⬅ Previous", size="sm")
+ page_info = gr.HTML(
+                            value="<div id='page_info_box'>0 / 0</div>",
+ elem_id="page_info_html"
+ )
+ next_btn = gr.Button("Next ➡", size="sm")
+
+ # Info Display
+ info_display = gr.Markdown(
+ "Waiting for processing results...",
+ elem_id="info_box"
+ )
+
+ # Markdown Result
+ with gr.Column(scale=3):
+ gr.Markdown("### ✔️ Result Display")
+
+ with gr.Tabs(elem_id="markdown_tabs"):
+ with gr.TabItem("Markdown Render Preview"):
+ md_output = gr.Markdown(
+                            "## Please click the Parse button to start parsing, or select a prompt for single-task recognition...",
+ max_height=600,
+ latex_delimiters=[
+ {"left": "$$", "right": "$$", "display": True},
+ {"left": "$", "right": "$", "display": False}
+ ],
+ show_copy_button=False,
+ elem_id="markdown_output"
+ )
+
+ with gr.TabItem("Markdown Raw Text"):
+ md_raw_output = gr.Textbox(
+ value="🕐 Waiting for parsing result...",
+ label="Markdown Raw Text",
+ max_lines=100,
+ lines=38,
+ show_copy_button=True,
+ elem_id="markdown_output",
+ show_label=False
+ )
+
+ with gr.TabItem("Current Page JSON"):
+ current_page_json = gr.Textbox(
+ value="🕐 Waiting for parsing result...",
+ label="Current Page JSON",
+ max_lines=100,
+ lines=38,
+ show_copy_button=True,
+ elem_id="markdown_output",
+ show_label=False
+ )
+
+ # Download Button
+ with gr.Row():
+ download_btn = gr.DownloadButton(
+ "⬇️ Download Results",
+ visible=False
+ )
+
+ # When the prompt mode changes, update the display content
+ prompt_mode.change(
+ fn=update_prompt_display,
+ inputs=prompt_mode,
+ outputs=prompt_display,
+ )
+
+ # Show preview on file upload
+ file_input.upload(
+ # fn=lambda file_data, state: load_file_for_preview(file_data, state),
+ fn=load_file_for_preview,
+ inputs=[file_input, session_state],
+ outputs=[result_image, page_info, session_state]
+ )
+
+ # Also handle test image selection
+ test_image_input.change(
+ # fn=lambda path, state: load_file_for_preview(path, state),
+ fn=load_file_for_preview,
+ inputs=[test_image_input, session_state],
+ outputs=[result_image, page_info, session_state]
+ )
+
+ prev_btn.click(
+ fn=lambda s: turn_page("prev", s),
+ inputs=[session_state],
+ outputs=[result_image, page_info, current_page_json, session_state]
+ )
+
+ next_btn.click(
+ fn=lambda s: turn_page("next", s),
+ inputs=[session_state],
+ outputs=[result_image, page_info, current_page_json, session_state]
+ )
+
+ process_btn.click(
+ fn=process_image_inference,
+ inputs=[
+ session_state, test_image_input, file_input,
+ prompt_mode, server_ip, server_port, min_pixels, max_pixels,
+ fitz_preprocess
+ ],
+ outputs=[
+ result_image, info_display, md_output, md_raw_output,
+ download_btn, page_info, current_page_json, session_state
+ ]
+ )
+
+ clear_btn.click(
+ fn=clear_all_data,
+ inputs=[session_state],
+ outputs=[
+ file_input, test_image_input,
+ result_image, info_display, md_output, md_raw_output,
+ download_btn, page_info, current_page_json, session_state
+ ]
+ )
+
+ return demo
+
+# ==================== Main Program ====================
+if __name__ == "__main__":
+ import sys
+    port = int(sys.argv[1]) if len(sys.argv) > 1 else 7860  # fall back to Gradio's default port if no CLI arg is given
+ demo = create_gradio_interface()
+ demo.queue().launch(
+ server_name="0.0.0.0",
+ server_port=port,
+ debug=True
+ )
diff --git a/demo/demo_gradio_annotion.py b/demo/demo_gradio_annotion.py
new file mode 100644
index 0000000000000000000000000000000000000000..a9c695c44fee122224dbabab2078bef3be84ceb0
--- /dev/null
+++ b/demo/demo_gradio_annotion.py
@@ -0,0 +1,666 @@
+"""
+Layout Inference Web Application with Gradio - Annotation Version
+
+A Gradio-based layout inference tool that supports image uploads and multiple backend inference engines.
+This version adds an image annotation feature, allowing users to draw bounding boxes on an image and send both the image and the boxes to the model.
+"""
+
+import gradio as gr
+import json
+import os
+import io
+import tempfile
+import base64
+import zipfile
+import uuid
+import re
+from pathlib import Path
+from PIL import Image
+import requests
+from gradio_image_annotation import image_annotator
+
+# Local utility imports
+from dots_ocr.utils import dict_promptmode_to_prompt
+from dots_ocr.utils.consts import MIN_PIXELS, MAX_PIXELS
+from dots_ocr.utils.demo_utils.display import read_image
+from dots_ocr.utils.doc_utils import load_images_from_pdf
+
+# Add DotsOCRParser import
+from dots_ocr.parser import DotsOCRParser
+
+# ==================== Configuration ====================
+DEFAULT_CONFIG = {
+ 'ip': "127.0.0.1",
+ 'port_vllm': 8000,
+ 'min_pixels': MIN_PIXELS,
+ 'max_pixels': MAX_PIXELS,
+ 'test_images_dir': "./assets/showcase_origin",
+}
+
+# ==================== Global Variables ====================
+# Store the current configuration
+current_config = DEFAULT_CONFIG.copy()
+
+# Create a DotsOCRParser instance
+dots_parser = DotsOCRParser(
+ ip=DEFAULT_CONFIG['ip'],
+ port=DEFAULT_CONFIG['port_vllm'],
+ dpi=200,
+ min_pixels=DEFAULT_CONFIG['min_pixels'],
+ max_pixels=DEFAULT_CONFIG['max_pixels']
+)
+
+# Store processing results
+processing_results = {
+ 'original_image': None,
+ 'processed_image': None,
+ 'layout_result': None,
+ 'markdown_content': None,
+ 'cells_data': None,
+ 'temp_dir': None,
+ 'session_id': None,
+ 'result_paths': None,
+ 'annotation_data': None # Store annotation data
+}
+
+# ==================== Utility Functions ====================
+def read_image_v2(img):
+ """Reads an image, supporting URLs and local paths."""
+ if isinstance(img, str) and img.startswith(("http://", "https://")):
+ with requests.get(img, stream=True) as response:
+ response.raise_for_status()
+ img = Image.open(io.BytesIO(response.content))
+ elif isinstance(img, str):
+ img, _, _ = read_image(img, use_native=True)
+ elif isinstance(img, Image.Image):
+ pass
+ else:
+ raise ValueError(f"Invalid image type: {type(img)}")
+ return img
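+
+# Usage sketch (the file name is illustrative; the directory comes from
+# DEFAULT_CONFIG['test_images_dir'], and the URL is a hypothetical example):
+#
+#   img = read_image_v2("./assets/showcase_origin/sample.jpg")  # local path
+#   img = read_image_v2("https://example.com/page.png")         # remote URL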
+
+def get_test_images():
+ """Gets the list of test images."""
+ test_images = []
+ test_dir = current_config['test_images_dir']
+ if os.path.exists(test_dir):
+ test_images = [os.path.join(test_dir, name) for name in os.listdir(test_dir)
+ if name.lower().endswith(('.png', '.jpg', '.jpeg'))]
+ return test_images
+
+def create_temp_session_dir():
+ """Creates a unique temporary directory for each processing request."""
+ session_id = uuid.uuid4().hex[:8]
+ temp_dir = os.path.join(tempfile.gettempdir(), f"dots_ocr_demo_{session_id}")
+ os.makedirs(temp_dir, exist_ok=True)
+ return temp_dir, session_id
+
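+# e.g. create_temp_session_dir() -> ("/tmp/dots_ocr_demo_1a2b3c4d", "1a2b3c4d")
+# on a typical Linux host; tempfile.gettempdir() resolves per platform.
+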
+def parse_image_with_bbox(parser, image, prompt_mode, bbox=None, fitz_preprocess=False):
+ """
+ Processes an image using DotsOCRParser, with support for the bbox parameter.
+ """
+ # Create a temporary session directory
+ temp_dir, session_id = create_temp_session_dir()
+
+ try:
+ # Save the PIL Image to a temporary file
+ temp_image_path = os.path.join(temp_dir, f"input_{session_id}.png")
+ image.save(temp_image_path, "PNG")
+
+ # Use the high-level parse_image interface, passing the bbox parameter
+ filename = f"demo_{session_id}"
+ results = parser.parse_image(
+ input_path=temp_image_path,
+ filename=filename,
+ prompt_mode=prompt_mode,
+ save_dir=temp_dir,
+ bbox=bbox,
+ fitz_preprocess=fitz_preprocess
+ )
+
+ # Parse the results
+ if not results:
+ raise ValueError("No results returned from parser")
+
+ result = results[0] # parse_image returns a list with a single result
+
+ # Read the result files
+ layout_image = None
+ cells_data = None
+ md_content = None
+ filtered = False
+
+ # Read the layout image
+ if 'layout_image_path' in result and os.path.exists(result['layout_image_path']):
+ layout_image = Image.open(result['layout_image_path'])
+
+ # Read the JSON data
+ if 'layout_info_path' in result and os.path.exists(result['layout_info_path']):
+ with open(result['layout_info_path'], 'r', encoding='utf-8') as f:
+ cells_data = json.load(f)
+
+ # Read the Markdown content
+ if 'md_content_path' in result and os.path.exists(result['md_content_path']):
+ with open(result['md_content_path'], 'r', encoding='utf-8') as f:
+ md_content = f.read()
+
+ # Check for the original response file (if JSON parsing fails)
+ if 'filtered' in result:
+ filtered = result['filtered']
+
+ return {
+ 'layout_image': layout_image,
+ 'cells_data': cells_data,
+ 'md_content': md_content,
+ 'filtered': filtered,
+ 'temp_dir': temp_dir,
+ 'session_id': session_id,
+ 'result_paths': result
+ }
+
+ except Exception as e:
+ # Clean up the temporary directory on error
+ import shutil
+ if os.path.exists(temp_dir):
+ shutil.rmtree(temp_dir, ignore_errors=True)
+ raise e
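+
+# Usage sketch (a minimal example, assuming a reachable vLLM server; the bbox
+# values are illustrative and follow the [xmin, ymin, xmax, ymax] pixel
+# convention used by process_annotation_data below):
+#
+#   img = Image.open("demo/demo_image1.jpg")
+#   result = parse_image_with_bbox(
+#       dots_parser, img, "prompt_grounding_ocr",
+#       bbox=[100, 120, 480, 360],
+#   )
+#   print(result['md_content'])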
+
+def process_annotation_data(annotation_data):
+ """Processes annotation data, converting it to the format required by the model."""
+ if not annotation_data or not annotation_data.get('boxes'):
+ return None, None
+
+ # Get image and box data
+ image = annotation_data.get('image')
+ boxes = annotation_data.get('boxes', [])
+
+ if not boxes:
+ return image, None
+
+ # Ensure the image is in PIL Image format
+ if image is not None:
+ import numpy as np
+ if isinstance(image, np.ndarray):
+ image = Image.fromarray(image)
+ elif not isinstance(image, Image.Image):
+ # If it's another format, try to convert it
+ try:
+ image = Image.open(image) if isinstance(image, str) else Image.fromarray(image)
+ except Exception as e:
+ print(f"Image format conversion failed: {e}")
+ return None, None
+
+ # Get the coordinate information of the box (only one box)
+ box = boxes[0]
+ bbox = [box['xmin'], box['ymin'], box['xmax'], box['ymax']]
+
+ return image, bbox
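+
+# The annotator component hands this function a dict shaped roughly like
+# {'image': <PIL.Image or np.ndarray>, 'boxes': [{'xmin': ..., 'ymin': ...,
+# 'xmax': ..., 'ymax': ...}]}. With illustrative values:
+#
+#   img, bbox = process_annotation_data({
+#       'image': some_pil_image,
+#       'boxes': [{'xmin': 10, 'ymin': 20, 'xmax': 200, 'ymax': 150}],
+#   })
+#   # img is a PIL.Image, bbox == [10, 20, 200, 150]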
+
+# ==================== Core Processing Function ====================
+def process_image_inference_with_annotation(annotation_data, test_image_input,
+ prompt_mode, server_ip, server_port, min_pixels, max_pixels,
+ fitz_preprocess=False
+ ):
+ """Core function for image inference, supporting annotation data."""
+ global current_config, processing_results, dots_parser
+
+ # First, clean up previous processing results
+ if processing_results.get('temp_dir') and os.path.exists(processing_results['temp_dir']):
+ import shutil
+ try:
+ shutil.rmtree(processing_results['temp_dir'], ignore_errors=True)
+ except Exception as e:
+ print(f"Failed to clean up previous temporary directory: {e}")
+
+ # Reset processing results
+ processing_results = {
+ 'original_image': None,
+ 'processed_image': None,
+ 'layout_result': None,
+ 'markdown_content': None,
+ 'cells_data': None,
+ 'temp_dir': None,
+ 'session_id': None,
+ 'result_paths': None,
+ 'annotation_data': annotation_data
+ }
+
+ # Update configuration
+ current_config.update({
+ 'ip': server_ip,
+ 'port_vllm': server_port,
+ 'min_pixels': min_pixels,
+ 'max_pixels': max_pixels
+ })
+
+ # Update parser configuration
+ dots_parser.ip = server_ip
+ dots_parser.port = server_port
+ dots_parser.min_pixels = min_pixels
+ dots_parser.max_pixels = max_pixels
+
+ # Determine the input source and process annotation data
+ image = None
+ bbox = None
+
+ # Prioritize processing annotation data
+ if annotation_data and annotation_data.get('image') is not None:
+ image, bbox = process_annotation_data(annotation_data)
+ if image is not None:
+ # If there's a bbox, force the use of 'prompt_grounding_ocr' mode
+ assert bbox is not None
+ prompt_mode = "prompt_grounding_ocr"
+
+ # If there's no annotation data, check the test image input
+ if image is None and test_image_input and test_image_input != "":
+ try:
+ image = read_image_v2(test_image_input)
+ except Exception as e:
+ return None, f"Failed to read test image: {e}", "", "", gr.update(value=None), ""
+
+ if image is None:
+ return None, "Please select a test image or add an image in the annotation component", "", "", gr.update(value=None), ""
+ if bbox is None:
+ return "Please select a bounding box by mouse", "Please select a bounding box by mouse", "", "", gr.update(value=None)
+
+ try:
+ # Process using DotsOCRParser, passing the bbox parameter
+ original_image = image
+ parse_result = parse_image_with_bbox(dots_parser, image, prompt_mode, bbox, fitz_preprocess)
+
+ # Extract parsing results
+ layout_image = parse_result['layout_image']
+ cells_data = parse_result['cells_data']
+ md_content = parse_result['md_content']
+ filtered = parse_result['filtered']
+
+ # Store the results
+ processing_results.update({
+ 'original_image': original_image,
+ 'processed_image': None,
+ 'layout_result': layout_image,
+ 'markdown_content': md_content,
+ 'cells_data': cells_data,
+ 'temp_dir': parse_result['temp_dir'],
+ 'session_id': parse_result['session_id'],
+ 'result_paths': parse_result['result_paths'],
+ 'annotation_data': annotation_data
+ })
+
+ # Handle the case where parsing fails
+ if filtered:
+ info_text = f"""
+**Image Information:**
+- Original Dimensions: {original_image.width} x {original_image.height}
+- Processing Mode: {'Region OCR' if bbox else 'Full Image OCR'}
+- Processing Status: JSON parsing failed, using cleaned text output
+- Server: {current_config['ip']}:{current_config['port_vllm']}
+- Session ID: {parse_result['session_id']}
+- Box Coordinates: {bbox if bbox else 'None'}
+ """
+
+ return (
+ md_content or "No markdown content generated",
+ info_text,
+ md_content or "No markdown content generated",
+ md_content or "No markdown content generated",
+ gr.update(visible=False),
+ ""
+ )
+
+ # Handle the case where JSON parsing succeeds
+ num_elements = len(cells_data) if cells_data else 0
+ info_text = f"""
+**Image Information:**
+- Original Dimensions: {original_image.width} x {original_image.height}
+- Processing Mode: {'Region OCR' if bbox else 'Full Image OCR'}
+- Server: {current_config['ip']}:{current_config['port_vllm']}
+- Detected {num_elements} layout elements
+- Session ID: {parse_result['session_id']}
+- Box Coordinates: {bbox if bbox else 'None'}
+ """
+
+ # Current page JSON output
+ current_json = ""
+ if cells_data:
+ try:
+ current_json = json.dumps(cells_data, ensure_ascii=False, indent=2)
+        except Exception:
+ current_json = str(cells_data)
+
+ # Create a downloadable ZIP file
+ download_zip_path = None
+ if parse_result['temp_dir']:
+ download_zip_path = os.path.join(parse_result['temp_dir'], f"layout_results_{parse_result['session_id']}.zip")
+ try:
+ with zipfile.ZipFile(download_zip_path, 'w', zipfile.ZIP_DEFLATED) as zipf:
+ for root, dirs, files in os.walk(parse_result['temp_dir']):
+ for file in files:
+ if file.endswith('.zip'):
+ continue
+ file_path = os.path.join(root, file)
+ arcname = os.path.relpath(file_path, parse_result['temp_dir'])
+ zipf.write(file_path, arcname)
+ except Exception as e:
+ print(f"Failed to create download ZIP: {e}")
+ download_zip_path = None
+
+ return (
+ md_content or "No markdown content generated",
+ info_text,
+ md_content or "No markdown content generated",
+ md_content or "No markdown content generated",
+ gr.update(value=download_zip_path, visible=True) if download_zip_path else gr.update(visible=False),
+ current_json
+ )
+
+ except Exception as e:
+ return f"An error occurred during processing: {e}", f"An error occurred during processing: {e}", "", "", gr.update(value=None), ""
+
+def load_image_to_annotator(test_image_input):
+ """Loads an image into the annotation component."""
+ image = None
+
+ # Check the test image input
+ if test_image_input and test_image_input != "":
+ try:
+ image = read_image_v2(test_image_input)
+ except Exception as e:
+ return None
+
+ if image is None:
+ return None
+
+ # Return the format required by the annotation component
+ return {
+ "image": image,
+ "boxes": []
+ }
+
+def clear_all_data():
+ """Clears all data."""
+ global processing_results
+
+ # Clean up the temporary directory
+ if processing_results.get('temp_dir') and os.path.exists(processing_results['temp_dir']):
+ import shutil
+ try:
+ shutil.rmtree(processing_results['temp_dir'], ignore_errors=True)
+ except Exception as e:
+ print(f"Failed to clean up temporary directory: {e}")
+
+ # Reset processing results
+ processing_results = {
+ 'original_image': None,
+ 'processed_image': None,
+ 'layout_result': None,
+ 'markdown_content': None,
+ 'cells_data': None,
+ 'temp_dir': None,
+ 'session_id': None,
+ 'result_paths': None,
+ 'annotation_data': None
+ }
+
+ return (
+ "", # Clear test image selection
+ None, # Clear annotation component
+ "Waiting for processing results...", # Reset info display
+ "## Waiting for processing results...", # Reset Markdown display
+ "🕐 Waiting for parsing results...", # Clear raw Markdown text
+ gr.update(visible=False), # Hide download button
+ "🕐 Waiting for parsing results..." # Clear JSON
+ )
+
+def update_prompt_display(prompt_mode):
+ """Updates the displayed prompt content."""
+ return dict_promptmode_to_prompt[prompt_mode]
+
+# ==================== Gradio Interface ====================
+def create_gradio_interface():
+ """Creates the Gradio interface."""
+
+ # CSS styling to match the reference style
+ css = """
+ footer {
+ visibility: hidden;
+ }
+
+ #info_box {
+ padding: 10px;
+ background-color: #f8f9fa;
+ border-radius: 8px;
+ border: 1px solid #dee2e6;
+ margin: 10px 0;
+ font-size: 14px;
+ }
+
+ #markdown_tabs {
+ height: 100%;
+ }
+
+ #annotation_component {
+ border-radius: 8px;
+ }
+ """
+
+ with gr.Blocks(theme="ocean", css=css, title='dots.ocr - Annotation') as demo:
+
+ # Title
+ gr.HTML("""
+        <div style="text-align: center;">
+            <h1>🔍 dots.ocr - Annotation Version</h1>
+            <p>Supports image annotation, drawing boxes, and sending box information to the model for OCR.</p>
+        </div>
+ """)
+
+ with gr.Row():
+ # Left side: Input and Configuration
+ with gr.Column(scale=1, variant="compact"):
+ gr.Markdown("### 📁 Select Example")
+ test_images = get_test_images()
+ test_image_input = gr.Dropdown(
+ label="Select Example",
+ choices=[""] + test_images,
+ value="",
+ show_label=True
+ )
+
+ # Button to load image into the annotation component
+ load_btn = gr.Button("📷 Load Image to Annotation Area", variant="secondary")
+
+ prompt_mode = gr.Dropdown(
+ label="Select Prompt",
+ # choices=["prompt_layout_all_en", "prompt_layout_only_en", "prompt_ocr", "prompt_grounding_ocr"],
+ choices=["prompt_grounding_ocr"],
+ value="prompt_grounding_ocr",
+ show_label=True,
+ info="If a box is drawn, 'prompt_grounding_ocr' mode will be used automatically."
+ )
+
+ # Display the current prompt content
+ prompt_display = gr.Textbox(
+ label="Current Prompt Content",
+ # value=dict_promptmode_to_prompt[list(dict_promptmode_to_prompt.keys())[0]],
+ value=dict_promptmode_to_prompt["prompt_grounding_ocr"],
+ lines=4,
+ max_lines=8,
+ interactive=False,
+ show_copy_button=True
+ )
+
+ gr.Markdown("### ⚙️ Actions")
+ process_btn = gr.Button("🔍 Parse", variant="primary")
+ clear_btn = gr.Button("🗑️ Clear", variant="secondary")
+
+ gr.Markdown("### 🛠️ Configuration")
+
+ fitz_preprocess = gr.Checkbox(
+ label="Enable fitz_preprocess",
+ value=False,
+ info="Performs fitz preprocessing on the image input, converting the image to a PDF and then to a 200dpi image."
+ )
+
+ with gr.Row():
+ server_ip = gr.Textbox(
+ label="Server IP",
+ value=DEFAULT_CONFIG['ip']
+ )
+ server_port = gr.Number(
+ label="Port",
+ value=DEFAULT_CONFIG['port_vllm'],
+ precision=0
+ )
+
+ with gr.Row():
+ min_pixels = gr.Number(
+ label="Min Pixels",
+ value=DEFAULT_CONFIG['min_pixels'],
+ precision=0
+ )
+ max_pixels = gr.Number(
+ label="Max Pixels",
+ value=DEFAULT_CONFIG['max_pixels'],
+ precision=0
+ )
+
+ # Right side: Result Display
+ with gr.Column(scale=6, variant="compact"):
+ with gr.Row():
+ # Image Annotation Area
+ with gr.Column(scale=3):
+ gr.Markdown("### 🎯 Image Annotation Area")
+ gr.Markdown("""
+ **Instructions:**
+ - Method 1: Select an example image on the left and click "Load Image to Annotation Area".
+ - Method 2: Upload an image directly in the annotation area below (drag and drop or click to upload).
+ - Use the mouse to draw a box on the image to select the region for recognition.
+ - Only one box can be drawn. To draw a new one, please delete the old one first.
+ - **Hotkey: Press the Delete key to remove the selected box.**
+ - After drawing a box, clicking Parse will automatically use the Region OCR mode.
+ """)
+
+ annotator = image_annotator(
+ value=None,
+ label="Image Annotation",
+ height=600,
+ show_label=False,
+ elem_id="annotation_component",
+ single_box=True, # Only allow one box; a new box will replace the old one
+ box_min_size=10,
+ interactive=True,
+ disable_edit_boxes=True, # Disable the edit dialog
+ label_list=["OCR Region"], # Set the default label
+ label_colors=[(255, 0, 0)], # Set color to red
+ use_default_label=True, # Use the default label
+ image_type="pil" # Ensure it returns a PIL Image format
+ )
+
+ # Information Display
+ info_display = gr.Markdown(
+ "Waiting for processing results...",
+ elem_id="info_box"
+ )
+
+ # Result Display Area
+ with gr.Column(scale=3):
+ gr.Markdown("### ✅ Results")
+
+ with gr.Tabs(elem_id="markdown_tabs"):
+ with gr.TabItem("Markdown Rendered View"):
+ md_output = gr.Markdown(
+ "## Please upload an image and click the Parse button for recognition...",
+ label="Markdown Preview",
+ max_height=1000,
+ latex_delimiters=[
+ {"left": "$$", "right": "$$", "display": True},
+ {"left": "$", "right": "$", "display": False},
+ ],
+ show_copy_button=False,
+ elem_id="markdown_output"
+ )
+
+ with gr.TabItem("Markdown Raw Text"):
+ md_raw_output = gr.Textbox(
+ value="🕐 Waiting for parsing results...",
+ label="Markdown Raw Text",
+ max_lines=100,
+ lines=38,
+ show_copy_button=True,
+ elem_id="markdown_output",
+ show_label=False
+ )
+
+ with gr.TabItem("JSON Result"):
+ json_output = gr.Textbox(
+ value="🕐 Waiting for parsing results...",
+ label="JSON Result",
+ max_lines=100,
+ lines=38,
+ show_copy_button=True,
+ elem_id="markdown_output",
+ show_label=False
+ )
+
+ # Download Button
+ with gr.Row():
+ download_btn = gr.DownloadButton(
+ "⬇️ Download Results",
+ visible=False
+ )
+
+ # Event Binding
+
+ # When the prompt mode changes, update the displayed content
+ prompt_mode.change(
+ fn=update_prompt_display,
+ inputs=prompt_mode,
+ outputs=prompt_display,
+ show_progress=False
+ )
+
+ # Load image into the annotation component
+ load_btn.click(
+ fn=load_image_to_annotator,
+ inputs=[test_image_input],
+ outputs=annotator,
+ show_progress=False
+ )
+
+ # Process Inference
+ process_btn.click(
+ fn=process_image_inference_with_annotation,
+ inputs=[
+ annotator, test_image_input,
+ prompt_mode, server_ip, server_port, min_pixels, max_pixels,
+ fitz_preprocess
+ ],
+ outputs=[
+ md_output, info_display, md_raw_output, md_raw_output,
+ download_btn, json_output
+ ],
+ show_progress=True
+ )
+
+ # Clear Data
+ clear_btn.click(
+ fn=clear_all_data,
+ outputs=[
+ test_image_input, annotator,
+ info_display, md_output, md_raw_output,
+ download_btn, json_output
+ ],
+ show_progress=False
+ )
+
+ return demo
+
+# ==================== Main Program ====================
+if __name__ == "__main__":
+ demo = create_gradio_interface()
+ demo.queue().launch(
+ server_name="0.0.0.0",
+ server_port=7861, # Use a different port to avoid conflicts
+ debug=True
+ )
diff --git a/demo/demo_hf.py b/demo/demo_hf.py
new file mode 100644
index 0000000000000000000000000000000000000000..1fb7530fae8a7392c8e63199912aa7ad7de8983c
--- /dev/null
+++ b/demo/demo_hf.py
@@ -0,0 +1,71 @@
+import os
+if "LOCAL_RANK" not in os.environ:
+ os.environ["LOCAL_RANK"] = "0"
+
+import torch
+from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
+from qwen_vl_utils import process_vision_info
+from dots_ocr.utils import dict_promptmode_to_prompt
+
+def inference(image_path, prompt, model, processor):
+ # image_path = "demo/demo_image1.jpg"
+ messages = [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image",
+ "image": image_path
+ },
+ {"type": "text", "text": prompt}
+ ]
+ }
+ ]
+
+
+ # Preparation for inference
+ text = processor.apply_chat_template(
+ messages,
+ tokenize=False,
+ add_generation_prompt=True
+ )
+ image_inputs, video_inputs = process_vision_info(messages)
+ inputs = processor(
+ text=[text],
+ images=image_inputs,
+ videos=video_inputs,
+ padding=True,
+ return_tensors="pt",
+ )
+
+ inputs = inputs.to("cuda")
+
+ # Inference: Generation of the output
+ generated_ids = model.generate(**inputs, max_new_tokens=24000)
+ generated_ids_trimmed = [
+ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
+ ]
+ output_text = processor.batch_decode(
+ generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
+ )
+ print(output_text)
+
+
+
+if __name__ == "__main__":
+ # We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image and video scenarios.
+ model_path = "./weights/DotsOCR"
+ model = AutoModelForCausalLM.from_pretrained(
+ model_path,
+ attn_implementation="flash_attention_2",
+ torch_dtype=torch.bfloat16,
+ device_map="auto",
+ trust_remote_code=True
+ )
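+    # Note: if flash-attn is not installed, attn_implementation="sdpa" is a
+    # slower drop-in alternative supported by recent transformers releases.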
+ processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True)
+
+ image_path = "demo/demo_image1.jpg"
+ for prompt_mode, prompt in dict_promptmode_to_prompt.items():
+ print(f"prompt: {prompt}")
+ inference(image_path, prompt, model, processor)
+
\ No newline at end of file
diff --git a/demo/demo_image1.jpg b/demo/demo_image1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..488d8d30d8239567e50e634059c50797638e5974
--- /dev/null
+++ b/demo/demo_image1.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90345584ccc2c4a883779e5d47693276e8cf3fe752700af4f03b3142ab46cfa2
+size 772990
diff --git a/demo/demo_pdf1.pdf b/demo/demo_pdf1.pdf
new file mode 100644
index 0000000000000000000000000000000000000000..5c3f9a91e0f4acc415de6bb1ae4fa8901bb5f8cf
--- /dev/null
+++ b/demo/demo_pdf1.pdf
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:570c44a595f52e963d0522fb561b338c327550b37974448f4e4f43c605b72f42
+size 461448
diff --git a/demo/demo_streamlit.py b/demo/demo_streamlit.py
new file mode 100644
index 0000000000000000000000000000000000000000..44f6d4cc67e3df6e13c9e7616c93aba061010e3f
--- /dev/null
+++ b/demo/demo_streamlit.py
@@ -0,0 +1,222 @@
+"""
+Layout Inference Web Application
+
+A Streamlit-based layout inference tool that supports image uploads and multiple backend inference engines.
+"""
+
+import streamlit as st
+import json
+import os
+import io
+import tempfile
+from PIL import Image
+import requests
+
+# Local utility imports
+
+# from utils import infer
+
+from dots_ocr.utils import dict_promptmode_to_prompt
+from dots_ocr.utils.format_transformer import layoutjson2md
+from dots_ocr.utils.layout_utils import draw_layout_on_image, post_process_cells
+from dots_ocr.utils.image_utils import get_input_dimensions, get_image_by_fitz_doc
+from dots_ocr.model.inference import inference_with_vllm
+from dots_ocr.utils.consts import MIN_PIXELS, MAX_PIXELS
+
+import os
+from PIL import Image
+from dots_ocr.utils.demo_utils.display import read_image
+
+
+
+# ==================== Configuration ====================
+DEFAULT_CONFIG = {
+ 'ip': "127.0.0.1",
+ 'port_vllm': 8000,
+ 'min_pixels': MIN_PIXELS,
+ 'max_pixels': MAX_PIXELS,
+ 'test_images_dir': "./assets/showcase_origin",
+}
+
+# ==================== Utility Functions ====================
+
+
+@st.cache_resource
+def read_image_v2(img: str):
+ if img.startswith(("http://", "https://")):
+ with requests.get(img, stream=True) as response:
+ response.raise_for_status()
+ img = Image.open(io.BytesIO(response.content))
+
+ if isinstance(img, str):
+ # img = transform_image_path(img)
+ img, _, _ = read_image(img, use_native=True)
+ elif isinstance(img, Image.Image):
+ pass
+ else:
+ raise ValueError(f"Invalid image type: {type(img)}")
+ return img
+
+
+# ==================== UI Components ====================
+def create_config_sidebar():
+ """Create configuration sidebar"""
+ st.sidebar.header("Configuration Parameters")
+
+ config = {}
+ config['prompt_key'] = st.sidebar.selectbox("Prompt Mode", list(dict_promptmode_to_prompt.keys()))
+ config['ip'] = st.sidebar.text_input("Server IP", DEFAULT_CONFIG['ip'])
+ config['port'] = st.sidebar.number_input("Port", min_value=1000, max_value=9999, value=DEFAULT_CONFIG['port_vllm'])
+ # config['eos_word'] = st.sidebar.text_input("EOS Word", DEFAULT_CONFIG['eos_word'])
+
+ # Image configuration
+ st.sidebar.subheader("Image Configuration")
+ config['min_pixels'] = st.sidebar.number_input("Min Pixels", value=DEFAULT_CONFIG['min_pixels'])
+ config['max_pixels'] = st.sidebar.number_input("Max Pixels", value=DEFAULT_CONFIG['max_pixels'])
+
+ return config
+
+def get_image_input():
+ """Get image input"""
+ st.markdown("#### Image Input")
+
+ input_mode = st.pills(label="Select input method", options=["Upload Image", "Enter Image URL/Path", "Select Test Image"], key="input_mode", label_visibility="collapsed")
+
+ if input_mode == "Upload Image":
+ # File uploader
+ uploaded_file = st.file_uploader("Upload Image", type=["png", "jpg", "jpeg"])
+ if uploaded_file is not None:
+ with tempfile.NamedTemporaryFile(delete=False, suffix='.png') as tmp_file:
+ tmp_file.write(uploaded_file.getvalue())
+ return tmp_file.name
+ elif input_mode == 'Enter Image URL/Path':
+ # URL input
+ img_url_input = st.text_input("Enter Image URL/Path")
+ return img_url_input
+
+ elif input_mode == 'Select Test Image':
+ # Test image selection
+ test_images = []
+ test_dir = DEFAULT_CONFIG['test_images_dir']
+ if os.path.exists(test_dir):
+ test_images = [os.path.join(test_dir, name) for name in os.listdir(test_dir)]
+ img_url_test = st.selectbox("Select Test Image", [""] + test_images)
+ return img_url_test
+ else:
+ raise ValueError(f"Invalid input mode: {input_mode}")
+
+ return None
+
+
+
+def process_and_display_results(output: str, image: Image.Image, config: dict):
+ """Process and display inference results"""
+ prompt, response = output['prompt'], output['response']
+
+ try:
+ col1, col2 = st.columns(2)
+ # st.markdown('---')
+ cells = json.loads(response)
+ # image = Image.open(img_url)
+
+ # Post-processing
+ cells = post_process_cells(
+ image, cells,
+ image.width, image.height,
+ min_pixels=config['min_pixels'],
+ max_pixels=config['max_pixels']
+ )
+
+ # Calculate input dimensions
+ input_width, input_height = get_input_dimensions(
+ image,
+ min_pixels=config['min_pixels'],
+ max_pixels=config['max_pixels']
+ )
+ st.markdown('---')
+ st.write(f'Input Dimensions: {input_width} x {input_height}')
+ # st.write(f'Prompt: {prompt}')
+        # st.markdown(f'Original model output: {result}', unsafe_allow_html=True)
+        # st.write('Original model output:')
+        # st.write(response)
+        # st.write('Post-processed result:', str(cells))
+ st.text_area('Original Model Output', response, height=200)
+ st.text_area('Post-processed Result', str(cells), height=200)
+        # Display the results
+        # st.title("Layout Inference Results")
+
+ with col1:
+ # st.markdown("##### 可视化结果")
+ new_image = draw_layout_on_image(
+ image, cells,
+ resized_height=None, resized_width=None,
+ # text_key='text',
+ fill_bbox=True, draw_bbox=True
+ )
+ st.markdown('##### Visualization Result')
+ st.image(new_image, width=new_image.width)
+ # st.write(f"尺寸: {new_image.width} x {new_image.height}")
+
+ with col2:
+ # st.markdown("##### Markdown格式")
+ md_code = layoutjson2md(image, cells, text_key='text')
+ # md_code = fix_streamlit_formula(md_code)
+ st.markdown('##### Markdown Format')
+ st.markdown(md_code, unsafe_allow_html=True)
+
+ except json.JSONDecodeError:
+ st.error("Model output is not a valid JSON format")
+ except Exception as e:
+ st.error(f"Error processing results: {e}")
+
+# ==================== Main Application ====================
+def main():
+ """Main application function"""
+ st.set_page_config(page_title="Layout Inference Tool", layout="wide")
+ st.title("🔍 Layout Inference Tool")
+
+ # Configuration
+ config = create_config_sidebar()
+ prompt = dict_promptmode_to_prompt[config['prompt_key']]
+ st.sidebar.info(f"Current Prompt: {prompt}")
+
+ # Image input
+ img_url = get_image_input()
+ start_button = st.button('🚀 Start Inference', type="primary")
+
+ if img_url is not None and img_url.strip() != "":
+ try:
+ # processed_image = read_image_v2(img_url)
+ origin_image = read_image_v2(img_url)
+ st.write(f"Original Dimensions: {origin_image.width} x {origin_image.height}")
+ # processed_image = get_image_by_fitz_doc(origin_image, target_dpi=200)
+ processed_image = origin_image
+ except Exception as e:
+ st.error(f"Failed to read image: {e}")
+ return
+ else:
+ st.info("Please enter an image URL/path or upload an image")
+ return
+
+ output = None
+ # Inference button
+ if start_button:
+ with st.spinner(f"Inferring... Server: {config['ip']}:{config['port']}"):
+
+ response = inference_with_vllm(
+ processed_image, prompt, config['ip'], config['port'],
+ # config['min_pixels'], config['max_pixels']
+ )
+ output = {
+ 'prompt': prompt,
+ 'response': response,
+ }
+ else:
+ st.image(processed_image, width=500)
+
+ # Process results
+ if output:
+ process_and_display_results(output, processed_image, config)
+
+if __name__ == "__main__":
+ main()
diff --git a/demo/demo_vllm.py b/demo/demo_vllm.py
new file mode 100644
index 0000000000000000000000000000000000000000..28542ff90a279e6a80979f30592b3353fade82a5
--- /dev/null
+++ b/demo/demo_vllm.py
@@ -0,0 +1,42 @@
+import argparse
+import os
+
+from openai import OpenAI
+from transformers.utils.versions import require_version
+from PIL import Image
+import io
+import base64
+from dots_ocr.utils import dict_promptmode_to_prompt
+from dots_ocr.model.inference import inference_with_vllm
+
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--ip", type=str, default="localhost")
+parser.add_argument("--port", type=str, default="8000")
+parser.add_argument("--model_name", type=str, default="model")
+parser.add_argument("--prompt_mode", type=str, default="prompt_layout_all_en")
+
+args = parser.parse_args()
+
+require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")
+
+
+def main():
+ addr = f"http://{args.ip}:{args.port}/v1"
+ image_path = "demo/demo_image1.jpg"
+ prompt = dict_promptmode_to_prompt[args.prompt_mode]
+ image = Image.open(image_path)
+ response = inference_with_vllm(
+ image,
+ prompt,
+ ip=args.ip,
+ port=args.port,
+ temperature=0.1,
+ top_p=0.9,
+ model_name=args.model_name,
+ )
+ print(f"response: {response}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/demo/launch_model_vllm.sh b/demo/launch_model_vllm.sh
new file mode 100644
index 0000000000000000000000000000000000000000..6df89bdbc383ad93a8a7d6253e9af7d5995d36fe
--- /dev/null
+++ b/demo/launch_model_vllm.sh
@@ -0,0 +1,17 @@
+# download model to /path/to/model
+if [ -z "$NODOWNLOAD" ]; then
+ python3 tools/download_model.py
+fi
+
+# register model to vllm
+hf_model_path=./weights/DotsOCR # Path to your downloaded model weights
+export PYTHONPATH=$(dirname "$hf_model_path"):$PYTHONPATH
+sed -i '/^from vllm\.entrypoints\.cli\.main import main$/a\
+from DotsOCR import modeling_dots_ocr_vllm' `which vllm`
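+# Optionally verify the patch took effect (the same check docker/docker-compose.yml runs):
+# grep -A 1 'from vllm.entrypoints.cli.main import main' "$(which vllm)"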
+
+# launch vllm server
+model_name=model
+CUDA_VISIBLE_DEVICES=0 vllm serve ${hf_model_path} --tensor-parallel-size 1 --gpu-memory-utilization 0.95 --chat-template-content-format string --served-model-name ${model_name} --trust-remote-code
+
+# # run python demo after launch vllm server
+# python demo/demo_vllm.py
\ No newline at end of file
diff --git a/docker/Dockerfile b/docker/Dockerfile
new file mode 100644
index 0000000000000000000000000000000000000000..20bff5c10b7ad894dfb421bbcdc263754a3dd574
--- /dev/null
+++ b/docker/Dockerfile
@@ -0,0 +1,4 @@
+FROM vllm/vllm-openai:v0.9.1
+
+RUN pip3 install flash_attn==2.8.0.post2
+RUN pip3 install transformers==4.51.3
\ No newline at end of file
diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml
new file mode 100644
index 0000000000000000000000000000000000000000..ba3ae2b20a1f71b45217784d06a7ff80519e6ccd
--- /dev/null
+++ b/docker/docker-compose.yml
@@ -0,0 +1,44 @@
+version: '3.8'
+
+services:
+ dots-ocr-server:
+ image: dots-ocr:latest
+ container_name: dots-ocr-container
+ ports:
+ - "8000:8000"
+ volumes:
+ #download model to local,model url:https://www.modelscope.cn/models/rednote-hilab/dots.ocr
+ - ./model/dots.ocr:/workspace/weights/DotsOCR
+ environment:
+ - PYTHONPATH=/workspace/weights:$PYTHONPATH
+ deploy:
+ resources:
+ reservations:
+ devices:
+ - capabilities: [gpu]
+ device_ids: ['0']
+ entrypoint: /bin/bash
+ command:
+ - -c
+ - |
+ set -ex;
+ echo '--- Starting setup and server ---';
+ echo 'Modifying vllm entrypoint...';
+ # This sed command patches the vllm entrypoint script to import the custom modeling code.
+ sed -i '/^from vllm\.entrypoints\.cli\.main import main/a from DotsOCR import modeling_dots_ocr_vllm' $(which vllm) && \
+ echo 'vllm script after patch:';
+ # Show the patched part of the vllm script for verification.
+ grep -A 1 'from vllm.entrypoints.cli.main import main' $(which vllm) && \
+ echo 'Starting server...';
+ # Use 'exec' to replace the current shell process with the vllm server,
+ # ensuring logs are properly forwarded to Docker's standard output.
+ exec vllm serve /workspace/weights/DotsOCR \
+ --tensor-parallel-size 1 \
+ --gpu-memory-utilization 0.8 \
+ --chat-template-content-format string \
+ --served-model-name dotsocr-model \
+ --trust-remote-code
diff --git a/dots.ocr LICENSE AGREEMENT b/dots.ocr LICENSE AGREEMENT
new file mode 100644
index 0000000000000000000000000000000000000000..8823e239815d17b58f43942649a92c19d38295a7
--- /dev/null
+++ b/dots.ocr LICENSE AGREEMENT
@@ -0,0 +1,109 @@
+dots.ocr LICENSE AGREEMENT
+
+Effective Date: [August 8, 2025]
+
+Copyright Holder: [Xingyin Information Technology (Shanghai) Co., Ltd]
+
+This License Agreement (“Agreement”) governs Your use, reproduction, modification, and distribution of dots.ocr (the "Model Materials"). This Agreement is designed to maximize the openness and use of the Model Materials while addressing the unique legal, ethical, and technical challenges posed by large language models.
+
+WHEREAS, Licensor has developed the dots.ocr document parsing model and intends to distribute the Model Materials under an open‑source framework;
+WHEREAS, traditional open-source licenses (e.g., the MIT License) may not fully address the inherent complexities of document parsing models, namely their multiple components (code, weights, training data), potential ethical risks, data‑governance issues, and intellectual‑property and liability questions regarding AI‑generated content;
+WHEREAS, Licensor seeks to provide a legal framework that ensures maximum access to and use of the Model Materials while clearly defining the rights, obligations, and liabilities of Licensee;
+
+THEREFORE, the parties agree that, subject to the MIT License, they shall be bound by the following terms and conditions:
+
+1. Definitions and Interpretation
+Purpose: To define key terms used in this Agreement, particularly "Model Materials," ensuring clarity of the license scope beyond traditional software code. To clarify the order of precedence between this Agreement and the MIT License to avoid conflict.
+
+1.1 “Licensor” shall mean the entity providing the Model Materials under this Agreement, namely [Xingyin Information Technology (Shanghai) Co., Ltd].
+
+1.2 “Licensee” or "You" shall mean any individual or entity exercising permissions granted by this Agreement.
+
+1.3 “Model Materials” shall mean all materials provided by Licensor under this Agreement, including but not limited to:
+ (a) one or more machine‑learning models, including architecture and trained parameters (i.e., model weights);
+ (b) all associated preprocessing, training, inference, and fine‑tuning code;
+ (c) training datasets and evaluation scripts (or their detailed descriptions and access mechanisms); and
+ (d) any accompanying documentation, metadata, and tools.
+The above Model Materials shall be subject to the content published on the Licensor’s website or GitHub repository at https://github.com/rednote-hilab/dots.ocr.
+
+1.4 “Outputs” shall mean any content generated through the use of the Model Materials, such as text, tables, code, layout information, and formulas extracted from documents.
+
+1.5 “MIT License” shall mean The MIT Open Source License published by the Massachusetts Institute of Technology.
+
+1.6 Priority of Agreement. In the event of any conflict or inconsistency between this Agreement and the MIT License, the terms of the MIT License shall prevail. However, if the terms of the MIT License are ambiguous or silent on a particular matter, the provisions of this Agreement shall apply and supplement the MIT License.
+
+2. Grant of Rights and Scope of Use
+
+Purpose: To grant broad, permissive rights to the Licensee for the Model Materials—including code, weights, data, and documentation—to ensure maximum openness and flexibility while clarifying the free use of model-generated content. Additionally, it clarifies the feasibility of transitioning from open-source to commercial‑use and the use of OpenAPI interfaces.
+
+2.1 Grant of Copyright License. Subject to Licensee's compliance with this Agreement, Licensor hereby grants Licensee a perpetual, worldwide, non‑exclusive, no-charge, royalty‑free copyright license to use (run or test), reproduce, modify, create derivative works of, merge, publish, distribute the Model Materials; sublicense and/or sell copies of the Model Materials or any derivative works thereof; and incorporate the unmodified or modified Model Materials into proprietary products or services, including for commercial purposes, software‑as‑a‑service (SaaS) offerings, or via OpenAPI or other interfaces.
+
+2.2 Fundamental Capabilities. The Model Materials only provide the fundamental model’s capabilities. Licensees may develop derivative AI applications or undertake task‑specific training thereon.
+
+2.3 From Open Source to Commercial Use. The open-source release does not preclude Licensor’s commercial exploitation of the Model Materials, in whole or in part. Any such commercial use shall, at that time, be subject to license agreements between Licensor and applicable users.
+
+2.4 API‑Service Exception. Licensees who access the Model Materials through API calls or provide model services via API interfaces (without directly distributing model weights) shall not be subject to this Agreement unless otherwise expressly agreed. Instead, such use shall be governed by the API terms of use published by Licensor (if any).
+
+3. Acceptable Use Policy and Prohibited Uses
+
+3.1 Responsible Use. Licensee must use the Model Materials in a responsible, ethical, and lawful manner, in compliance with all applicable laws, regulations, industry standards, and best practices.
+
+3.2 Enterprise On‑Premises Deployment. The Licensee may deploy the Model Materials in closed‑source, on‑premises enterprise environments.
+
+3.3 Prohibited Uses. Any breach of the prohibitions below will result in the automatic termination of all licenses granted under this Agreement. Licensee agrees not to use the Model Materials or any derivative works thereof, in connection with:
+(a) Identification and Utilization of Illegal/Harmful Content: Includes identifying graphic/text materials used for counterfeiting certificates/invoices, perpetrating fraud, or launching cyberattacks; or processing images containing illegal content such as violence, criminal activities, disinformation, or child exploitation.
+(b) Privacy Infringement and Discriminatory Practices: Extracting personal sensitive information (e.g., ID numbers, medical records, biometric data) or protected characteristics (e.g., race, gender) from images without legal authorization or consent, for purposes of privacy violation, automated discriminatory decision-making, or harassment.
+(c) Copyright Restrictions: Licensees shall not use the tool for unauthorized digitization of publications/document scanning or bulk scraping of content. Any use involving publications or other copyright-protected materials must first obtain relevant permissions.
+
+4. Intellectual Property Ownership and Contributions
+
+4.1 Licensor's Copyright Reservation. Licensor reserves all right, title, and interest in and to the Model Materials (including the model architecture, parameters, code, and original training data), except as expressly licensed herein. The original copyright of the Model Materials belongs to the Licensor.
+
+4.2 Patent License. Subject to the terms and conditions of this Agreement, Licensor hereby grants Licensee a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Model Materials, where such license applies only to those patent claims licensable by the Licensor that are necessarily infringed by its contribution(s).
+If Licensee institutes patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Model Materials constitute direct or contributory patent infringement, then any patent licenses granted under this License for the Model Materials shall terminate as of the date such litigation is asserted or filed.
+
+4.3 Outputs: The Outputs generated through the use of the Model Materials generally refer to text, tables, layouts, and other content extracted from documents or images. The extracted content itself does not generate new intellectual property rights, and all intellectual property remains with the original authors or copyright holders. The Licensee is responsible for due diligence regarding the legality of the Outputs, particularly where the content extracted by the OCR model may be substantially similar to existing copyrighted works, which could present intellectual property infringement risks. The Licensor assumes no liability for such infringements.
+
+4.4 Trademarks. Nothing in this License permits Licensee to make use of Licensor’s trademarks, trade names, logos (e.g., “rednote,” “Xiaohongshu,” “dots.ocr”) or to otherwise suggest endorsement or misrepresent the relationship between the parties, unless Licensor’s prior written approval is granted.
+
+5. Data Governance, Privacy, and Security
+
+5.1 Data Quality and Bias. Licensee shall use training data from lawful sources and is encouraged to conduct due diligence before deploying the Model Materials and to take reasonable steps to mitigate any known biases in its training data or applications.
+
+5.2 Privacy Protection.
+ (a) Sensitive‑Data Restrictions. It is prohibited to use the Model Materials to process, extract, or infer sensitive personal data protected under specific laws (such as GDPR or HIPAA), particularly when dealing with documents containing personally identifiable information (such as ID numbers, health data, financial information, etc.), unless Licensee has obtained all necessary consents, lawful basis, or authorizations, and has implemented adequate anonymization, pseudonymization, or other privacy-enhancing technologies.
+ (b) Data Minimization and Purpose Limitation. The Licensee shall follow the principle of data minimization when using the OCR Model, processing only the user data necessary for specific, explicit, and lawful purposes. Specifically, the OCR Model should avoid processing unnecessary sensitive data and ensure compliance with applicable privacy protection laws during data handling.
+ (c) Transparency. Licensee shall provide clear and transparent privacy policies and terms of use when processing user data, particularly during document scanning and information extraction.
+
+5.3 Security Measures. Licensee shall implement appropriate technical and administrative safeguards to protect the Model Materials and any associated data against unauthorized access, disclosure, alteration, or destruction. Such measures may include, but are not limited to, encryption, access controls, logging, and audit trails.
+
+5.4 Further Training. Licensee may only use user‑provided input or Outputs for training, fine-tuning, or improving other AI models if it has obtained the specific and informed consent of data subjects.
+
+6. Disclaimer of Warranty and Limitation of Liability
+
+6.1 “AS IS” Basis. Unless required by applicable law, the Model Materials are provided on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE. Licensee is solely responsible for determining the appropriateness of using or redistributing the Model Materials and assumes any risks associated with the exercise of permissions under this License. Licensor does not provide any warranty of non-infringement but represents that no infringing code has been knowingly included.
+
+6.2 Outputs Disclaimer. Because the Model Materials are a neutral technology, Licensor disclaims all liability for the accuracy, completeness, reliability, safety, legality, or suitability of any Outputs. The Licensee is solely responsible for verifying the accuracy and appropriateness of AI-generated content and shall provide appropriate disclosures when publishing or relying upon such content.
+
+6.3 Limitation of Liability and Recourse. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, shall Licensor or contributors be liable for any claims or damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Model Materials (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if Licensor has been advised of the possibility of such damages. If such losses are incurred, recourse may be sought against the Licensee responsible for causing the loss.
+
+6.4 Content‑Filtering Disclaimer. Although the Model Materials may include content‑filtering mechanisms, Licensor makes no warranties of any kind regarding the stability, quality, accuracy, completeness, or any specific outcome of Outputs. Licensee is solely responsible for reviewing, verifying, and performing quality control on Outputs and assumes all associated risks and liabilities.
+
+7. Attribution and License Reservation
+
+7.1 License. When distributing or redistributing the Model Materials, Licensee must give any other recipients of the Model Materials a copy of this Agreement.
+
+7.2 Copyright and Notices. When distributing any part of the Model Materials, Licensee must retain all copyright, patent, trademark, and attribution notices included in the Model Materials.
+
+7.3 Attribution. Licensee is encouraged to prominently display the name of Licensor and the Model Materials in any public statements, products, or services that contain the Model Materials (or any derivative works thereof), to promote transparency and community trust. If Licensee distributes modified weights or fine‑tuned models based on the Model Materials, Licensee must prominently display the following statement in the related website or documentation: “Built with dots.ocr.”
+
+8. Governing Law and Dispute Resolution
+
+8.1 Governing Law. This Agreement shall be governed by and construed in accordance with the laws of the People’s Republic of China, without regard to its conflict of laws principles.
+
+8.2 Dispute Resolution. Any dispute, claim, or disagreement arising out of or relating to this Agreement shall first be resolved through amicable consultation. If such consultation fails, the dispute shall be submitted to the Hangzhou Arbitration Commission for arbitration. The arbitration shall be conducted in accordance with the laws of China, and the place of arbitration shall be [Hangzhou, China]. The arbitral award shall be final and binding upon both parties.
+
+9. Regulatory Compliance Amendments
+In the event that any part of this Agreement becomes invalid or requires adjustment due to changes in applicable laws or regulations, Licensor reserves the right to issue a revised version of this Agreement. Licensee shall migrate to the new version within [e.g., ninety (90)] days of its release; otherwise, all rights granted under this Agreement shall automatically terminate.
+
+10. Security Reporting
+Any Licensee who discovers a security vulnerability in the Model Materials may report it to Licensor via dots-feedback@xiaohongshu.com. Licensee shall not disclose vulnerability details until Licensor issues an official remediation, unless otherwise required by law.
\ No newline at end of file
diff --git a/dots_ocr/__init__.py b/dots_ocr/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..34c4e1353a91761cfcb92a9adf5cbe8a402ed78f
--- /dev/null
+++ b/dots_ocr/__init__.py
@@ -0,0 +1 @@
+from .parser import DotsOCRParser
\ No newline at end of file
diff --git a/dots_ocr/model/inference.py b/dots_ocr/model/inference.py
new file mode 100644
index 0000000000000000000000000000000000000000..64662023512fec5358326dc650be4564f4d48fd9
--- /dev/null
+++ b/dots_ocr/model/inference.py
@@ -0,0 +1,50 @@
+import os
+
+from dots_ocr.utils.image_utils import PILimage_to_base64
+from openai import OpenAI
+
+
+def inference_with_vllm(
+ image,
+ prompt,
+ ip="localhost",
+ port=8000,
+ temperature=0.1,
+ top_p=0.9,
+ max_completion_tokens=32768,
+ model_name='model',
+ ):
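+    """Send one image plus a prompt to a vLLM OpenAI-compatible server and return the text response.
+
+    A usage sketch (assumes a vLLM server is already serving the model at
+    http://localhost:8000/v1; the file name is hypothetical):
+        resp = inference_with_vllm(Image.open("page.png"), "Extract the text.")
+    """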
+
+    addr = f"http://{ip}:{port}/v1"
+    client = OpenAI(api_key=os.environ.get("API_KEY", "0"), base_url=addr)
+ messages = []
+ messages.append(
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image_url",
+ "image_url": {"url": PILimage_to_base64(image)},
+ },
+ {"type": "text", "text": f"<|img|><|imgpad|><|endofimg|>{prompt}"} # if no "<|img|><|imgpad|><|endofimg|>" here,vllm v1 will add "\n" here
+ ],
+ }
+ )
+ try:
+ response = client.chat.completions.create(
+ messages=messages,
+ model=model_name,
+ max_completion_tokens=max_completion_tokens,
+ temperature=temperature,
+ top_p=top_p)
+ response = response.choices[0].message.content
+ return response
+    except Exception as e:  # the OpenAI client raises openai.OpenAIError subclasses, not requests exceptions
+        print(f"request error: {e}")
+        return None
+
diff --git a/dots_ocr/parser.py b/dots_ocr/parser.py
new file mode 100644
index 0000000000000000000000000000000000000000..cc11ff21d4f914ef7f02262e80ad9923ee22f19a
--- /dev/null
+++ b/dots_ocr/parser.py
@@ -0,0 +1,428 @@
+import os
+import json
+from tqdm import tqdm
+from multiprocessing.pool import ThreadPool, Pool
+import argparse
+
+
+from dots_ocr.model.inference import inference_with_vllm
+from dots_ocr.utils.consts import image_extensions, MIN_PIXELS, MAX_PIXELS
+from dots_ocr.utils.image_utils import get_image_by_fitz_doc, fetch_image, smart_resize
+from dots_ocr.utils.doc_utils import fitz_doc_to_image, load_images_from_pdf
+from dots_ocr.utils.prompts import dict_promptmode_to_prompt
+from dots_ocr.utils.layout_utils import post_process_output, draw_layout_on_image, pre_process_bboxes
+from dots_ocr.utils.format_transformer import layoutjson2md
+
+
+class DotsOCRParser:
+ """
+ parse image or pdf file
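+
+    Usage sketch (assumes a vLLM server is already running on localhost:8000;
+    the file name is hypothetical):
+        parser = DotsOCRParser(ip="localhost", port=8000)
+        results = parser.parse_file("sample.pdf", prompt_mode="prompt_layout_all_en")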
+ """
+
+ def __init__(self,
+ ip='localhost',
+ port=8000,
+ model_name='model',
+ temperature=0.1,
+ top_p=1.0,
+ max_completion_tokens=16384,
+ num_thread=64,
+ dpi = 200,
+ output_dir="./output",
+ min_pixels=None,
+ max_pixels=None,
+ use_hf=False,
+ ):
+ self.dpi = dpi
+
+ # default args for vllm server
+ self.ip = ip
+ self.port = port
+ self.model_name = model_name
+ # default args for inference
+ self.temperature = temperature
+ self.top_p = top_p
+ self.max_completion_tokens = max_completion_tokens
+ self.num_thread = num_thread
+ self.output_dir = output_dir
+ self.min_pixels = min_pixels
+ self.max_pixels = max_pixels
+
+ self.use_hf = use_hf
+ if self.use_hf:
+ self._load_hf_model()
+ print(f"use hf model, num_thread will be set to 1")
+ else:
+ print(f"use vllm model, num_thread will be set to {self.num_thread}")
+ assert self.min_pixels is None or self.min_pixels >= MIN_PIXELS
+ assert self.max_pixels is None or self.max_pixels <= MAX_PIXELS
+
+ def _load_hf_model(self):
+ import torch
+ from transformers import AutoModelForCausalLM, AutoProcessor, AutoTokenizer
+ from qwen_vl_utils import process_vision_info
+
+ model_path = "./weights/DotsOCR"
+ self.model = AutoModelForCausalLM.from_pretrained(
+ model_path,
+ attn_implementation="flash_attention_2",
+ torch_dtype=torch.bfloat16,
+ device_map="auto",
+ trust_remote_code=True
+ )
+        self.processor = AutoProcessor.from_pretrained(model_path, trust_remote_code=True, use_fast=True)
+ self.process_vision_info = process_vision_info
+
+ def _inference_with_hf(self, image, prompt):
+ messages = [
+ {
+ "role": "user",
+ "content": [
+ {
+ "type": "image",
+ "image": image
+ },
+ {"type": "text", "text": prompt}
+ ]
+ }
+ ]
+
+ # Preparation for inference
+ text = self.processor.apply_chat_template(
+ messages,
+ tokenize=False,
+ add_generation_prompt=True
+ )
+ image_inputs, video_inputs = self.process_vision_info(messages)
+ inputs = self.processor(
+ text=[text],
+ images=image_inputs,
+ videos=video_inputs,
+ padding=True,
+ return_tensors="pt",
+ )
+
+ inputs = inputs.to("cuda")
+
+ # Inference: Generation of the output
+ generated_ids = self.model.generate(**inputs, max_new_tokens=24000)
+ generated_ids_trimmed = [
+ out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
+ ]
+ response = self.processor.batch_decode(
+ generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
+ )[0]
+ return response
+
+ def _inference_with_vllm(self, image, prompt):
+ response = inference_with_vllm(
+ image,
+ prompt,
+ model_name=self.model_name,
+ ip=self.ip,
+ port=self.port,
+ temperature=self.temperature,
+ top_p=self.top_p,
+ max_completion_tokens=self.max_completion_tokens,
+ )
+ return response
+
+ def get_prompt(self, prompt_mode, bbox=None, origin_image=None, image=None, min_pixels=None, max_pixels=None):
+ prompt = dict_promptmode_to_prompt[prompt_mode]
+ if prompt_mode == 'prompt_grounding_ocr':
+ assert bbox is not None
+ bboxes = [bbox]
+ bbox = pre_process_bboxes(origin_image, bboxes, input_width=image.width, input_height=image.height, min_pixels=min_pixels, max_pixels=max_pixels)[0]
+ prompt = prompt + str(bbox)
+ return prompt
+
+ # def post_process_results(self, response, prompt_mode, save_dir, save_name, origin_image, image, min_pixels, max_pixels)
+ def _parse_single_image(
+ self,
+ origin_image,
+ prompt_mode,
+ save_dir,
+ save_name,
+ source="image",
+ page_idx=0,
+ bbox=None,
+ fitz_preprocess=False,
+ ):
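+        # Run one prompt over one page image: resize to the model's input
+        # resolution, build the prompt, run inference, then post-process and
+        # save the JSON / Markdown / layout-image artifacts into save_dir.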
+ min_pixels, max_pixels = self.min_pixels, self.max_pixels
+ if prompt_mode == "prompt_grounding_ocr":
+ min_pixels = min_pixels or MIN_PIXELS # preprocess image to the final input
+ max_pixels = max_pixels or MAX_PIXELS
+        if min_pixels is not None: assert min_pixels >= MIN_PIXELS, f"min_pixels should be >= {MIN_PIXELS}"
+        if max_pixels is not None: assert max_pixels <= MAX_PIXELS, f"max_pixels should be <= {MAX_PIXELS}"
+
+ if source == 'image' and fitz_preprocess:
+ image = get_image_by_fitz_doc(origin_image, target_dpi=self.dpi)
+ image = fetch_image(image, min_pixels=min_pixels, max_pixels=max_pixels)
+ else:
+ image = fetch_image(origin_image, min_pixels=min_pixels, max_pixels=max_pixels)
+ input_height, input_width = smart_resize(image.height, image.width)
+ prompt = self.get_prompt(prompt_mode, bbox, origin_image, image, min_pixels=min_pixels, max_pixels=max_pixels)
+ if self.use_hf:
+ response = self._inference_with_hf(image, prompt)
+ else:
+ response = self._inference_with_vllm(image, prompt)
+ result = {'page_no': page_idx,
+ "input_height": input_height,
+ "input_width": input_width
+ }
+ if source == 'pdf':
+ save_name = f"{save_name}_page_{page_idx}"
+ if prompt_mode in ['prompt_layout_all_en', 'prompt_layout_only_en', 'prompt_grounding_ocr']:
+ cells, filtered = post_process_output(
+ response,
+ prompt_mode,
+ origin_image,
+ image,
+ min_pixels=min_pixels,
+ max_pixels=max_pixels,
+ )
+            if filtered and prompt_mode != 'prompt_layout_only_en':  # model output failed JSON parsing; fall back to the cleaned plain-text output
+ json_file_path = os.path.join(save_dir, f"{save_name}.json")
+ with open(json_file_path, 'w', encoding="utf-8") as w:
+ json.dump(response, w, ensure_ascii=False)
+
+ image_layout_path = os.path.join(save_dir, f"{save_name}.jpg")
+ origin_image.save(image_layout_path)
+ result.update({
+ 'layout_info_path': json_file_path,
+ 'layout_image_path': image_layout_path,
+ })
+
+ md_file_path = os.path.join(save_dir, f"{save_name}.md")
+ with open(md_file_path, "w", encoding="utf-8") as md_file:
+ md_file.write(cells)
+ result.update({
+ 'md_content_path': md_file_path
+ })
+ result.update({
+ 'filtered': True
+ })
+ else:
+ try:
+ image_with_layout = draw_layout_on_image(origin_image, cells)
+ except Exception as e:
+ print(f"Error drawing layout on image: {e}")
+ image_with_layout = origin_image
+
+ json_file_path = os.path.join(save_dir, f"{save_name}.json")
+ with open(json_file_path, 'w', encoding="utf-8") as w:
+ json.dump(cells, w, ensure_ascii=False)
+
+ image_layout_path = os.path.join(save_dir, f"{save_name}.jpg")
+ image_with_layout.save(image_layout_path)
+ result.update({
+ 'layout_info_path': json_file_path,
+ 'layout_image_path': image_layout_path,
+ })
+ if prompt_mode != "prompt_layout_only_en": # no text md when detection only
+ md_content = layoutjson2md(origin_image, cells, text_key='text')
+                    md_content_no_hf = layoutjson2md(origin_image, cells, text_key='text', no_page_hf=True)  # clean output without page headers/footers, used for metrics such as omnidocbench, olmbench
+ md_file_path = os.path.join(save_dir, f"{save_name}.md")
+ with open(md_file_path, "w", encoding="utf-8") as md_file:
+ md_file.write(md_content)
+ md_nohf_file_path = os.path.join(save_dir, f"{save_name}_nohf.md")
+ with open(md_nohf_file_path, "w", encoding="utf-8") as md_file:
+ md_file.write(md_content_no_hf)
+ result.update({
+ 'md_content_path': md_file_path,
+ 'md_content_nohf_path': md_nohf_file_path,
+ })
+ else:
+ image_layout_path = os.path.join(save_dir, f"{save_name}.jpg")
+ origin_image.save(image_layout_path)
+ result.update({
+ 'layout_image_path': image_layout_path,
+ })
+
+ md_content = response
+ md_file_path = os.path.join(save_dir, f"{save_name}.md")
+ with open(md_file_path, "w", encoding="utf-8") as md_file:
+ md_file.write(md_content)
+ result.update({
+ 'md_content_path': md_file_path,
+ })
+
+ return result
+
+ def parse_image(self, input_path, filename, prompt_mode, save_dir, bbox=None, fitz_preprocess=False):
+ origin_image = fetch_image(input_path)
+ result = self._parse_single_image(origin_image, prompt_mode, save_dir, filename, source="image", bbox=bbox, fitz_preprocess=fitz_preprocess)
+ result['file_path'] = input_path
+ return [result]
+
+ def parse_pdf(self, input_path, filename, prompt_mode, save_dir):
+ print(f"loading pdf: {input_path}")
+ images_origin = load_images_from_pdf(input_path, dpi=self.dpi)
+ total_pages = len(images_origin)
+ tasks = [
+ {
+ "origin_image": image,
+ "prompt_mode": prompt_mode,
+ "save_dir": save_dir,
+ "save_name": filename,
+ "source":"pdf",
+ "page_idx": i,
+ } for i, image in enumerate(images_origin)
+ ]
+
+ def _execute_task(task_args):
+ return self._parse_single_image(**task_args)
+
+ if self.use_hf:
+ num_thread = 1
+ else:
+ num_thread = min(total_pages, self.num_thread)
+ print(f"Parsing PDF with {total_pages} pages using {num_thread} threads...")
+
+ results = []
+ with ThreadPool(num_thread) as pool:
+ with tqdm(total=total_pages, desc="Processing PDF pages") as pbar:
+ for result in pool.imap_unordered(_execute_task, tasks):
+ results.append(result)
+ pbar.update(1)
+
+ results.sort(key=lambda x: x["page_no"])
+ for i in range(len(results)):
+ results[i]['file_path'] = input_path
+ return results
+
+ def parse_file(self,
+ input_path,
+ output_dir="",
+ prompt_mode="prompt_layout_all_en",
+ bbox=None,
+ fitz_preprocess=False
+ ):
+ output_dir = output_dir or self.output_dir
+ output_dir = os.path.abspath(output_dir)
+ filename, file_ext = os.path.splitext(os.path.basename(input_path))
+ save_dir = os.path.join(output_dir, filename)
+ os.makedirs(save_dir, exist_ok=True)
+
+ if file_ext == '.pdf':
+ results = self.parse_pdf(input_path, filename, prompt_mode, save_dir)
+ elif file_ext in image_extensions:
+ results = self.parse_image(input_path, filename, prompt_mode, save_dir, bbox=bbox, fitz_preprocess=fitz_preprocess)
+ else:
+ raise ValueError(f"file extension {file_ext} not supported, supported extensions are {image_extensions} and pdf")
+
+ print(f"Parsing finished, results saving to {save_dir}")
+ with open(os.path.join(output_dir, os.path.basename(filename)+'.jsonl'), 'w', encoding="utf-8") as w:
+ for result in results:
+ w.write(json.dumps(result, ensure_ascii=False) + '\n')
+
+ return results
+
+
+
+def main():
+ prompts = list(dict_promptmode_to_prompt.keys())
+ parser = argparse.ArgumentParser(
+ description="dots.ocr Multilingual Document Layout Parser",
+ )
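+    # Example invocation (hypothetical file; requires a running vLLM server
+    # unless --use_hf is given):
+    #   python dots_ocr/parser.py demo.pdf --output ./output --prompt prompt_layout_all_en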
+
+ parser.add_argument(
+ "input_path", type=str,
+ help="Input PDF/image file path"
+ )
+
+ parser.add_argument(
+ "--output", type=str, default="./output",
+ help="Output directory (default: ./output)"
+ )
+
+ parser.add_argument(
+ "--prompt", choices=prompts, type=str, default="prompt_layout_all_en",
+ help="prompt to query the model, different prompts for different tasks"
+ )
+    parser.add_argument(
+        '--bbox',
+        type=int,
+        nargs=4,
+        metavar=('x1', 'y1', 'x2', 'y2'),
+        help='bounding box to OCR; required when using prompt_grounding_ocr'
+    )
+    parser.add_argument(
+        "--ip", type=str, default="localhost",
+        help="vLLM server IP address (default: localhost)"
+    )
+    parser.add_argument(
+        "--port", type=int, default=8000,
+        help="vLLM server port (default: 8000)"
+    )
+    parser.add_argument(
+        "--model_name", type=str, default="model",
+        help="model name registered with the vLLM server (default: model)"
+    )
+    parser.add_argument(
+        "--temperature", type=float, default=0.1,
+        help="sampling temperature (default: 0.1)"
+    )
+    parser.add_argument(
+        "--top_p", type=float, default=1.0,
+        help="nucleus sampling top_p (default: 1.0)"
+    )
+    parser.add_argument(
+        "--dpi", type=int, default=200,
+        help="rendering DPI for PDF pages (default: 200)"
+    )
+    parser.add_argument(
+        "--max_completion_tokens", type=int, default=16384,
+        help="maximum number of tokens to generate (default: 16384)"
+    )
+    parser.add_argument(
+        "--num_thread", type=int, default=16,
+        help="number of worker threads for multi-page PDFs (default: 16)"
+    )
+    parser.add_argument(
+        "--no_fitz_preprocess", action='store_true',
+        help="disable the fitz DPI-upsampling pipeline for image inputs; that pipeline helps with images rendered at low DPI but may increase computational cost"
+    )
+    parser.add_argument(
+        "--min_pixels", type=int, default=None,
+        help="minimum number of input pixels after resizing"
+    )
+    parser.add_argument(
+        "--max_pixels", type=int, default=None,
+        help="maximum number of input pixels after resizing"
+    )
+    parser.add_argument(
+        "--use_hf", action='store_true',
+        help="run inference locally with HF transformers instead of a vLLM server"
+    )
+ args = parser.parse_args()
+
+ dots_ocr_parser = DotsOCRParser(
+ ip=args.ip,
+ port=args.port,
+ model_name=args.model_name,
+ temperature=args.temperature,
+ top_p=args.top_p,
+ max_completion_tokens=args.max_completion_tokens,
+ num_thread=args.num_thread,
+ dpi=args.dpi,
+ output_dir=args.output,
+ min_pixels=args.min_pixels,
+ max_pixels=args.max_pixels,
+ use_hf=args.use_hf,
+ )
+
+ fitz_preprocess = not args.no_fitz_preprocess
+ if fitz_preprocess:
+ print(f"Using fitz preprocess for image input, check the change of the image pixels")
+ result = dots_ocr_parser.parse_file(
+ args.input_path,
+ prompt_mode=args.prompt,
+ bbox=args.bbox,
+ fitz_preprocess=fitz_preprocess,
+ )
+
+
+
+if __name__ == "__main__":
+ main()
diff --git a/dots_ocr/utils/__init__.py b/dots_ocr/utils/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..85dbde987ba69ae34af3ab9abcd56f7757e6a3f2
--- /dev/null
+++ b/dots_ocr/utils/__init__.py
@@ -0,0 +1 @@
+from .prompts import dict_promptmode_to_prompt
\ No newline at end of file
diff --git a/dots_ocr/utils/consts.py b/dots_ocr/utils/consts.py
new file mode 100644
index 0000000000000000000000000000000000000000..7e119f8f00583232301cdf7970041a5d1037f8d0
--- /dev/null
+++ b/dots_ocr/utils/consts.py
@@ -0,0 +1,5 @@
+MIN_PIXELS=3136
+MAX_PIXELS=11289600
+IMAGE_FACTOR=28
+
+image_extensions = {'.jpg', '.jpeg', '.png'}
diff --git a/dots_ocr/utils/demo_utils/display.py b/dots_ocr/utils/demo_utils/display.py
new file mode 100644
index 0000000000000000000000000000000000000000..92a97b3059406ee84f6ac3d542557f87fa3621c0
--- /dev/null
+++ b/dots_ocr/utils/demo_utils/display.py
@@ -0,0 +1,61 @@
+import os
+from PIL import Image
+
+
+def is_valid_image_path(image_path):
+ """
+ Checks if the image path is valid.
+
+ Args:
+ image_path: The path to the image.
+
+ Returns:
+ bool: True if the path is valid, False otherwise.
+ """
+ if not os.path.exists(image_path):
+ return False
+
+ # Check if the file extension is one of the common image formats.
+ image_extensions = ['.jpg', '.jpeg', '.png', '.gif', '.bmp']
+ _, extension = os.path.splitext(image_path)
+ if extension.lower() in image_extensions:
+ return True
+ else:
+ return False
+
+
+def read_image(image_path, use_native=False):
+ """
+ Reads an image and resizes it while maintaining aspect ratio.
+
+ Args:
+ image_path: The path to the image.
+ use_native: If True, the max dimension of the original image is used as the max size.
+ If False, max size is set to 1024.
+
+ Returns:
+ tuple: (resized_image, original_width, original_height)
+ """
+    if is_valid_image_path(image_path):
+        image = Image.open(image_path)
+    else:
+        raise FileNotFoundError(f"{image_path}: Image path does not exist")
+
+ w, h = image.size
+ if use_native:
+ max_size = max(w, h)
+ else:
+ max_size = 1024
+
+ if w > h:
+ new_w = max_size
+ new_h = int(h * max_size / w)
+ else:
+ new_h = max_size
+ new_w = int(w * max_size / h)
+
+ image = image.resize((new_w, new_h))
+ return image, w, h
diff --git a/dots_ocr/utils/doc_utils.py b/dots_ocr/utils/doc_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..e044abdff214099de211d0892faa19fa30e57bbd
--- /dev/null
+++ b/dots_ocr/utils/doc_utils.py
@@ -0,0 +1,60 @@
+import fitz
+import numpy as np
+import enum
+from pydantic import BaseModel, Field
+from PIL import Image
+
+
+class SupportedPdfParseMethod(enum.Enum):
+ OCR = 'ocr'
+ TXT = 'txt'
+
+
+class PageInfo(BaseModel):
+ """The width and height of page
+ """
+ w: float = Field(description='the width of page')
+ h: float = Field(description='the height of page')
+
+
+def fitz_doc_to_image(doc, target_dpi=200, origin_dpi=None) -> Image.Image:
+    """Render a fitz (PyMuPDF) page to a PIL Image.
+
+    Args:
+        doc: a pymupdf page
+        target_dpi (int, optional): target rendering DPI. Defaults to 200.
+        origin_dpi (int, optional): original DPI of the source; currently unused.
+
+    Returns:
+        PIL.Image.Image: the rendered page
+    """
+ mat = fitz.Matrix(target_dpi / 72, target_dpi / 72)
+ pm = doc.get_pixmap(matrix=mat, alpha=False)
+
+ if pm.width > 4500 or pm.height > 4500:
+ mat = fitz.Matrix(72 / 72, 72 / 72) # use fitz default dpi
+ pm = doc.get_pixmap(matrix=mat, alpha=False)
+
+ image = Image.frombytes('RGB', (pm.width, pm.height), pm.samples)
+ return image
+
+
+def load_images_from_pdf(pdf_file, dpi=200, start_page_id=0, end_page_id=None) -> list:
+ images = []
+ with fitz.open(pdf_file) as doc:
+ pdf_page_num = doc.page_count
+ end_page_id = (
+ end_page_id
+ if end_page_id is not None and end_page_id >= 0
+ else pdf_page_num - 1
+ )
+ if end_page_id > pdf_page_num - 1:
+            print('end_page_id is out of range, clamping to the last page')
+ end_page_id = pdf_page_num - 1
+
+ for index in range(0, doc.page_count):
+ if start_page_id <= index <= end_page_id:
+ page = doc[index]
+ img = fitz_doc_to_image(page, target_dpi=dpi)
+ images.append(img)
+ return images
\ No newline at end of file
diff --git a/dots_ocr/utils/format_transformer.py b/dots_ocr/utils/format_transformer.py
new file mode 100644
index 0000000000000000000000000000000000000000..ada4c1c31fe7251dbf66e7e35c204520f8db23f1
--- /dev/null
+++ b/dots_ocr/utils/format_transformer.py
@@ -0,0 +1,206 @@
+import os
+import sys
+import json
+import re
+
+from PIL import Image
+from dots_ocr.utils.image_utils import PILimage_to_base64
+
+
+def has_latex_markdown(text: str) -> bool:
+ """
+ Checks if a string contains LaTeX markdown patterns.
+
+ Args:
+ text (str): The string to check.
+
+ Returns:
+ bool: True if LaTeX markdown is found, otherwise False.
+ """
+ if not isinstance(text, str):
+ return False
+
+ # Define regular expression patterns for LaTeX markdown
+ latex_patterns = [
+ r'\$\$.*?\$\$', # Block-level math formula $$...$$
+ r'\$[^$\n]+?\$', # Inline math formula $...$
+ r'\\begin\{.*?\}.*?\\end\{.*?\}', # LaTeX environment \begin{...}...\end{...}
+ r'\\[a-zA-Z]+\{.*?\}', # LaTeX command \command{...}
+ r'\\[a-zA-Z]+', # Simple LaTeX command \command
+ r'\\\[.*?\\\]', # Display math formula \[...\]
+ r'\\\(.*?\\\)', # Inline math formula \(...\)
+ ]
+
+ # Check if any of the patterns match
+ for pattern in latex_patterns:
+ if re.search(pattern, text, re.DOTALL):
+ return True
+
+ return False
+
+
+def clean_latex_preamble(latex_text: str) -> str:
+ """
+ Removes LaTeX preamble commands like document class and package imports.
+
+ Args:
+ latex_text (str): The original LaTeX text.
+
+ Returns:
+ str: The cleaned LaTeX text without preamble commands.
+ """
+ # Define patterns to be removed
+ patterns = [
+ r'\\documentclass\{[^}]+\}', # \documentclass{...}
+ r'\\usepackage\{[^}]+\}', # \usepackage{...}
+ r'\\usepackage\[[^\]]*\]\{[^}]+\}', # \usepackage[options]{...}
+ r'\\begin\{document\}', # \begin{document}
+ r'\\end\{document\}', # \end{document}
+ ]
+
+ # Apply each pattern to clean the text
+ cleaned_text = latex_text
+ for pattern in patterns:
+ cleaned_text = re.sub(pattern, '', cleaned_text, flags=re.IGNORECASE)
+
+ return cleaned_text
+
+
+def get_formula_in_markdown(text: str) -> str:
+ """
+ Formats a string containing a formula into a standard Markdown block.
+
+ Args:
+ text (str): The input string, potentially containing a formula.
+
+ Returns:
+ str: The formatted string, ready for Markdown rendering.
+ """
+ # Remove leading/trailing whitespace
+ text = text.strip()
+
+ # Check if it's already enclosed in $$
+ if text.startswith('$$') and text.endswith('$$'):
+ text_new = text[2:-2].strip()
+        if '$' not in text_new:
+ return f"$$\n{text_new}\n$$"
+ else:
+ return text
+
+ # Handle \[...\] format, convert to $$...$$
+ if text.startswith('\\[') and text.endswith('\\]'):
+ inner_content = text[2:-2].strip()
+ return f"$$\n{inner_content}\n$$"
+
+ # Check if it's enclosed in \[ \]
+ if len(re.findall(r'.*\\\[.*\\\].*', text)) > 0:
+ return text
+
+ # Handle inline formulas ($...$)
+ pattern = r'\$([^$]+)\$'
+ matches = re.findall(pattern, text)
+ if len(matches) > 0:
+ # It's an inline formula, return it as is
+ return text
+
+ # If no LaTeX markdown syntax is present, return directly
+ if not has_latex_markdown(text):
+ return text
+
+ # Handle unnecessary LaTeX formatting like \usepackage
+ if 'usepackage' in text:
+ text = clean_latex_preamble(text)
+
+ if text[0] == '`' and text[-1] == '`':
+ text = text[1:-1]
+
+ # Enclose the final text in a $$ block with newlines
+ text = f"$$\n{text}\n$$"
+ return text
+
+
+def clean_text(text: str) -> str:
+ """
+    Cleans text by trimming surrounding whitespace and stripping stray backticks around an inline formula.
+
+ Args:
+ text: The original text.
+
+ Returns:
+ str: The cleaned text.
+ """
+ if not text:
+ return ""
+
+ # Remove leading and trailing whitespace
+ text = text.strip()
+
+    # Strip a stray backtick pair wrapping an inline formula: `$...$` -> $...$
+ if text[:2] == '`$' and text[-2:] == '$`':
+ text = text[1:-1]
+
+ return text
+
+
+def layoutjson2md(image: Image.Image, cells: list, text_key: str = 'text', no_page_hf: bool = False) -> str:
+ """
+ Converts a layout JSON format to Markdown.
+
+ In the layout JSON, formulas are LaTeX, tables are HTML, and text is Markdown.
+
+ Args:
+ image: A PIL Image object.
+ cells: A list of dictionaries, each representing a layout cell.
+ text_key: The key for the text field in the cell dictionary.
+        no_page_hf: If True, skips page headers and footers.
+
+ Returns:
+ str: The text in Markdown format.
+ """
+ text_items = []
+
+ for i, cell in enumerate(cells):
+ x1, y1, x2, y2 = [int(coord) for coord in cell['bbox']]
+ text = cell.get(text_key, "")
+
+ if no_page_hf and cell['category'] in ['Page-header', 'Page-footer']:
+ continue
+
+ if cell['category'] == 'Picture':
+ image_crop = image.crop((x1, y1, x2, y2))
+ image_base64 = PILimage_to_base64(image_crop)
+            text_items.append(f"![]({image_base64})")  # embed the crop as a base64 data-URI image
+ elif cell['category'] == 'Formula':
+ text_items.append(get_formula_in_markdown(text))
+ else:
+ text = clean_text(text)
+ text_items.append(f"{text}")
+
+ markdown_text = '\n\n'.join(text_items)
+ return markdown_text
+
+
+def fix_streamlit_formulas(md: str) -> str:
+ """
+ Fixes the format of formulas in Markdown to ensure they display correctly in Streamlit.
+ It adds a newline after the opening $$ and before the closing $$ if they don't already exist.
+
+ Args:
+        md (str): The Markdown text to fix.
+
+ Returns:
+ str: The fixed Markdown text.
+ """
+
+ # This inner function will be used by re.sub to perform the replacement
+ def replace_formula(match):
+ content = match.group(1)
+ # If the content already has surrounding newlines, don't add more.
+ if content.startswith('\n'):
+ content = content[1:]
+ if content.endswith('\n'):
+ content = content[:-1]
+ return f'$$\n{content}\n$$'
+
+ # Use regex to find all $$....$$ patterns and replace them using the helper function.
+ return re.sub(r'\$\$(.*?)\$\$', replace_formula, md, flags=re.DOTALL)
diff --git a/dots_ocr/utils/image_utils.py b/dots_ocr/utils/image_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..caef7c3abbf8122e1aec79c7f9c32b550cc80fb5
--- /dev/null
+++ b/dots_ocr/utils/image_utils.py
@@ -0,0 +1,196 @@
+import math
+import base64
+from PIL import Image
+from typing import Tuple
+import os
+from dots_ocr.utils.consts import IMAGE_FACTOR, MIN_PIXELS, MAX_PIXELS
+from dots_ocr.utils.doc_utils import fitz_doc_to_image
+from io import BytesIO
+import fitz
+import requests
+import copy
+
+
+def round_by_factor(number: int, factor: int) -> int:
+ """Returns the closest integer to 'number' that is divisible by 'factor'."""
+ return round(number / factor) * factor
+
+
+def ceil_by_factor(number: int, factor: int) -> int:
+ """Returns the smallest integer greater than or equal to 'number' that is divisible by 'factor'."""
+ return math.ceil(number / factor) * factor
+
+
+def floor_by_factor(number: int, factor: int) -> int:
+ """Returns the largest integer less than or equal to 'number' that is divisible by 'factor'."""
+ return math.floor(number / factor) * factor
+
+
+def smart_resize(
+ height: int,
+ width: int,
+ factor: int = 28,
+ min_pixels: int = 3136,
+ max_pixels: int = 11289600,
+):
+ """Rescales the image so that the following conditions are met:
+
+ 1. Both dimensions (height and width) are divisible by 'factor'.
+
+ 2. The total number of pixels is within the range ['min_pixels', 'max_pixels'].
+
+ 3. The aspect ratio of the image is maintained as closely as possible.
+
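+    Example: smart_resize(1000, 800) -> (1008, 812); both dimensions are
+    snapped to the nearest multiple of 28, and 1008 * 812 pixels already
+    lies inside [min_pixels, max_pixels], so no further rescaling happens.
+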
+ """
+ if max(height, width) / min(height, width) > 200:
+ raise ValueError(
+ f"absolute aspect ratio must be smaller than 200, got {max(height, width) / min(height, width)}"
+ )
+ h_bar = max(factor, round_by_factor(height, factor))
+ w_bar = max(factor, round_by_factor(width, factor))
+ if h_bar * w_bar > max_pixels:
+ beta = math.sqrt((height * width) / max_pixels)
+ h_bar = max(factor, floor_by_factor(height / beta, factor))
+ w_bar = max(factor, floor_by_factor(width / beta, factor))
+ elif h_bar * w_bar < min_pixels:
+ beta = math.sqrt(min_pixels / (height * width))
+ h_bar = ceil_by_factor(height * beta, factor)
+ w_bar = ceil_by_factor(width * beta, factor)
+ if h_bar * w_bar > max_pixels: # max_pixels first to control the token length
+ beta = math.sqrt((h_bar * w_bar) / max_pixels)
+ h_bar = max(factor, floor_by_factor(h_bar / beta, factor))
+ w_bar = max(factor, floor_by_factor(w_bar / beta, factor))
+ return h_bar, w_bar
+
+
+
+def PILimage_to_base64(image, format='PNG'):
+ buffered = BytesIO()
+ image.save(buffered, format=format)
+ base64_str = base64.b64encode(buffered.getvalue()).decode('utf-8')
+ return f"data:image/{format.lower()};base64,{base64_str}"
+
+
+def to_rgb(pil_image: Image.Image) -> Image.Image:
+ if pil_image.mode == 'RGBA':
+ white_background = Image.new("RGB", pil_image.size, (255, 255, 255))
+ white_background.paste(pil_image, mask=pil_image.split()[3]) # Use alpha channel as mask
+ return white_background
+ else:
+ return pil_image.convert("RGB")
+
+
+# copy from https://github.com/QwenLM/Qwen2.5-VL/blob/main/qwen-vl-utils/src/qwen_vl_utils/vision_process.py
+def fetch_image(
+ image,
+ min_pixels=None,
+ max_pixels=None,
+ resized_height=None,
+ resized_width=None,
+ ) -> Image.Image:
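+    """Normalize a supported image input (PIL.Image, http(s) URL, file:// path,
+    data:image base64 URI, or local path) to an RGB PIL.Image, resizing it to
+    satisfy the pixel-count constraints when requested."""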
+ assert image is not None, f"image not found, maybe input format error: {image}"
+ image_obj = None
+ if isinstance(image, Image.Image):
+ image_obj = image
+ elif image.startswith("http://") or image.startswith("https://"):
+ # fix memory leak issue while using BytesIO
+ with requests.get(image, stream=True) as response:
+ response.raise_for_status()
+ with BytesIO(response.content) as bio:
+ image_obj = copy.deepcopy(Image.open(bio))
+ elif image.startswith("file://"):
+ image_obj = Image.open(image[7:])
+ elif image.startswith("data:image"):
+ if "base64," in image:
+ _, base64_data = image.split("base64,", 1)
+ data = base64.b64decode(base64_data)
+ # fix memory leak issue while using BytesIO
+ with BytesIO(data) as bio:
+ image_obj = copy.deepcopy(Image.open(bio))
+ else:
+ image_obj = Image.open(image)
+ if image_obj is None:
+ raise ValueError(f"Unrecognized image input, support local path, http url, base64 and PIL.Image, got {image}")
+ image = to_rgb(image_obj)
+ ## resize
+ if resized_height and resized_width:
+ resized_height, resized_width = smart_resize(
+ resized_height,
+ resized_width,
+ factor=IMAGE_FACTOR,
+ )
+ assert resized_height>0 and resized_width>0, f"resized_height: {resized_height}, resized_width: {resized_width}, min_pixels: {min_pixels}, max_pixels:{max_pixels}, width: {width}, height:{height}, "
+ image = image.resize((resized_width, resized_height))
+ elif min_pixels or max_pixels:
+ width, height = image.size
+ if not min_pixels:
+ min_pixels = MIN_PIXELS
+ if not max_pixels:
+ max_pixels = MAX_PIXELS
+ resized_height, resized_width = smart_resize(
+ height,
+ width,
+ factor=IMAGE_FACTOR,
+ min_pixels=min_pixels,
+ max_pixels=max_pixels,
+ )
+ assert resized_height>0 and resized_width>0, f"resized_height: {resized_height}, resized_width: {resized_width}, min_pixels: {min_pixels}, max_pixels:{max_pixels}, width: {width}, height:{height}, "
+ image = image.resize((resized_width, resized_height))
+
+ return image
+
+def get_input_dimensions(
+ image: Image.Image,
+ min_pixels: int,
+ max_pixels: int,
+ factor: int = 28
+) -> Tuple[int, int]:
+ """
+ Gets the resized dimensions of the input image.
+
+ Args:
+ image: The original image.
+ min_pixels: The minimum number of pixels.
+ max_pixels: The maximum number of pixels.
+ factor: The resizing factor.
+
+ Returns:
+ The resized (width, height).
+ """
+ input_height, input_width = smart_resize(
+ image.height,
+ image.width,
+ factor=factor,
+ min_pixels=min_pixels,
+ max_pixels=max_pixels
+ )
+ return input_width, input_height
+
+
+def get_image_by_fitz_doc(image, target_dpi=200):
+    # Re-render the image through fitz at target_dpi, mainly to obtain a higher-resolution version of low-DPI inputs
+ if not isinstance(image, Image.Image):
+ assert isinstance(image, str)
+ _, file_ext = os.path.splitext(image)
+ assert file_ext in {'.jpg', '.jpeg', '.png'}
+
+ if image.startswith("http://") or image.startswith("https://"):
+ with requests.get(image, stream=True) as response:
+ response.raise_for_status()
+ data_bytes = response.content
+ else:
+ with open(image, 'rb') as f:
+ data_bytes = f.read()
+
+ image = Image.open(BytesIO(data_bytes))
+ else:
+ data_bytes = BytesIO()
+ image.save(data_bytes, format='PNG')
+
+ origin_dpi = image.info.get('dpi', None)
+ pdf_bytes = fitz.open(stream=data_bytes).convert_to_pdf()
+ doc = fitz.open('pdf', pdf_bytes)
+ page = doc[0]
+ image_fitz = fitz_doc_to_image(page, target_dpi=target_dpi, origin_dpi=origin_dpi)
+
+ return image_fitz
diff --git a/dots_ocr/utils/layout_utils.py b/dots_ocr/utils/layout_utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..fa5dcebe7b404ac249fa7709a4c2c87f2f477464
--- /dev/null
+++ b/dots_ocr/utils/layout_utils.py
@@ -0,0 +1,228 @@
+from PIL import Image
+from typing import Dict, List
+
+import fitz
+from io import BytesIO
+import json
+
+from dots_ocr.utils.image_utils import smart_resize
+from dots_ocr.utils.consts import MIN_PIXELS, MAX_PIXELS
+from dots_ocr.utils.output_cleaner import OutputCleaner
+
+
+# Color map in RGBA format. The alpha channel is ignored when drawing
+# (only the RGB components are used; translucency comes from fill_opacity).
+dict_layout_type_to_color = {
+    "Text": (0, 128, 0, 255),             # Green
+    "Picture": (255, 0, 255, 255),        # Magenta
+    "Caption": (255, 165, 0, 255),        # Orange
+    "Section-header": (0, 255, 255, 255), # Cyan
+    "Footnote": (0, 128, 0, 255),         # Green
+    "Formula": (128, 128, 128, 255),      # Gray
+    "Table": (255, 192, 203, 255),        # Pink
+    "Title": (255, 0, 0, 255),            # Red
+    "List-item": (0, 0, 255, 255),        # Blue
+    "Page-header": (0, 128, 0, 255),      # Green
+    "Page-footer": (128, 0, 128, 255),    # Purple
+    "Other": (165, 42, 42, 255),          # Brown
+    "Unknown": (0, 0, 0, 0),
+}
+
+
+def draw_layout_on_image(image, cells, resized_height=None, resized_width=None, fill_bbox=True, draw_bbox=True):
+ """
+ Draw transparent boxes on an image.
+
+ Args:
+ image: The source PIL Image.
+ cells: A list of cells containing bounding box information.
+ resized_height: The resized height.
+ resized_width: The resized width.
+ fill_bbox: Whether to fill the bounding box.
+ draw_bbox: Whether to draw the bounding box.
+
+ Returns:
+ PIL.Image: The image with drawings.
+ """
+ # origin_image = Image.open(image_path)
+ original_width, original_height = image.size
+
+ # Create a new PDF document
+ doc = fitz.open()
+
+ # Get image information
+ img_bytes = BytesIO()
+ image.save(img_bytes, format='PNG')
+ # pix = fitz.Pixmap(image_path)
+ pix = fitz.Pixmap(img_bytes)
+
+ # Create a page
+ page = doc.new_page(width=pix.width, height=pix.height)
+ page.insert_image(
+ fitz.Rect(0, 0, pix.width, pix.height),
+ # filename=image_path
+ pixmap=pix
+ )
+
+ for i, cell in enumerate(cells):
+ bbox = cell['bbox']
+ layout_type = cell['category']
+ order = i
+
+ top_left = (bbox[0], bbox[1])
+ down_right = (bbox[2], bbox[3])
+ if resized_height and resized_width:
+ scale_x = resized_width / original_width
+ scale_y = resized_height / original_height
+ top_left = (int(bbox[0] / scale_x), int(bbox[1] / scale_y))
+ down_right = (int(bbox[2] / scale_x), int(bbox[3] / scale_y))
+
+ color = dict_layout_type_to_color.get(layout_type, (0, 128, 0, 256))
+ color = [col/255 for col in color[:3]]
+
+ x0, y0, x1, y1 = top_left[0], top_left[1], down_right[0], down_right[1]
+ rect_coords = fitz.Rect(x0, y0, x1, y1)
+ if draw_bbox:
+ if fill_bbox:
+ page.draw_rect(
+ rect_coords,
+ color=None,
+ fill=color,
+ fill_opacity=0.3,
+ width=0.5,
+ overlay=True,
+ ) # Draw the rectangle
+ else:
+ page.draw_rect(
+ rect_coords,
+ color=color,
+ fill=None,
+ fill_opacity=1,
+ width=0.5,
+ overlay=True,
+ ) # Draw the rectangle
+ order_cate = f"{order}_{layout_type}"
+ page.insert_text(
+ (x1, y0 + 20), order_cate, fontsize=20, color=color
+ ) # Insert the index in the top left corner of the rectangle
+
+ # Convert to a Pixmap (maintaining original dimensions)
+ mat = fitz.Matrix(1.0, 1.0)
+ pix = page.get_pixmap(matrix=mat)
+
+ return Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
+
+
+def pre_process_bboxes(
+ origin_image,
+ bboxes,
+ input_width,
+ input_height,
+ factor: int = 28,
+ min_pixels: int = 3136,
+ max_pixels: int = 11289600
+):
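+    """Map bboxes from original-image coordinates to the model's resized input
+    coordinates (the inverse direction of post_process_cells)."""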
+ assert isinstance(bboxes, list) and len(bboxes) > 0 and isinstance(bboxes[0], list)
+ min_pixels = min_pixels or MIN_PIXELS
+ max_pixels = max_pixels or MAX_PIXELS
+ original_width, original_height = origin_image.size
+
+ input_height, input_width = smart_resize(input_height, input_width, min_pixels=min_pixels, max_pixels=max_pixels)
+
+ scale_x = original_width / input_width
+ scale_y = original_height / input_height
+
+ bboxes_out = []
+ for bbox in bboxes:
+ bbox_resized = [
+ int(float(bbox[0]) / scale_x),
+ int(float(bbox[1]) / scale_y),
+ int(float(bbox[2]) / scale_x),
+ int(float(bbox[3]) / scale_y)
+ ]
+ bboxes_out.append(bbox_resized)
+
+ return bboxes_out
+
+def post_process_cells(
+ origin_image: Image.Image,
+ cells: List[Dict],
+ input_width, # server input width, also has smart_resize in server
+ input_height,
+ factor: int = 28,
+ min_pixels: int = 3136,
+ max_pixels: int = 11289600
+) -> List[Dict]:
+ """
+ Post-processes cell bounding boxes, converting coordinates from the resized dimensions back to the original dimensions.
+
+ Args:
+ origin_image: The original PIL Image.
+ cells: A list of cells containing bounding box information.
+ input_width: The width of the input image sent to the server.
+ input_height: The height of the input image sent to the server.
+ factor: Resizing factor.
+ min_pixels: Minimum number of pixels.
+ max_pixels: Maximum number of pixels.
+
+ Returns:
+ A list of post-processed cells.
+ """
+ assert isinstance(cells, list) and len(cells) > 0 and isinstance(cells[0], dict)
+ min_pixels = min_pixels or MIN_PIXELS
+ max_pixels = max_pixels or MAX_PIXELS
+ original_width, original_height = origin_image.size
+
+ input_height, input_width = smart_resize(input_height, input_width, min_pixels=min_pixels, max_pixels=max_pixels)
+
+ scale_x = input_width / original_width
+ scale_y = input_height / original_height
+
+ cells_out = []
+ for cell in cells:
+ bbox = cell['bbox']
+ bbox_resized = [
+ int(float(bbox[0]) / scale_x),
+ int(float(bbox[1]) / scale_y),
+ int(float(bbox[2]) / scale_x),
+ int(float(bbox[3]) / scale_y)
+ ]
+ cell_copy = cell.copy()
+ cell_copy['bbox'] = bbox_resized
+ cells_out.append(cell_copy)
+
+ return cells_out
+
+def is_legal_bbox(cells):
+ for cell in cells:
+ bbox = cell['bbox']
+ if bbox[2] <= bbox[0] or bbox[3] <= bbox[1]:
+ return False
+ return True
+
+def post_process_output(response, prompt_mode, origin_image, input_image, min_pixels=None, max_pixels=None):
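+    """Return (content, filtered). For plain-text prompt modes the raw response
+    is passed through; otherwise the response is parsed as JSON and rescaled to
+    original-image coordinates (filtered=False), falling back to a cleaned
+    plain-text version (filtered=True) when JSON parsing fails."""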
+ if prompt_mode in ["prompt_ocr", "prompt_table_html", "prompt_table_latex", "prompt_formula_latex"]:
+        return response, False  # plain-text prompt modes: pass the raw response through
+
+ json_load_failed = False
+ cells = response
+ try:
+ cells = json.loads(cells)
+ cells = post_process_cells(
+ origin_image,
+ cells,
+ input_image.width,
+ input_image.height,
+ min_pixels=min_pixels,
+ max_pixels=max_pixels
+ )
+ return cells, False
+ except Exception as e:
+ print(f"cells post process error: {e}, when using {prompt_mode}")
+ json_load_failed = True
+
+ if json_load_failed:
+ cleaner = OutputCleaner()
+ response_clean = cleaner.clean_model_output(cells)
+ if isinstance(response_clean, list):
+ response_clean = "\n\n".join([cell['text'] for cell in response_clean if 'text' in cell])
+ return response_clean, True
diff --git a/dots_ocr/utils/output_cleaner.py b/dots_ocr/utils/output_cleaner.py
new file mode 100644
index 0000000000000000000000000000000000000000..20246ea3ba46060806b510db941e39c84f0bae89
--- /dev/null
+++ b/dots_ocr/utils/output_cleaner.py
@@ -0,0 +1,623 @@
+#!/usr/bin/env python3
+"""
+Data Cleaning Script - Cleans all data using a simplified regex method and saves the results
+
+Features:
+1. Cleans all cases using a simplified regex method.
+2. Saves the cleaned data for each case.
+3. Ensures the relative order of dicts remains unchanged.
+4. Generates a before-and-after cleaning report.
+"""
+
+import json
+import re
+import os
+from typing import Dict, List, Tuple, Optional, Any
+from dataclasses import dataclass
+from collections import Counter
+import traceback
+
+
+@dataclass
+class CleanedData:
+ """Data structure for cleaned data"""
+ case_id: int
+ original_type: str # 'list' or 'str'
+ original_length: int
+ cleaned_data: List[Dict]
+ cleaning_operations: Dict[str, Any] # Records the cleaning operations performed
+ success: bool
+
+
+class OutputCleaner:
+ """Data Cleaner - Based on a simplified regex method"""
+
+ def __init__(self):
+ # Simplified regular expression patterns
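+        #   dict_pattern: matches a single JSON object containing a "bbox": [...] field
+        #   bbox_pattern: captures the coordinate list inside "bbox": [...]
+        #   missing_delimiter_pattern: adjacent "}{"-style object boundaries that
+        #   lost their separating comma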
+ self.dict_pattern = re.compile(r'\{[^{}]*?"bbox"\s*:\s*\[[^\]]*?\][^{}]*?\}', re.DOTALL)
+ self.bbox_pattern = re.compile(r'"bbox"\s*:\s*\[([^\]]+)\]')
+ self.missing_delimiter_pattern = re.compile(r'\}\s*\{(?!")')
+
+ self.cleaned_results: List[CleanedData] = []
+
+ def clean_list_data(self, data: List[Dict], case_id: int) -> CleanedData:
+ """Cleans list-type data"""
+
+ print(f"🔧 Cleaning List data - Case {case_id}")
+ print(f" Original items: {len(data)}")
+
+ cleaned_data = []
+ operations = {
+ 'type': 'list',
+ 'bbox_fixes': 0,
+ 'removed_items': 0,
+ 'original_count': len(data)
+ }
+
+ for i, item in enumerate(data):
+ if not isinstance(item, dict):
+ operations['removed_items'] += 1
+ continue
+
+ # Check the bbox field
+ if 'bbox' in item:
+ bbox = item['bbox']
+
+ # Check bbox length - core logic
+ if isinstance(bbox, list) and len(bbox) == 3:
+ print(f" ⚠️ Item {i}: bbox has only 3 coordinates. Removing bbox, keeping category and text.")
+ # Keep only category and text, ensuring order is preserved
+ new_item = {}
+ if 'category' in item:
+ new_item['category'] = item['category']
+ if 'text' in item:
+ new_item['text'] = item['text']
+ if new_item: # Add only if there is valid content
+ cleaned_data.append(new_item)
+ operations['bbox_fixes'] += 1
+ else:
+ operations['removed_items'] += 1
+ continue
+ elif isinstance(bbox, list) and len(bbox) == 4:
+ # bbox is normal, add directly, preserving original order
+ cleaned_data.append(item.copy())
+ continue
+ else:
+ print(f" ❌ Item {i}: Abnormal bbox format, skipping.")
+ operations['removed_items'] += 1
+ continue
+ else:
+ # No bbox field, keep if category exists
+ if 'category' in item:
+ cleaned_data.append(item.copy())
+ continue
+ else:
+ operations['removed_items'] += 1
+
+ operations['final_count'] = len(cleaned_data)
+ print(f" ✅ Cleaning complete: {len(cleaned_data)} items, {operations['bbox_fixes']} bbox fixes, {operations['removed_items']} items removed")
+
+ return CleanedData(
+ case_id=case_id,
+ original_type='list',
+ original_length=len(data),
+ cleaned_data=cleaned_data,
+ cleaning_operations=operations,
+ success=True
+ )
+
+ def clean_string_data(self, data_str: str, case_id: int) -> CleanedData:
+ """Cleans string-type data"""
+
+ print(f"🔧 Cleaning String data - Case {case_id}")
+ print(f" Original length: {len(data_str):,}")
+
+ operations = {
+ 'type': 'str',
+ 'original_length': len(data_str),
+ 'delimiter_fixes': 0,
+ 'tail_truncated': False,
+ 'truncated_length': 0,
+ 'duplicate_dicts_removed': 0,
+ 'final_objects': 0
+ }
+
+ try:
+ # Step 1: Detect and fix missing delimiters
+ data_str, delimiter_fixes = self._fix_missing_delimiters(data_str)
+ operations['delimiter_fixes'] = delimiter_fixes
+
+ # Step 2: Truncate the last incomplete element
+ data_str, tail_truncated = self._truncate_last_incomplete_element(data_str)
+ operations['tail_truncated'] = tail_truncated
+ operations['truncated_length'] = len(data_str)
+
+ # Step 3: Remove duplicate complete dict objects, preserving order
+ data_str, duplicate_removes = self._remove_duplicate_complete_dicts_preserve_order(data_str)
+ operations['duplicate_dicts_removed'] = duplicate_removes
+
+ # Step 4: Ensure correct JSON format
+ data_str = self._ensure_json_format(data_str)
+
+ # Step 5: Try to parse the final result
+ final_data = self._parse_final_json(data_str)
+
+ if final_data is not None:
+ operations['final_objects'] = len(final_data)
+ print(f" ✅ Cleaning complete: {len(final_data)} objects")
+
+ return CleanedData(
+ case_id=case_id,
+ original_type='str',
+ original_length=operations['original_length'],
+ cleaned_data=final_data,
+ cleaning_operations=operations,
+ success=True
+ )
+ else:
+ raise Exception("Could not parse the cleaned data")
+
+ except Exception as e:
+ print(f" ❌ Cleaning failed: {e}")
+ return CleanedData(
+ case_id=case_id,
+ original_type='str',
+ original_length=operations['original_length'],
+ cleaned_data=[],
+ cleaning_operations=operations,
+ success=False
+ )
+
+ def _fix_missing_delimiters(self, text: str) -> Tuple[str, int]:
+ """Fixes missing delimiters"""
+
+ fixes = 0
+
+ def replace_delimiter(match):
+ nonlocal fixes
+ fixes += 1
+ return '},{'
+
+ text = self.missing_delimiter_pattern.sub(replace_delimiter, text)
+
+ if fixes > 0:
+ print(f" ✅ Fixed {fixes} missing delimiters")
+
+ return text, fixes
+
+ def _truncate_last_incomplete_element(self, text: str) -> Tuple[str, bool]:
+ """Truncates the last incomplete element"""
+
+ # For very long text (>50k) or text not ending with ']', directly truncate the last '{"bbox":'
+ needs_truncation = (
+ len(text) > 50000 or
+ not text.strip().endswith(']')
+ )
+
+ if needs_truncation:
+ # Check how many dict objects there are
+ bbox_count = text.count('{"bbox":')
+
+ # If there is only one dict object, do not truncate to avoid deleting the only object
+ if bbox_count <= 1:
+ print(f" ⚠️ Only {bbox_count} dict objects found, skipping truncation to avoid deleting all content")
+ return text, False
+
+ # Find the position of the last '{"bbox":'
+ last_bbox_pos = text.rfind('{"bbox":')
+
+ if last_bbox_pos > 0:
+ # Truncate before this position
+ truncated_text = text[:last_bbox_pos].rstrip()
+
+ # Remove trailing comma
+ if truncated_text.endswith(','):
+ truncated_text = truncated_text[:-1]
+
+ print(f" ✂️ Truncated the last incomplete element, length reduced from {len(text):,} to {len(truncated_text):,}")
+ return truncated_text, True
+
+ return text, False
+
+ def _remove_duplicate_complete_dicts_preserve_order(self, text: str) -> Tuple[str, int]:
+ """Removes duplicate complete dict objects, preserving original order"""
+
+ # Extract all dict objects, preserving order
+ dict_matches = list(self.dict_pattern.finditer(text))
+
+ if not dict_matches:
+ return text, 0
+
+ print(f" 📊 Found {len(dict_matches)} dict objects")
+
+ # Deduplication while preserving order: only keep the first occurrence of a dict
+ unique_dicts = []
+ seen_dict_strings = set()
+ total_duplicates = 0
+
+ for match in dict_matches:
+ dict_str = match.group()
+
+ if dict_str not in seen_dict_strings:
+ unique_dicts.append(dict_str)
+ seen_dict_strings.add(dict_str)
+ else:
+ total_duplicates += 1
+
+ if total_duplicates > 0:
+ # Reconstruct the JSON array, preserving the original order
+ new_text = '[' + ', '.join(unique_dicts) + ']'
+ print(f" ✅ Removed {total_duplicates} duplicate dicts, keeping {len(unique_dicts)} unique dicts (order preserved)")
+ return new_text, total_duplicates
+ else:
+ print(f" ✅ No duplicate dict objects found")
+ return text, 0
+
+ def _ensure_json_format(self, text: str) -> str:
+ """Ensures correct JSON format"""
+
+ text = text.strip()
+
+ if not text.startswith('['):
+ text = '[' + text
+
+ if not text.endswith(']'):
+ # Remove trailing comma
+ text = text.rstrip(',').rstrip()
+ text += ']'
+
+ return text
+
+ def _parse_final_json(self, text: str) -> Optional[List[Dict]]:
+ """Tries to parse the final JSON"""
+
+ try:
+ data = json.loads(text)
+ if isinstance(data, list):
+ return data
+ except json.JSONDecodeError as e:
+ print(f" ❌ JSON parsing failed: {e}")
+
+ # fallback1: Extract valid dict objects
+ valid_dicts = []
+
+ for match in self.dict_pattern.finditer(text):
+ dict_str = match.group()
+ try:
+ dict_obj = json.loads(dict_str)
+ valid_dicts.append(dict_obj)
+ except:
+ continue
+
+ if valid_dicts:
+ print(f" ✅ Extracted {len(valid_dicts)} valid dicts")
+ return valid_dicts
+
+ # fallback2: Special handling for a single incomplete dict
+ return self._handle_single_incomplete_dict(text)
+
+ return None
+
+ def _handle_single_incomplete_dict(self, text: str) -> Optional[List[Dict]]:
+ """Handles the special case of a single incomplete dict"""
+
+ # Check if it's a single incomplete dict case
+ if not text.strip().startswith('[{"bbox":'):
+ return None
+
+ try:
+ # Try to extract bbox coordinates
+ bbox_match = re.search(r'"bbox"\s*:\s*\[([^\]]+)\]', text)
+ if not bbox_match:
+ return None
+
+ bbox_str = bbox_match.group(1)
+ bbox_coords = [int(x.strip()) for x in bbox_str.split(',')]
+
+ if len(bbox_coords) != 4:
+ return None
+
+ # Try to extract category
+ category_match = re.search(r'"category"\s*:\s*"([^"]+)"', text)
+ category = category_match.group(1) if category_match else "Text"
+
+ # Try to extract the beginning of the text (first 10000 characters)
+ text_match = re.search(r'"text"\s*:\s*"([^"]{0,10000})', text)
+ if text_match:
+ text_content = text_match.group(1)
+ else:
+ text_content = ""
+
+ # Construct the fixed dict
+ fixed_dict = {
+ "bbox": bbox_coords,
+ "category": category
+ }
+
+ if text_content:
+ fixed_dict["text"] = text_content
+
+ print(f" 🔧 Special fix: single incomplete dict → {fixed_dict}")
+ return [fixed_dict]
+
+ except Exception as e:
+ print(f" ❌ Special fix failed: {e}")
+ return None
+
+ def remove_duplicate_category_text_pairs_and_bbox(self, data_list: List[dict], case_id: int) -> List[dict]:
+ """Removes duplicate category-text pairs and duplicate bboxes"""
+
+ if not data_list or len(data_list) <= 1:
+ print(f" 📊 Data length {len(data_list)} <= 1, skipping deduplication check")
+ return data_list
+
+ print(f" 📊 Original data length: {len(data_list)}")
+
+ # 1. Count occurrences and positions of each category-text pair
+ category_text_pairs = {}
+ for i, item in enumerate(data_list):
+ if isinstance(item, dict) and 'category' in item and 'text' in item:
+ pair_key = (item.get('category', ''), item.get('text', ''))
+ if pair_key not in category_text_pairs:
+ category_text_pairs[pair_key] = []
+ category_text_pairs[pair_key].append(i)
+
+ # 2. Count occurrences and positions of each bbox
+ bbox_pairs = {}
+ for i, item in enumerate(data_list):
+ if isinstance(item, dict) and 'bbox' in item:
+ bbox = item.get('bbox')
+ if isinstance(bbox, list) and len(bbox) > 0:
+ bbox_key = tuple(bbox) # Convert to tuple to use as a dictionary key
+ if bbox_key not in bbox_pairs:
+ bbox_pairs[bbox_key] = []
+ bbox_pairs[bbox_key].append(i)
+
+ # 3. Identify items to be removed
+ duplicates_to_remove = set()
+
+ # 3a. Process category-text pairs that appear 5 or more times
+ for pair_key, positions in category_text_pairs.items():
+ if len(positions) >= 5:
+ category, text = pair_key
+ # Keep the first occurrence, remove subsequent duplicates
+ positions_to_remove = positions[1:]
+ duplicates_to_remove.update(positions_to_remove)
+
+ print(f" 🔍 Found duplicate category-text pair: category='{category}', first 50 chars of text='{text[:50]}...'")
+ print(f" Count: {len(positions)}, removing at positions: {positions_to_remove}")
+
+ # 3b. Process bboxes that appear 2 or more times
+ for bbox_key, positions in bbox_pairs.items():
+ if len(positions) >= 2:
+ # Keep the first occurrence, remove subsequent duplicates
+ positions_to_remove = positions[1:]
+ duplicates_to_remove.update(positions_to_remove)
+
+ print(f" 🔍 Found duplicate bbox: {list(bbox_key)}")
+ print(f" Count: {len(positions)}, removing at positions: {positions_to_remove}")
+
+ if not duplicates_to_remove:
+ print(f" ✅ No category-text pairs or bboxes found exceeding the duplication threshold")
+ return data_list
+
+ # 4. Remove duplicate items from the original data (preserving order)
+ cleaned_data = []
+ removed_count = 0
+ for i, item in enumerate(data_list):
+ if i not in duplicates_to_remove:
+ cleaned_data.append(item)
+ else:
+ removed_count += 1
+
+ print(f" ✅ Deduplication complete: Removed {removed_count} duplicate items")
+ print(f" 📊 Cleaned data length: {len(cleaned_data)}")
+
+ return cleaned_data
+
+    def clean_model_output(self, model_output):
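+        """Best-effort cleanup of raw model output (a str or an already-parsed
+        list) into a list of layout dicts; returns the input unchanged if
+        cleaning fails."""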
+ try:
+ # Select cleaning method based on data type
+ if isinstance(model_output, list):
+ result = self.clean_list_data(model_output, case_id=0)
+ else:
+ result = self.clean_string_data(str(model_output), case_id=0)
+
+ # Add deduplication step: remove duplicate category-text pairs and bboxes
+ if result and hasattr(result, 'success') and result.success and result.cleaned_data:
+ original_data = result.cleaned_data
+ deduplicated_data = self.remove_duplicate_category_text_pairs_and_bbox(original_data, case_id=0)
+ # Update the cleaned_data in the CleanedData object
+ result.cleaned_data = deduplicated_data
+ return result.cleaned_data
+ except Exception as e:
+ print(f"❌ Case cleaning failed: {e}")
+ return model_output
+
+ def clean_all_data(self, jsonl_path: str) -> List[CleanedData]:
+ """Cleans all data from a JSONL file"""
+
+ print(f"🚀 Starting to clean JSONL file: {jsonl_path}")
+
+ with open(jsonl_path, 'r', encoding='utf-8') as f:
+ lines = f.readlines()
+
+ datas = []
+ for i, line in enumerate(lines):
+ if line.strip():
+ try:
+ data = json.loads(line)
+ predict_field = data.get('predict')
+ case_id = i + 1
+
+ print(f"\n{'='*50}")
+ print(f"🎯 Cleaning Case {case_id}")
+ print(f"{'='*50}")
+
+ # Select cleaning method based on data type
+ if isinstance(predict_field, list):
+ print("📊 Data type: List")
+ result = self.clean_list_data(predict_field, case_id)
+ else:
+ print("📊 Data type: String")
+ result = self.clean_string_data(str(predict_field), case_id)
+
+ # Add deduplication step: remove duplicate category-text pairs and bboxes
+ if result and hasattr(result, 'success') and result.success and result.cleaned_data:
+ print("🔄 Checking for and removing duplicate category-text pairs and bboxes...")
+ original_data = result.cleaned_data
+ deduplicated_data = self.remove_duplicate_category_text_pairs_and_bbox(original_data, case_id)
+ # Update the cleaned_data in the CleanedData object
+ result.cleaned_data = deduplicated_data
+ data['predict_resized'] = result.cleaned_data
+
+ datas.append(data)
+ self.cleaned_results.append(result)
+
+ except Exception as e:
+ print(f"❌ Case {i+1} cleaning failed: {e}")
+ traceback.print_exc()
+
+ save_path = jsonl_path.replace('.jsonl', '_filtered.jsonl')
+        with open(save_path, 'w', encoding='utf-8') as w:
+ for data in datas:
+ w.write(json.dumps(data, ensure_ascii=False) + '\n')
+ print(f"✅ Saved cleaned data to: {save_path}")
+
+ return self.cleaned_results
+
+ def save_cleaned_data(self, output_dir: str):
+ """Saves the cleaned data"""
+
+ print(f"\n💾 Saving cleaned data to: {output_dir}")
+ os.makedirs(output_dir, exist_ok=True)
+
+ # 1. Save cleaned data for each case
+ for result in self.cleaned_results:
+ case_filename = f"cleaned_case_{result.case_id:02d}.json"
+ case_filepath = os.path.join(output_dir, case_filename)
+
+ # Save the cleaned data
+ with open(case_filepath, 'w', encoding='utf-8') as f:
+ json.dump(result.cleaned_data, f, ensure_ascii=False, indent=2)
+
+ print(f" ✅ Case {result.case_id}: {len(result.cleaned_data)} objects → {case_filename}")
+
+ # 2. Save all cleaned data to a single file
+ all_cleaned_data = []
+ for result in self.cleaned_results:
+ all_cleaned_data.append({
+ 'case_id': result.case_id,
+ 'original_type': result.original_type,
+ 'original_length': result.original_length,
+ 'cleaned_objects_count': len(result.cleaned_data),
+ 'success': result.success,
+ 'cleaning_operations': result.cleaning_operations,
+ 'cleaned_data': result.cleaned_data
+ })
+
+ all_data_filepath = os.path.join(output_dir, "all_cleaned_data.json")
+ with open(all_data_filepath, 'w', encoding='utf-8') as f:
+ json.dump(all_cleaned_data, f, ensure_ascii=False, indent=2)
+
+ print(f" 📁 All data: {len(all_cleaned_data)} cases → all_cleaned_data.json")
+
+ # 3. Generate a cleaning report
+ self._generate_cleaning_report(output_dir)
+
+ def _generate_cleaning_report(self, output_dir: str):
+ """Generates a cleaning report"""
+
+ report = []
+ report.append("📊 Data Cleaning Report")
+ report.append("=" * 60)
+ import datetime
+ report.append(f"Processing Time: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+ report.append("")
+
+ # Overall statistics
+ total_cases = len(self.cleaned_results)
+ successful_cases = sum(1 for r in self.cleaned_results if r.success)
+ total_objects = sum(len(r.cleaned_data) for r in self.cleaned_results)
+
+ report.append("📈 Overall Statistics:")
+ report.append(f" Total Cases: {total_cases}")
+ report.append(f" Successfully Cleaned: {successful_cases}")
+ report.append(f" Success Rate: {successful_cases/total_cases*100:.1f}%")
+ report.append(f" Total Recovered Objects: {total_objects}")
+ report.append("")
+
+ # Detailed statistics
+ list_results = [r for r in self.cleaned_results if r.original_type == 'list']
+ str_results = [r for r in self.cleaned_results if r.original_type == 'str']
+
+ if list_results:
+ report.append("📋 List Type Cleaning Statistics:")
+ for r in list_results:
+ ops = r.cleaning_operations
+ report.append(f" Case {r.case_id}: {ops['original_count']} → {ops['final_count']} objects")
+ if ops['bbox_fixes'] > 0:
+ report.append(f" - bbox fixes: {ops['bbox_fixes']}")
+ if ops['removed_items'] > 0:
+ report.append(f" - invalid items removed: {ops['removed_items']}")
+ report.append("")
+
+ if str_results:
+ report.append("📝 String Type Cleaning Statistics:")
+ for r in str_results:
+ ops = r.cleaning_operations
+ status = "✅" if r.success else "❌"
+ report.append(f" Case {r.case_id} {status}: {ops['original_length']:,} chars → {ops['final_objects']} objects")
+ details = []
+ if ops['delimiter_fixes'] > 0:
+ details.append(f"Delimiter fixes: {ops['delimiter_fixes']}")
+ if ops['tail_truncated']:
+ reduction = ops['original_length'] - ops['truncated_length']
+ details.append(f"Tail truncation: -{reduction:,} chars")
+ if ops['duplicate_dicts_removed'] > 0:
+ details.append(f"Duplicates removed: {ops['duplicate_dicts_removed']}")
+ if details:
+ report.append(f" - {', '.join(details)}")
+ report.append("")
+
+ # Note on data order
+ report.append("🔄 Data Order Guarantee:")
+ report.append(" ✅ The relative order of all dict objects is preserved during cleaning.")
+ report.append(" ✅ When deduplicating, the first occurrence of a dict is kept, and subsequent duplicates are removed.")
+ report.append(" ✅ The order of items in List-type data is fully preserved.")
+
+ # Save the report
+ report_filepath = os.path.join(output_dir, "cleaning_report.txt")
+ with open(report_filepath, 'w', encoding='utf-8') as f:
+ f.write('\n'.join(report))
+
+ print(f" 📋 Cleaning report: cleaning_report.txt")
+
+ # Also print to console
+ print(f"\n{chr(10).join(report)}")
+
+
+def main():
+ """Main function"""
+
+ # Create a data cleaner instance
+ cleaner = OutputCleaner()
+
+ # Input file
+ jsonl_path = "output_with_failcase.jsonl"
+
+ # Output directory
+ output_dir = "output_with_failcase_cleaned"
+
+ # Clean all data
+ results = cleaner.clean_all_data(jsonl_path)
+
+ # Save the cleaned data
+ cleaner.save_cleaned_data(output_dir)
+
+ print(f"\n🎉 Data cleaning complete!")
+ print(f"📁 Cleaned data saved in: {output_dir}")
+
+
+if __name__ == "__main__":
+ main()
\ No newline at end of file
diff --git a/dots_ocr/utils/prompts.py b/dots_ocr/utils/prompts.py
new file mode 100644
index 0000000000000000000000000000000000000000..175a79e18fa4cf966210590db9c240885e08a972
--- /dev/null
+++ b/dots_ocr/utils/prompts.py
@@ -0,0 +1,34 @@
+dict_promptmode_to_prompt = {
+ # prompt_layout_all_en: parse all layout info in json format.
+ "prompt_layout_all_en": """Please output the layout information from the PDF image, including each layout element's bbox, its category, and the corresponding text content within the bbox.
+
+1. Bbox format: [x1, y1, x2, y2]
+
+2. Layout Categories: The possible categories are ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title'].
+
+3. Text Extraction & Formatting Rules:
+ - Picture: For the 'Picture' category, the text field should be omitted.
+ - Formula: Format its text as LaTeX.
+ - Table: Format its text as HTML.
+ - All Others (Text, Title, etc.): Format their text as Markdown.
+
+4. Constraints:
+ - The output text must be the original text from the image, with no translation.
+ - All layout elements must be sorted according to human reading order.
+
+5. Final Output: The entire output must be a single JSON object.
+""",
+
+ # prompt_layout_only_en: layout detection
+ "prompt_layout_only_en": """Please output the layout information from this PDF image, including each layout's bbox and its category. The bbox should be in the format [x1, y1, x2, y2]. The layout categories for the PDF document include ['Caption', 'Footnote', 'Formula', 'List-item', 'Page-footer', 'Page-header', 'Picture', 'Section-header', 'Table', 'Text', 'Title']. Do not output the corresponding text. The layout result should be in JSON format.""",
+
+    # prompt_ocr: parse OCR text, excluding the Page-header and Page-footer
+ "prompt_ocr": """Extract the text content from this image.""",
+
+ # prompt_grounding_ocr: extract text content in the given bounding box
+ "prompt_grounding_ocr": """Extract text from the given bounding box on the image (format: [x1, y1, x2, y2]).\nBounding Box:\n""",
+
+ # "prompt_table_html": """Convert the table in this image to HTML.""",
+ # "prompt_table_latex": """Convert the table in this image to LaTeX.""",
+ # "prompt_formula_latex": """Convert the formula in this image to LaTeX.""",
+}
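+
+# Minimal usage sketch (an assumption: callers index this dict directly by mode name):
+#   from dots_ocr.utils.prompts import dict_promptmode_to_prompt
+#   prompt = dict_promptmode_to_prompt["prompt_layout_all_en"]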
diff --git a/requirements.txt b/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..f07d3f38a120e0aaa9c00d1aee3ab9b9b7dc0b0b
--- /dev/null
+++ b/requirements.txt
@@ -0,0 +1,11 @@
+# streamlit
+gradio
+gradio_image_annotation
+PyMuPDF
+openai
+qwen_vl_utils
+transformers==4.51.3
+huggingface_hub
+modelscope
+flash-attn==2.8.0.post2
+accelerate
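+# Install sketch (an assumption, not from the repo docs): pip install -r requirements.txt
+# Note: flash-attn generally needs torch and a CUDA toolchain installed before it can build.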
diff --git a/setup.py b/setup.py
new file mode 100644
index 0000000000000000000000000000000000000000..3b12c645b5af8d1f9b24f760dc08f12fcf062bd2
--- /dev/null
+++ b/setup.py
@@ -0,0 +1,17 @@
+from setuptools import setup, find_packages
+
+# Read dependencies from requirements.txt, skipping blank lines and comments
+# (e.g. the commented-out "# streamlit" entry) so they are not passed to install_requires
+def parse_requirements(filename):
+    with open(filename, 'r', encoding='utf-8') as f:
+        return [line.strip() for line in f
+                if line.strip() and not line.strip().startswith('#')]
+
+setup(
+ name='dots_ocr',
+ version='1.0',
+ packages=find_packages(),
+ include_package_data=True,
+ install_requires=parse_requirements('requirements.txt'),
+ description='dots.ocr: Multilingual Document Layout Parsing in one Vision-Language Model',
+ url="https://github.com/rednote-hilab/dots.ocr",
+ python_requires=">=3.10",
+)
\ No newline at end of file
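+
+# Editable-install sketch (an assumption, not from the repo docs):
+#   pip install -e .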
diff --git a/tools/download_model.py b/tools/download_model.py
new file mode 100644
index 0000000000000000000000000000000000000000..552fbfe141d8457ae6a6977f3149c65c90443556
--- /dev/null
+++ b/tools/download_model.py
@@ -0,0 +1,24 @@
+from argparse import ArgumentParser
+import os
+
+
+if __name__ == '__main__':
+ parser = ArgumentParser()
+ parser.add_argument('--type', '-t', type=str, default="huggingface")
+ parser.add_argument('--name', '-n', type=str, default="rednote-hilab/dots.ocr")
+ args = parser.parse_args()
+ script_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+ print(f"Attention: The model save dir dots.ocr should be replace by a name without `.` like DotsOCR, util we merge our code to transformers.")
+ model_dir = os.path.join(script_dir, "weights/DotsOCR")
+    os.makedirs(model_dir, exist_ok=True)
+ if args.type == "huggingface":
+ from huggingface_hub import snapshot_download
+ snapshot_download(repo_id=args.name, local_dir=model_dir, local_dir_use_symlinks=False, resume_download=True)
+ elif args.type == "modelscope":
+ from modelscope import snapshot_download
+        # modelscope's snapshot_download takes the model id as its first positional argument
+        snapshot_download(args.name, local_dir=model_dir)
+ else:
+ raise ValueError(f"Invalid type: {args.type}")
+
+ print(f"model downloaded to {model_dir}")