| Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, and Ahmed Awadallah. 2023. Orca: Progressive learning from complex explanation traces of gpt-4. arXiv preprint arXiv:2306.02707. | Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo Huang, Daogao Liu, Terra Blevins, Danqi Chen, and Luke Zettlemoyer. 2023. Detecting pretraining data from large language models. arXiv preprint arXiv:2310.16789. |
| OpenAI. 2023. Gpt-4 technical report. | Ken Shoemake. 1985. Animating rotation with quaternion curves. In Proceedings of the 12th annual conference on Computer graphics and interactive techniques, pages 245-254. |
| Yu Pan, Ye Yuan, Yichun Yin, Zenglin Xu, Lifeng Shang, Xin Jiang, and Qun Liu. 2023. Reusing pretrained models by multi-linear operators for efficient training. arXiv preprint arXiv:2310.10699. | Mingxing Tan and Quoc Le. 2019. Efficientnet: Rethinking model scaling for convolutional neural networks. In International conference on machine learning, pages 6105-6114. PMLR. |
| Baolin Peng, Chunyuan Li, Pengcheng He, Michel Galley, and Jianfeng Gao. 2023. Instruction tuning with gpt-4. arXiv preprint arXiv:2304.03277. | Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. |
| Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9. | Lewis Tunstall, Edward Beeching, Nathan Lambert, Nazneen Rajani, Kashif Rasul, Younes Belkada, Shengyi Huang, Leandro von Werra, Clémentine Fourrier, Nathan Habib, et al. 2023. Zephyr: Direct distillation of lm alignment. arXiv preprint arXiv:2310.16944. |
| Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie Millican, Jordan Hoffmann, Francis Song, John Aslanides, Sarah Henderson, Roman Ring, Susannah Young, et al. 2021. Scaling language models: Methods, analysis & insights from training gopher. arXiv preprint arXiv:2112.11446. | Peihao Wang, Rameswar Panda, Lucas Torroba Hennigen, Philip Greengard, Leonid Karlinsky, Rogerio Feris, David Daniel Cox, Zhangyang Wang, and Yoon Kim. 2023. Learning to grow pretrained models for efficient transformer training. arXiv preprint arXiv:2303.00980. |
| Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano Ermon, Christopher D Manning, and Chelsea Finn. 2023. Direct preference optimization: Your language model is secretly a reward model. arXiv preprint arXiv:2305.18290. | Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2022. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560. |
| Oscar Sainz, Jon Ander Campos, Iker García-Ferrero, Julen Etxaniz, Oier Lopez de Lacalle, and Eneko Agirre. 2023. Nlp evaluation in trouble: On the need to measure llm data contamination for each benchmark. arXiv preprint arXiv:2310.18018. | Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652. |
| Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2021. Winogrande: An adversarial winograd schema challenge at scale. Communications of the ACM, 64(9):99-106. | Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, et al. 2022a. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. |
| Malik Sallam, Nesreen Salim, Muna Barakat, and Alaa Al-Tammemi. 2023. Chatgpt applications in medical, dental, pharmacy, and public health education: A descriptive study highlighting the advantages and limitations. Narra J, 3(1):e103-e103. | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022b. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824-24837. |
| Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538. | Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, et al. 2019. Huggingface's transformers: State-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. |
| Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In International conference on machine learning, pages 5719-5728. PMLR. | |
",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11860325187444687,
"y": 0.08836975693702698
},
{
"x": 0.48757582157850266,
"y": 0.08836975693702698
},
{
"x": 0.48757582157850266,
"y": 0.1494310088455677
},
{
"x": 0.11860325187444687,
"y": 0.1494310088455677
}
],
"category": "paragraph",
"id": 817,
"content": {
"text": "Subhabrata Mukherjee, Arindam Mitra, Ganesh Jawa-\nhar, Sahaj Agarwal, Hamid Palangi, and Ahmed\nAwadallah. 2023. Orca: Progressive learning from\ncomplex explanation traces of gpt-4. arXiv preprint\nJarXiv:2306.02707.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.1184282973408699,
"y": 0.16742949187755585
},
{
"x": 0.3759814575314522,
"y": 0.16742949187755585
},
{
"x": 0.3759814575314522,
"y": 0.17866524495184422
},
{
"x": 0.1184282973408699,
"y": 0.17866524495184422
}
],
"category": "paragraph",
"id": 818,
"content": {
"text": "OpenAI. 2023. Gpt-4 technical report.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11881813406944275,
"y": 0.19451776146888733
},
{
"x": 0.48790544271469116,
"y": 0.19451776146888733
},
{
"x": 0.48790544271469116,
"y": 0.2449803352355957
},
{
"x": 0.11881813406944275,
"y": 0.2449803352355957
}
],
"category": "paragraph",
"id": 819,
"content": {
"text": "Yu Pan, Ye Yuan, Yichun Yin, Zenglin Xu, Lifeng\nShang, Xin Jiang, and Qun Liu. 2023. Reusing pre-\ntrained models by multi-linear operators for efficient\ntraining. arXiv preprint arXiv:2310.10699.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11874296516180038,
"y": 0.2606479823589325
},
{
"x": 0.48737337440252304,
"y": 0.2606479823589325
},
{
"x": 0.48737337440252304,
"y": 0.2983761839568615
},
{
"x": 0.11874296516180038,
"y": 0.2983761839568615
}
],
"category": "paragraph",
"id": 820,
"content": {
"text": "Baolin Peng, Chunyuan Li, Pengcheng He, Michel Gal-\nley, and Jianfeng Gao. 2023. Instruction tuning with\ngpt-4. arXiv preprint arXiv:2304.03277.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11872205138206482,
"y": 0.31382566690444946
},
{
"x": 0.48657187819480896,
"y": 0.31382566690444946
},
{
"x": 0.48657187819480896,
"y": 0.36431286856532097
},
{
"x": 0.11872205138206482,
"y": 0.36431286856532097
}
],
"category": "paragraph",
"id": 821,
"content": {
"text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan,\nDario Amodei, Ilya Sutskever, et al. 2019. Language\nmodels are unsupervised multitask learners. OpenAI\nblog, 1(8):9.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11795755475759506,
"y": 0.3801933825016022
},
{
"x": 0.4878315255045891,
"y": 0.3801933825016022
},
{
"x": 0.4878315255045891,
"y": 0.4566226303577423
},
{
"x": 0.11795755475759506,
"y": 0.4566226303577423
}
],
"category": "paragraph",
"id": 822,
"content": {
"text": "Jack W Rae, Sebastian Borgeaud, Trevor Cai, Katie\nMillican, Jordan Hoffmann, Francis Song, John\nAslanides, Sarah Henderson, Roman Ring, Susan-\nnah Young, et al. 2021. Scaling language models:\nMethods, analysis & insights from training gopher.\narXiv preprint JarXiv:2112.11446.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.118254654109478,
"y": 0.4722003936767578
},
{
"x": 0.48732810467481613,
"y": 0.4722003936767578
},
{
"x": 0.48732810467481613,
"y": 0.5333815068006516
},
{
"x": 0.118254654109478,
"y": 0.5333815068006516
}
],
"category": "paragraph",
"id": 823,
"content": {
"text": "Rafael Rafailov, Archit Sharma, Eric Mitchell, Stefano\nErmon, Christopher D Manning, and Chelsea Finn.\n2023. Direct preference optimization: Your language\nmodel is secretly a reward model. arXiv preprint\narXiv:2305.18290.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11863946914672852,
"y": 0.5512244701385498
},
{
"x": 0.48663368821144104,
"y": 0.5512244701385498
},
{
"x": 0.48663368821144104,
"y": 0.6147750541567802
},
{
"x": 0.11863946914672852,
"y": 0.6147750541567802
}
],
"category": "paragraph",
"id": 824,
"content": {
"text": "Oscar Sainz, Jon Ander Campos, Iker GarcÃa-Ferrero,\nJulen Etxaniz, Oier Lopez de Lacalle, and Eneko\nAgirre. 2023. Nlp evaluation in trouble: On the\nneed to measure llm data contamination for each\nbenchmark. arXiv preprint arXiv: 2310.18018.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11871670931577682,
"y": 0.6303344964981079
},
{
"x": 0.4882396385073662,
"y": 0.6303344964981079
},
{
"x": 0.4882396385073662,
"y": 0.6808687373995781
},
{
"x": 0.11871670931577682,
"y": 0.6808687373995781
}
],
"category": "paragraph",
"id": 825,
"content": {
"text": "Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavat-\nula, and Yejin Choi. 2021. Winogrande: An adver-\nsarial winograd schema challenge at scale. Commu-\nnications of the ACM, 64(9):99-106.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11851850897073746,
"y": 0.6965718269348145
},
{
"x": 0.4868195131421089,
"y": 0.6965718269348145
},
{
"x": 0.4868195131421089,
"y": 0.759264275431633
},
{
"x": 0.11851850897073746,
"y": 0.759264275431633
}
],
"category": "paragraph",
"id": 826,
"content": {
"text": "Malik Sallam, Nesreen Salim, Muna Barakat, and Alaa\nAl-Tammemi. 2023. Chatgpt applications in medical,\ndental, pharmacy, and public health education: A\ndescriptive study highlighting the advantages and\nlimitations. Narra J, 3(1):e103-e103.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.1182660236954689,
"y": 0.7757644057273865
},
{
"x": 0.48706573992967606,
"y": 0.7757644057273865
},
{
"x": 0.48706573992967606,
"y": 0.8392473235726357
},
{
"x": 0.1182660236954689,
"y": 0.8392473235726357
}
],
"category": "paragraph",
"id": 827,
"content": {
"text": "Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz,\nAndy Davis, Quoc Le, Geoffrey Hinton, and Jeff\nDean. 2017. Outrageously large neural networks:\nThe sparsely-gated mixture-of-experts layer. arXiv\npreprint arXiv:1701.06538.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.11798635125160217,
"y": 0.854851245880127
},
{
"x": 0.4860951900482178,
"y": 0.854851245880127
},
{
"x": 0.4860951900482178,
"y": 0.9164477623999119
},
{
"x": 0.11798635125160217,
"y": 0.9164477623999119
}
],
"category": "paragraph",
"id": 828,
"content": {
"text": "Tianxiao Shen, Myle Ott, Michael Auli, and\nMarc' Aurelio Ranzato. 2019. Mixture models for\ndiverse machine translation: Tricks of the trade. In\nInternational conference on machine learning, pages\n5719-5728. PMLR.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5142110586166382,
"y": 0.08827012032270432
},
{
"x": 0.8822716772556305,
"y": 0.08827012032270432
},
{
"x": 0.8822716772556305,
"y": 0.14946550875902176
},
{
"x": 0.5142110586166382,
"y": 0.14946550875902176
}
],
"category": "paragraph",
"id": 829,
"content": {
"text": "Weijia Shi, Anirudh Ajith, Mengzhou Xia, Yangsibo\nHuang, Daogao Liu, Terra Blevins, Danqi Chen,\nand Luke Zettlemoyer. 2023. Detecting pretraining\ndata from large language models. arXiv preprint\narXiv:2310.16789.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.514086902141571,
"y": 0.1625424027442932
},
{
"x": 0.88322913646698,
"y": 0.1625424027442932
},
{
"x": 0.88322913646698,
"y": 0.21314184740185738
},
{
"x": 0.514086902141571,
"y": 0.21314184740185738
}
],
"category": "paragraph",
"id": 830,
"content": {
"text": "Ken Shoemake. 1985. Animating rotation with quater-\nnion curves. In Proceedings of the 12th annual con-\nference on Computer graphics and interactive tech-\nniques, pages 245-254.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5138094425201416,
"y": 0.22352558374404907
},
{
"x": 0.883263498544693,
"y": 0.22352558374404907
},
{
"x": 0.883263498544693,
"y": 0.2741251550614834
},
{
"x": 0.5138094425201416,
"y": 0.2741251550614834
}
],
"category": "paragraph",
"id": 831,
"content": {
"text": "Mingxing Tan and Quoc Le. 2019. Efficientnet: Re-\nthinking model scaling for convolutional neural net-\nworks. In International conference on machine learn-\ning, pages 6105-6114. PMLR.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5138667821884155,
"y": 0.28465738892555237
},
{
"x": 0.882964164018631,
"y": 0.28465738892555237
},
{
"x": 0.882964164018631,
"y": 0.3587808459997177
},
{
"x": 0.5138667821884155,
"y": 0.3587808459997177
}
],
"category": "paragraph",
"id": 832,
"content": {
"text": "Hugo Touvron, Louis Martin, Kevin Stone, Peter Al-\nbert, Amjad Almahairi, Yasmine Babaei, Nikolay\nBashlykov, Soumya Batra, Prajjwal Bhargava, Shruti\nBhosale, et al. 2023. Llama 2: Open founda-\ntion and fine-tuned chat models. arXiv preprint\narXiv:2307.09288.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5136011242866516,
"y": 0.3713318407535553
},
{
"x": 0.8836129903793335,
"y": 0.3713318407535553
},
{
"x": 0.8836129903793335,
"y": 0.4458765983581543
},
{
"x": 0.5136011242866516,
"y": 0.4458765983581543
}
],
"category": "paragraph",
"id": 833,
"content": {
"text": "Lewis Tunstall, Edward Beeching, Nathan Lambert,\nNazneen Rajani, Kashif Rasul, Younes Belkada,\nShengyi Huang, Leandro von Werra, Clémentine\nFourrier, Nathan Habib, et al. 2023. Zephyr: Di-\nrect distillation of lm alignment. arXiv preprint\narXiv:2310.16944.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5135813355445862,
"y": 0.4585406482219696
},
{
"x": 0.8833425343036652,
"y": 0.4585406482219696
},
{
"x": 0.8833425343036652,
"y": 0.5329826027154922
},
{
"x": 0.5135813355445862,
"y": 0.5329826027154922
}
],
"category": "paragraph",
"id": 834,
"content": {
"text": "Peihao Wang, Rameswar Panda, Lucas Torroba Hen-\nnigen, Philip Greengard, Leonid Karlinsky, Roge-\nrio Feris, David Daniel Cox, Zhangyang Wang, and\nYoon Kim. 2023. Learning to grow pretrained mod-\nels for efficient transformer training. arXiv preprint\narXiv:2303.00980.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5135125517845154,
"y": 0.5457387566566467
},
{
"x": 0.8831159174442291,
"y": 0.5457387566566467
},
{
"x": 0.8831159174442291,
"y": 0.6090365499258041
},
{
"x": 0.5135125517845154,
"y": 0.6090365499258041
}
],
"category": "paragraph",
"id": 835,
"content": {
"text": "Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Al-\nisa Liu, Noah A Smith, Daniel Khashabi, and Han-\nnaneh Hajishirzi. 2022. Self-instruct: Aligning lan-\nguage model with self generated instructions. arXiv\npreprint arXiv:2212.10560.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5132407546043396,
"y": 0.6195154190063477
},
{
"x": 0.8834303915500641,
"y": 0.6195154190063477
},
{
"x": 0.8834303915500641,
"y": 0.6811784021556377
},
{
"x": 0.5132407546043396,
"y": 0.6811784021556377
}
],
"category": "paragraph",
"id": 836,
"content": {
"text": "Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin\nGuu, Adams Wei Yu, Brian Lester, Nan Du, An-\ndrew M Dai, and Quoc V Le. 2021. Finetuned lan-\nguage models are zero-shot learners. arXiv preprint\narXiv:2109.01652.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5134361982345581,
"y": 0.693654477596283
},
{
"x": 0.8830721974372864,
"y": 0.693654477596283
},
{
"x": 0.8830721974372864,
"y": 0.757329635322094
},
{
"x": 0.5134361982345581,
"y": 0.757329635322094
}
],
"category": "paragraph",
"id": 837,
"content": {
"text": "Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel,\nBarret Zoph, Sebastian Borgeaud, Dani Yogatama,\nMaarten Bosma, Denny Zhou, Donald Metzler, et al.\n2022a. Emergent abilities of large language models.\narXiv preprint rXiv:2206.07682.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5133083462715149,
"y": 0.7679065465927124
},
{
"x": 0.8832537829875946,
"y": 0.7679065465927124
},
{
"x": 0.8832537829875946,
"y": 0.8312373161315918
},
{
"x": 0.5133083462715149,
"y": 0.8312373161315918
}
],
"category": "paragraph",
"id": 838,
"content": {
"text": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten\nBosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou,\net al. 2022b. Chain-of-thought prompting elicits rea-\nsoning in large language models. Advances in Neural\nInformation Processing Systems, 35:24824-24837.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.5135031342506409,
"y": 0.8420345187187195
},
{
"x": 0.8833158910274506,
"y": 0.8420345187187195
},
{
"x": 0.8833158910274506,
"y": 0.9163724258542061
},
{
"x": 0.5135031342506409,
"y": 0.9163724258542061
}
],
"category": "paragraph",
"id": 839,
"content": {
"text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien\nChaumond, Clement Delangue, Anthony Moi, Pier-\nric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz,\net al. 2019. Huggingface's transformers: State-of-\nthe-art natural language processing. arXiv preprint\narXiv: 1910.03771.\n",
"html": "",
"markdown": ""
}
}
]
},
"01030000000167.pdf": {
"elements": [
{
"coordinates": [
{
"x": 0.09167000651359558,
"y": 0.07525166124105453
},
{
"x": 0.9069871008396149,
"y": 0.07525166124105453
},
{
"x": 0.9069871008396149,
"y": 0.14126047492027283
},
{
"x": 0.09167000651359558,
"y": 0.14126047492027283
}
],
"category": "paragraph",
"id": 580,
"content": {
"text": "The acidic cations adsorbed on the negative exchange sites are called the reserve (also residual or potential) and salt-\nreplaceable (also exchangeable) acidity. The reserve and salt-replaceable acidity controls the level of soluble or active\nacidity in the soil solution. Only the active acidity is measured in a routine pH determination. The reserve and salt-\nreplaceable acidity is always many times higher than the active acidity.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.09178180247545242,
"y": 0.15724565088748932
},
{
"x": 0.9069788530468941,
"y": 0.15724565088748932
},
{
"x": 0.9069788530468941,
"y": 0.2027123011648655
},
{
"x": 0.09178180247545242,
"y": 0.2027123011648655
}
],
"category": "paragraph",
"id": 581,
"content": {
"text": "A soil is acid when hydrogen ions predominate in the soil. The degree of acidity is expressed in terms of pH, which is\ndefined as the negative logarithm of the hydrogen ion activity. Therefore, the pH of a 0.01-molar hydrogen ion solution\nis\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.09162614494562149,
"y": 0.2762143611907959
},
{
"x": 0.9070389047265053,
"y": 0.2762143611907959
},
{
"x": 0.9070389047265053,
"y": 0.35965777188539505
},
{
"x": 0.09162614494562149,
"y": 0.35965777188539505
}
],
"category": "paragraph",
"id": 582,
"content": {
"text": "At pH 7, the concentration of H+ ions and OH- ions are equal, and the soil or solution is neutral. At pH values less than 7,\nthe soil is acid; at values more than 7, the soil is alkaline. Most soils vary in pH from about 4 to 10. Soils in areas with high\nrainfall are generally acid with a pH less than 7. Soils developed in high-lime deposits often will be alkaline. Soils high in\ncalcium seldom have pH values higher than 7.5, but the presence of large amounts of calcium carbonate may cause the\npH to be as high as 8.5. Where the pH is higher than 8.5, an excess of sodium is highly probable.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.0916467234492302,
"y": 0.37588512897491455
},
{
"x": 0.9073165580630302,
"y": 0.37588512897491455
},
{
"x": 0.9073165580630302,
"y": 0.40636054426431656
},
{
"x": 0.0916467234492302,
"y": 0.40636054426431656
}
],
"category": "paragraph",
"id": 583,
"content": {
"text": "The most desirable soil pH for most crops in Kansas is 6.8. However, crops like blueberries need a lower pH, and other\ncrops, like alfalfa, need a higher pH. At soil pH less than 5.8, several problems may occur:\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10279936343431473,
"y": 0.4276825189590454
},
{
"x": 0.24619170278310776,
"y": 0.4276825189590454
},
{
"x": 0.24619170278310776,
"y": 0.43992909882217646
},
{
"x": 0.10279936343431473,
"y": 0.43992909882217646
}
],
"category": "paragraph",
"id": 585,
"content": {
"text": "Al and Mn toxicity\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10324607789516449,
"y": 0.4455149471759796
},
{
"x": 0.374046191573143,
"y": 0.4455149471759796
},
{
"x": 0.374046191573143,
"y": 0.4581199698150158
},
{
"x": 0.10324607789516449,
"y": 0.4581199698150158
}
],
"category": "paragraph",
"id": 586,
"content": {
"text": "Inhibited growth of N-fixing bacteria\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10288283228874207,
"y": 0.4630115330219269
},
{
"x": 0.3843258321285248,
"y": 0.4630115330219269
},
{
"x": 0.3843258321285248,
"y": 0.47546359058469534
},
{
"x": 0.10288283228874207,
"y": 0.47546359058469534
}
],
"category": "paragraph",
"id": 587,
"content": {
"text": "Possible deficiencies in Mg and/or Ca.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10316724330186844,
"y": 0.48046138882637024
},
{
"x": 0.372712142765522,
"y": 0.48046138882637024
},
{
"x": 0.372712142765522,
"y": 0.4932401105761528
},
{
"x": 0.10316724330186844,
"y": 0.4932401105761528
}
],
"category": "paragraph",
"id": 588,
"content": {
"text": "P deficiency (P reacts with Fe and Al)\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10291241854429245,
"y": 0.4985024631023407
},
{
"x": 0.44859739392995834,
"y": 0.4985024631023407
},
{
"x": 0.44859739392995834,
"y": 0.5111479219049215
},
{
"x": 0.10291241854429245,
"y": 0.5111479219049215
}
],
"category": "paragraph",
"id": 589,
"content": {
"text": "At more than pH 7.5, other problems may occur:\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10303502529859543,
"y": 0.5161550641059875
},
{
"x": 0.33163078874349594,
"y": 0.5161550641059875
},
{
"x": 0.33163078874349594,
"y": 0.5285903662443161
},
{
"x": 0.10303502529859543,
"y": 0.5285903662443161
}
],
"category": "paragraph",
"id": 590,
"content": {
"text": "Deficiency of Fe, Mn, Cu, or Zn\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.10315758734941483,
"y": 0.5336974263191223
},
{
"x": 0.328160859644413,
"y": 0.5336974263191223
},
{
"x": 0.328160859644413,
"y": 0.5460291849449277
},
{
"x": 0.10315758734941483,
"y": 0.5460291849449277
}
],
"category": "paragraph",
"id": 591,
"content": {
"text": "P deficiency (P reacts with Ca)\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.09321118891239166,
"y": 0.5889560580253601
},
{
"x": 0.26664139330387115,
"y": 0.5889560580253601
},
{
"x": 0.26664139330387115,
"y": 0.6084658429026604
},
{
"x": 0.09321118891239166,
"y": 0.6084658429026604
}
],
"category": "heading1",
"id": 592,
"content": {
"text": "Buffering Capacity\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.09191905707120895,
"y": 0.6378293633460999
},
{
"x": 0.9077371880412102,
"y": 0.6378293633460999
},
{
"x": 0.9077371880412102,
"y": 0.7569837048649788
},
{
"x": 0.09191905707120895,
"y": 0.7569837048649788
}
],
"category": "paragraph",
"id": 593,
"content": {
"text": "Buffering capacity is a measure of the soil's ability to resist a change in pH, directly related to the magnitude of the\nexchange capacity. Small fluctuations in acid or base content can occur without a noticeable pH change as cations are\nadsorbed or released from the exchange complex. Soils with the largest cation exchange capacity have the greatest\nbuffering of a pH change. In other words, two soils may have the same pH (active acidity in soil solution), but the one\nwith the largest cation exchange capacity will have the most acidity stored in reserve and therefore the highest buffering\ncapacity or ability to resist a change in pH. For this reason, it takes less lime to increase the pH of a sandy soil (low CEC)\nby a given amount than it takes to increase the pH of a clay soil (higher CEC) the same amount.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.09325503557920456,
"y": 0.7997015714645386
},
{
"x": 0.30177778750658035,
"y": 0.7997015714645386
},
{
"x": 0.30177778750658035,
"y": 0.8189307693392038
},
{
"x": 0.09325503557920456,
"y": 0.8189307693392038
}
],
"category": "heading1",
"id": 594,
"content": {
"text": "Sources of Soil Acidity\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.0920228436589241,
"y": 0.8485241532325745
},
{
"x": 0.907279796898365,
"y": 0.8485241532325745
},
{
"x": 0.907279796898365,
"y": 0.9141630455851555
},
{
"x": 0.0920228436589241,
"y": 0.9141630455851555
}
],
"category": "paragraph",
"id": 595,
"content": {
"text": "Controlling soil pH is vital to optimal use and productivity of soils. Adding lime is the most effective and practical way\nto raise the pH of acid soils. Elemental sulfur, iron sulfate, or aluminum sulfate can be used to reduce soil pH. Because\nacidity is a concern in Kansas, we will focus on raising soil pH. Understanding the following equations should help you\nunderstand the sources of soil acidity and soil reactions to lime.\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.0922820121049881,
"y": 0.9418616890907288
},
{
"x": 0.11065422743558884,
"y": 0.9418616890907288
},
{
"x": 0.11065422743558884,
"y": 0.949699486605823
},
{
"x": 0.0922820121049881,
"y": 0.949699486605823
}
],
"category": "paragraph",
"id": 596,
"content": {
"text": "124\n",
"html": "",
"markdown": ""
}
},
{
"coordinates": [
{
"x": 0.12750239670276642,
"y": 0.9414671659469604
},
{
"x": 0.3284365087747574,
"y": 0.9414671659469604
},
{
"x": 0.3284365087747574,
"y": 0.9525658572092652
},
{
"x": 0.12750239670276642,
"y": 0.9525658572092652
}
],
"category": "footer",
"id": 597,
"content": {
"text": "Soil Acidity and Adjusting Soil pH\n",
"html": "",
"markdown": ""
}
}
]
},
"01030000000018.pdf": {
"elements": [
{
"coordinates": [
{
"x": 0.12422121316194534,
"y": 0.3154641091823578
},
{
"x": 0.8487181141972542,
"y": 0.3154641091823578
},
{
"x": 0.8487181141972542,
"y": 0.8909057080745697
},
{
"x": 0.12422121316194534,
"y": 0.8909057080745697
}
],
"category": "table",
"id": 203,
"content": {
"text": "",
"html": "