[SOURCE: https://www.fast.ai/posts/2022-10-19-part2-2022-preview.html] | [TOKENS: 758]
1st Two Lessons of From Deep Learning Foundations to Stable Diffusion Jeremy Howard October 19, 2022

The University of Queensland has opened the course to late registrations; if you want to join the rest of the course live, register here.

We started teaching our new course, From Deep Learning Foundations to Stable Diffusion, a couple of weeks ago. The experience of developing and teaching these first lessons has been amazing. Course contributors have included some brilliant folks from Hugging Face, Stability.ai, and fast.ai, and we’ve already had inspirational contributions from amazing people from DeOldify, Lambda Labs, and more. Some important new papers have come out in the last two weeks, and we’ve covered them already in the course. Because this field is moving so quickly, and there’s so much interest, we’ve decided to release our first two lessons early. In fact, we’re releasing them right now!

In total, we’re releasing four videos, with around 5.5 hours of content, covering the following topics (the lesson numbers start at “9”, since this is a continuation of Practical Deep Learning for Coders part 1, which had 8 lessons). These videos will make the most sense if you’ve already completed part 1 of the course, or already have some experience with training and deploying deep learning models (preferably in PyTorch).

Lesson 9—Pipelines and concepts

This lesson starts with a tutorial on how to use pipelines in the Diffusers library to generate images. Diffusers is (in our opinion!) the best library available at the moment for image generation. It has many features and is very flexible. We explain how to use its many features, and discuss options for accessing the GPU resources needed to use the library.
We talk about some of the nifty tweaks available when using Stable Diffusion in Diffusers, and show how to use them: guidance scale (for varying how strongly the prompt is applied), negative prompts (for removing concepts from an image), image initialisation (for starting with an existing image), textual inversion (for adding your own concepts to generated images), and Dreambooth (an alternative approach to textual inversion). The second half of the lesson covers the key concepts involved in Stable Diffusion. You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic.

Lesson 9A—Deep dive

In this video, Jonathan Whitaker shows us what is happening behind the scenes when we create an image with Stable Diffusion, looking at the different components and processes and how each can be modified for further control over the generation process. He shows how to replicate the sampling loop from scratch, and explains each of the steps involved in more detail.

Lesson 9B—Math of diffusion

Wasim Lorgat and Tanishq Abraham walk through the math of diffusion models from the ground up, assuming no prerequisite knowledge beyond what you covered in high school.

Lesson 10—Custom pipeline

This lesson creates a complete Diffusers pipeline from the underlying components: the VAE, U-Net, scheduler, and tokeniser. Putting them together manually gives you the flexibility to fully customise every aspect of the inference process. We also discuss three important new papers that have been released in the last week, which improve inference performance by over 10x, and allow any photo to be “edited” by just describing what the new picture should show. The second half of the lesson begins the “from the foundations” stage of the course, developing a basic matrix class and random number generator from scratch, as well as discussing the use of iterators in Python.
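The guidance scale tweak mentioned above works via classifier-free guidance: the model makes two noise predictions per step, one conditioned on the prompt and one unconditional, and blends them so that a larger scale pushes the result further toward the prompt. A minimal sketch of that blend (the function name and plain-list inputs are illustrative; a real pipeline applies the same formula to tensors):

```python
def guided_noise(uncond_pred, cond_pred, guidance_scale):
    """Classifier-free guidance: blend the unconditional and
    prompt-conditioned noise predictions elementwise.
    guidance_scale == 1 reproduces the conditional prediction;
    larger values amplify the prompt's influence."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(uncond_pred, cond_pred)]

# With a scale of 7.5 (a common default), the blend overshoots
# the conditional prediction in the direction away from the
# unconditional one.
blended = guided_noise([0.0, 0.0], [1.0, 1.0], 7.5)
```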
You can discuss this lesson, and access links to all notebooks and resources from it, at this forum topic.
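The from-scratch random number generator in Lesson 10 can be as simple as a linear congruential generator. This sketch uses the common Numerical Recipes constants (an assumption on my part, not necessarily the parameterisation the lesson uses) and also illustrates the Python iterator protocol discussed in the lesson:

```python
class LCG:
    """A minimal linear congruential generator:
    x_{n+1} = (a * x_n + c) mod m, with 32-bit modulus."""

    def __init__(self, seed=42):
        self.state = seed

    def next_int(self):
        # Constants from the Numerical Recipes LCG (assumed here).
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

    def next_float(self):
        # Scale the 32-bit integer into [0, 1).
        return self.next_int() / 2**32

    def __iter__(self):
        # Making the generator iterable lets it plug into
        # ordinary Python iteration tools.
        while True:
            yield self.next_float()
```

Because the sequence is fully determined by the seed, two generators started from the same seed produce identical streams, which is handy for reproducible experiments.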
========================================
[SOURCE: https://www.fast.ai/posts/2022-09-23-part2-2022-signup.html] | [TOKENS: 643]
Deep Learning Foundations Signup, Open Source Scholarships, & More Jeremy Howard September 23, 2022

Signups might not be available any more. If you are able to do a late signup, note that you may miss doing the first lesson live (although you can watch the recording).

Signups now open

Last week we announced our new course, From Deep Learning Foundations to Stable Diffusion, which is part 2 of our Practical Deep Learning for Coders series. Part 1 of the course, which is available now, is a free course designed for people with some coding experience who want to learn how to apply deep learning and machine learning to practical problems. Part 2 will be an in-depth course that starts from the foundations—implementing and GPU-optimising matrix multiplications and initialisations—and goes all the way through to implementing the astounding Stable Diffusion algorithm (and many other papers along the way).

Signups for the new part 2 course are now available. Lessons will run weekly from Oct 11th, on Tuesdays 6:00–8:00 PM AEST (UTC/GMT +10; note that that’s Mondays US time). You can either watch the lesson live (and ask questions and chat live with other participants), or you can watch the recording, which is available immediately after the lesson. All participants get access to the course forum, where there will be a bustling community of students like you, along with experienced alumni helping answer your questions.

Note that, although 8 lessons are listed on the signup page, this is just our best guess, since we’ve never run this course before and we don’t know exactly how long it will take. We also don’t know what new papers might come out during the course which we may want to cover. It will certainly take at least 7 lessons, but it might take 8 or more. If you can’t commit to attending all of them live, that’s no problem – you can always watch the recordings later at any time.
(Oh, and also: ignore the “Learning Outcomes” section of the signup page – it looks like the university just copied that from the part 1 course!)

To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you’ve finished part 1 of our course, then you’ll be ready! (If you don’t have much background in deep learning, but are a confident coder and have some familiarity with other machine learning approaches, you should be OK to do part 2, given some patience and tenacity—I’ll provide an introduction to every topic we encounter on the way.)

Scholarships

We’ll provide free access to the live course to the following groups upon request (to get access to the course if you qualify, please complete this form). We’ll also provide free access to the following groups, which have already been automatically added to the course and should have received a notification (to check, log in to the forums and see if you see “Part 2 2022”).

Free access from 2023

In early 2023 we will be opening up free access to the complete recorded course to everyone.
========================================
[SOURCE: https://www.fast.ai/posts/2022-09-16-part2-2022.html] | [TOKENS: 781]
From Deep Learning Foundations to Stable Diffusion Jeremy Howard September 16, 2022

Three years ago we pioneered Deep Learning from the Foundations, an in-depth course that started right from the foundations—implementing and GPU-optimising matrix multiplications and initialisations—and covered from-scratch implementations of all the key applications of the fastai library. This year, we’re going “from the foundations” again, but this time we’re going further. Much further! This time, we’re going all the way through to implementing the astounding Stable Diffusion algorithm. That’s the killer app that made the internet freak out, and caused the media to say “you may never believe what you see online again”.

Stable Diffusion, and diffusion methods in general, are a great learning goal for many reasons. For one thing, of course, you can create amazing stuff with these algorithms! To take the technique to the next level, and create things that no-one has seen before, you need to deeply understand what’s happening under the hood. With this understanding, you can craft your own loss functions, initialization methods, multi-model mixups, and more, to create totally new applications that have never been seen before.

Just as important: it’s a great learning goal because nearly every key technique in modern deep learning comes together in these methods. Contrastive learning, transformer models, auto-encoders, CLIP embeddings, latent variables, U-Nets, ResNets, and much more are involved in creating a single image.

This is all cutting-edge stuff, so to ensure we bring the latest techniques to you, we’re teaming up with the folks that brought Stable Diffusion to the world: stability.ai. stability.ai are, in many ways, kindred spirits to fast.ai. They are, like us, a self-funded research lab. And like us, their focus is smashing down any gates that make cutting-edge AI less accessible.
So it makes sense for us to team up on this audacious goal of teaching Stable Diffusion from the foundations. The course will be available for free online from early 2023. But if you want to join the course right as it’s made, along with hundreds of the world’s leading deep learning practitioners, you can register to join the virtual live course through our official academic partner, the University of Queensland (UQ). UQ will have registrations open in the next few days, so keep an eye on the link above.

During the live course, we’ll be learning to read and implement the latest papers, with lots of opportunity to practice and get feedback. Many past participants in fast.ai’s live courses have described them as a “life changing” experience… and it’s our sincere hope that this course will be our best ever.

To get the most out of this course, you should be a reasonably confident deep learning practitioner. If you’ve finished fast.ai’s Practical Deep Learning course, then you’ll be ready! If you haven’t done that course, but are comfortable with building an SGD training loop from scratch in Python, being competitive in Kaggle competitions, using modern NLP and computer vision algorithms for practical problems, and working with PyTorch and fastai, then you will be ready to start the course. (If you’re not sure, then I strongly recommend getting started with Practical Deep Learning now—if you push, you’ll be done before the new course starts!)

If you’re an alumnus of Deep Learning for Coders, you’ll know that course sets you up to be an effective deep learning practitioner. This new course will take you to the next level: creating novel applications that bring multiple techniques together, and understanding and implementing research papers. Alumni of previous versions of fast.ai’s “part 2” courses have even gone on to publish deep learning papers in top conferences and journals, and have joined highly regarded research labs and startups.
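The course starts right from implementing matrix multiplication before GPU-optimising it. A minimal, deliberately unoptimised sketch of the starting point (pure Python, no libraries; the function name is illustrative):

```python
def matmul(a, b):
    """Naive triple-loop matrix multiply on lists of lists:
    c[i][j] = sum over k of a[i][k] * b[k][j]."""
    n, m, p = len(a), len(b), len(b[0])
    assert len(a[0]) == m, "inner dimensions must match"
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            for k in range(m):
                c[i][j] += a[i][k] * b[k][j]
    return c
```

The point of starting here is that every later optimisation (broadcasting, vectorisation, GPU kernels) can be checked against this obviously-correct baseline.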
========================================
[SOURCE: https://www.fast.ai/posts/2022-09-06-homeschooling.html] | [TOKENS: 2029]
My family’s unlikely homeschooling journey Rachel Thomas September 6, 2022 On this page My husband Jeremy and I never intended to homeschool, and yet we have now, unexpectedly, committed to homeschooling long-term. Prior to the pandemic, we both worked full-time in careers that we loved and found meaningful, and we sent our daughter to a full-day Montessori school. Although I struggled with significant health issues, I felt unbelievably lucky and fulfilled in both my family life and my professional life. The pandemic upended my careful balance. Every family is different, with different needs, circumstances, and constraints, and what works for one may not work for others. My intention here is primarily to share the journey of my own (very privileged) family. Our unplanned introduction to homeschooling For the first year of the pandemic, most schools in California, where we lived at the time, were closed. Like countless other families, we were unexpectedly thrust into the world of virtual-school and home-school. We ended up participating in an innovative online program that did NOT try to replicate in-person school. A few key differences: From August 2020 - March 2021, our daughter was with a small group online, where daily she would spend 1 hour on socio-emotional development (including games, getting to know each other, and discussing feelings), 1 hour on reading, and 1 hour on math. For reading and math, the children each worked at their own pace through engaging games, and could ask the teacher and each other questions whenever they needed help. At the end of these 8 months, our daughter, along with several other kids in her small group, was several years beyond her age level in both math and reading. It had never been our goal for her to end up accelerated; Jeremy and I were mostly trying to keep her happy and busy for a few hours so we could get some of our own work done. 
She also had fun and made close friends, who she continues to have video calls and Minecraft virtual playdates with regularly. Our unconventional views Although there are plenty of ways to homeschool that don’t involve any screens or technology, Jeremy and I have made use of online tutors, long-distance friendships, educational apps, videos, and web-based games, as key parts of our approach. One thing that helped us going into the pandemic is that we have never treated online/long-distance relationships as inferior to in-person relationships. We both have meaningful friendships that occur primarily, or even entirely, through phone calls, video chats, texts, and online interactions. I have made several big moves since I graduated from high school (moving from Texas to Pennsylvania to North Carolina back to Pennsylvania again and then to California) and I was used to family and close friends being long distance. We live far from our families, and our daughter was already accustomed to chatting with her grandparents on both sides via video calls. My daughter’s best friend is now a child she has never met in person, but has been skyping with almost daily for the last 2 years. Another thing that made this transition easier is that Jeremy and I have never been anti-screen time. In fact, we don’t consider “screen time” a useful category, since a child passively watching a movie alone is different than skyping with their grandparent is different than playing an educational game interactively with their parent beside them. While we almost never let our daughter do things passively and alone with screens, we enjoy relational and educational screen time. Furthermore, we focus on including other positive life elements (e.g. making sure she is getting enough time outside, being physically active, reading, getting enough sleep, etc) rather than emphasising limits. 
A return to in-person school In 2021, our family immigrated from the USA to my husband’s home country of Australia, and we enrolled our daughter at an in-person school, which she attended from late April - early Dec 2021. Our state had closed borders and almost no cases of covid transmission during this time. By all measures, the school she attended is great: friendly families, kind staff, and a fun performing arts program. While our daughter adjusted quickly to the new environment and made friends, she was quite bored. She lost her previous curiosity and enthusiasm, became more passive, and started to spend a lot of time zoning out. The school tried to accommodate her, letting her join an older grade level for math each day. While the material was initially new, she still found the pace too slow. She started to get very upset at home practising piano or playing chess (activities she previously loved, but where mistakes are inevitable), because she had grown accustomed to getting everything right without trying. At one point, all schools in our region closed during an 8-day snap lockdown. Our daughter was disappointed when the lockdown ended and she had to return to school. When homeschooling works well (and when it doesn’t) Over the summer holidays (Dec 2021-Jan 2022), our state pivoted from zero covid to promoting mass infection as “necessary”. We pulled our daughter out of school, initially intending that it would just be a temporary measure until her age group could be fully vaccinated (vaccine rollout was later in Australia than in the USA). However, we immediately saw positive changes, with her regaining her old curiosity, enthusiasm, and proactive nature, all of which she had lost being in school. Her perfectionism disappeared and she began to enjoy challenges again. We supplemented her online classes with in-person playdates, extracurriculars, and sports (due to covid risks, we wear masks and stay outdoors for all of these). 
We are fortunate to live in a beautiful part of the world, where we can spend most of the year outside. We enjoy visiting the beaches, forests, and parks in our region. Our daughter is happy: playing Minecraft with friends online, learning tennis with other local children, riding bikes as a family, spending hours absorbed in books of her own choosing, enjoying piano and chess again, running around in nature, and learning at her own pace. Homeschooled kids typically score 15 to 30 percentile points above public-school students on standardised academic achievement tests, and 87% of studies on social development “showed clearly positive outcomes for the homeschooled compared to those in conventional schools”. However, it is understandable that many children had negative experiences with virtual learning in the past 2 years, given that programs were often hastily thrown together with inadequate resources and inappropriately structured to try to mimic in-person school, against the stressful backdrop of a global pandemic. Many parents faced the impossible task of simultaneously needing to work full-time and help their children full-time (and many other parents did not even have the option to stay home). Every family is different, and virtual learning or homeschooling will not suit everyone. There are children who need in-person services only offered within schools; parents whose work constraints don’t allow for it; and kids who thrive being with tons of other kids. Despite the difficulty of the pandemic, there are a variety of families who found that virtual or homeschooling was better for their particular kids. Some parents have shared about children with ADHD who found in-person school too distracting; children who were facing bullying or violence at school; kids who couldn’t get enough sleep on a traditional school schedule; Black and Latino families whose cultural heritages were not being reflected in curriculums. 
I enjoyed these articles featuring a few such families: Covid Risks I have had brain surgery twice, was hospitalised in the ICU with a life-threatening brain infection, and have a number of chronic health issues. I am both at higher risk for negative outcomes from covid AND acutely aware of how losing your health can destroy your life. It is lonely and difficult being high-risk in a society that has given up on protecting others. While I am nervous about the long-term impact that homeschooling will have on my career (on top of how my existing health issues already hinder it), acquiring additional disabilities would be far, far worse. I have been disturbed to follow the ever-accumulating research on cardiovascular, neurological, and immune system harms that can be caused by covid, even in previously healthy people, even in the vaccinated, and even in children. While vaccines significantly reduce risk of death, unfortunately they provide only a limited reduction in Long Covid risk. Immunity wanes, and people face cumulative risks with each new covid infection (so even if you’ve had covid once or twice, it is best to try to avoid reinfections). I am alarmed that leaders are encouraging mass, repeated infections of a generation of children. Given all this, I am relieved that our decision to continue homeschooling was relatively clear. It much better suits our daughter’s needs AND drastically reduces our family’s covid risk. We can nurture her innate curiosity, protect her intrinsic motivation, and provide in-person social options that are entirely outdoors and safer than being indoors all day at school. Most families are not so fortunate and many face difficult choices, with no good options. The Broader Picture I believe that high-quality, equitable, and safe public education is important for a healthy democracy, and I worry about the various ongoing ways in which education is being undermined and attacked. 
Furthermore, due to a lack of covid protections in communities, high-risk children and children with high-risk families are being shut out of in-person school options in the USA, Australia, and many other places. While the workplaces of politicians and a handful of schools in ultra-wealthy areas installed expensive ventilation upgrades, the majority of schools in the USA and Australia have not had any ventilation upgrades, nor received air purifiers. All children deserve access to an education that is safe, fits their needs, and will allow them to thrive. Even when homeschooling does work, it is often still just an individual solution to systemic problems. Related Posts A few other posts that you may be interested in, related to my views on education and teaching:
========================================
[SOURCE: https://www.bbc.com/weather] | [TOKENS: 275]
BBC Weather Search for a location Weather forecasts for thousands of locations around the world World map with selected cities Forecast for the Middle East and Africa Features Break from wintry weather as UK temperatures to climb as high as 14C There's a major change in the UK's weather pattern heading our way with a big lift in temperatures in time for the weekend. More than 90 deaths this season: Are we seeing more avalanches? Recent deadly incidents in California and Europe are putting avalanches - and how to avoid them - in the spotlight. 'Best ski season in years' on Scotland's snowy hills Some of Scotland's mountain resorts say it has been the best winter snowsports season in six years. Cornish village endures 50 consecutive days of rain The exceptionally wet start to 2026 has seen the Cornish village of Cardinham confirmed as recording 50 continuous days of rain as Matt Taylor explains. Weather for the week ahead Turning milder over the weekend, will these conditions last into next week? Ben Rich takes a look. BBC Weather in association with MeteoGroup. All times are Greenwich Mean Time (Europe/London, GMT) unless otherwise stated.
========================================
[SOURCE: https://en.wikipedia.org/wiki/Language_model#cite_note-28] | [TOKENS: 1793]
Contents Language model A language model is a computational model that predicts sequences in natural language. Language models are useful for a variety of tasks, including speech recognition, machine translation, natural language generation (generating more human-like text), optical character recognition, route optimization, handwriting recognition, grammar induction, and information retrieval. Large language models (LLMs), their most advanced form as of 2019, are predominantly based on transformers trained on larger datasets (frequently using texts scraped from the public internet). They have superseded recurrent neural network-based models, which had previously superseded the purely statistical models, such as the word n-gram language model. History Noam Chomsky did pioneering work on language models in the 1950s by developing a theory of formal grammars. In 1980, statistical approaches were explored and found to be more useful for many purposes than rule-based formal grammars. Discrete representations like word n-gram language models, with probabilities for discrete combinations of words, made significant advances. In the 2000s, continuous representations for words, such as word embeddings, began to replace discrete representations. Typically, the representation is a real-valued vector that encodes a word's meaning such that words closer in vector space are similar in meaning, and common relationships between words, such as plurality or gender, are preserved. Pure statistical models In 1980, the first significant statistical language model was proposed, and during the decade IBM performed 'Shannon-style' experiments, in which potential sources for language modeling improvement were identified by observing and analyzing the performance of human subjects in predicting or correcting text.
A word n-gram language model is a statistical model of language which calculates the probability of the next word in a sequence from a fixed-size window of previous words. If one previous word is considered, it is a bigram model; if two words, a trigram model; if n − 1 words, an n-gram model. Special tokens, ⟨s⟩ and ⟨/s⟩, are introduced to denote the start and end of a sentence. To prevent a zero probability being assigned to unseen words, the probability of each seen word is slightly lowered to make room for unseen words in a given corpus. To achieve this, various smoothing methods are used, from simple "add-one" smoothing (assigning a count of 1 to unseen n-grams, as an uninformative prior) to more sophisticated techniques, such as Good–Turing discounting or back-off models. Word n-gram models have largely been superseded by recurrent neural network–based models, which in turn have been superseded by transformer-based models often referred to as large language models. Maximum entropy language models encode the relationship between a word and the n-gram history using feature functions. The equation is

P(w_m | w_1, …, w_{m−1}) = (1 / Z(w_1, …, w_{m−1})) exp(aᵀ f(w_1, …, w_m))

where Z(w_1, …, w_{m−1}) is the partition function, a is the parameter vector, and f(w_1, …, w_m) is the feature function. In the simplest case, the feature function is just an indicator of the presence of a certain n-gram. It is helpful to use a prior on a or some form of regularization. The log-bilinear model is another example of an exponential language model.
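The add-one smoothing scheme described above can be sketched as a minimal bigram model. The ⟨s⟩/⟨/s⟩ boundary padding follows the article; the toy corpus and function names are illustrative, not from any particular library:

```python
from collections import Counter

def train_bigram(corpus, vocab):
    """Train a bigram model with add-one (Laplace) smoothing.
    Sentences are padded with <s> and </s> boundary tokens."""
    bigrams, unigrams = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks[:-1])                # history counts
        bigrams.update(zip(toks[:-1], toks[1:]))  # (prev, next) counts
    V = len(vocab) + 1                            # vocabulary plus </s>

    def prob(prev, word):
        # Every bigram, seen or unseen, gets a pseudo-count of 1,
        # which slightly lowers the probability of seen words to
        # make room for the unseen ones.
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + V)

    return prob

corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = {"the", "cat", "dog", "sat"}
p = train_bigram(corpus, vocab)
# "the cat" was seen once, "the bird" never, yet both get non-zero mass
assert p("the", "cat") > p("the", "bird") > 0
```

Because each history's counts are renormalized by V, the smoothed probabilities still sum to one over the vocabulary plus the ⟨/s⟩ token, so the model remains a valid distribution.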
The skip-gram language model is an attempt at overcoming the data sparsity problem that the preceding model (i.e. the word n-gram language model) faced. Words represented in an embedding vector were not necessarily consecutive anymore, but could leave gaps that are skipped over (thus the name "skip-gram"). Formally, a k-skip-n-gram is a length-n subsequence where the components occur at distance at most k from each other. For example, in the input text: the set of 1-skip-2-grams includes all the bigrams (2-grams), and in addition the subsequences. In the skip-gram model, semantic relations between words are represented by linear combinations, capturing a form of compositionality. For example, in some such models, if v is the function that maps a word w to its n-dimensional vector representation, then

v(king) − v(male) + v(female) ≈ v(queen)

where ≈ is made precise by stipulating that its right-hand side must be the nearest neighbor of the value of the left-hand side. Neural models Continuous representations or embeddings of words are produced in recurrent neural network-based language models (known also as continuous space language models). Such continuous space embeddings help to alleviate the curse of dimensionality, which is the consequence of the number of possible sequences of words increasing exponentially with the size of the vocabulary, further causing a data sparsity problem. Neural networks avoid this problem by representing words as non-linear combinations of weights in a neural net. A large language model (LLM) is a language model trained with self-supervised machine learning on a vast amount of text, designed for natural language processing tasks, especially language generation. The largest and most capable LLMs are generative pre-trained transformers (GPTs) that provide the core capabilities of modern chatbots.
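The k-skip-n-gram definition above (components at distance at most k from each other) can be sketched directly. The example sentence here is illustrative, standing in for the article's elided input text:

```python
from itertools import combinations

def k_skip_n_grams(tokens, n, k):
    """All length-n subsequences of tokens in which adjacent chosen
    positions skip at most k intervening tokens."""
    grams = set()
    for idxs in combinations(range(len(tokens)), n):
        # Adjacent chosen indices may differ by at most k + 1,
        # i.e. at most k tokens are skipped between components.
        if all(j - i <= k + 1 for i, j in zip(idxs, idxs[1:])):
            grams.add(tuple(tokens[i] for i in idxs))
    return grams

tokens = "the rain in Spain".split()
# 1-skip-2-grams: all ordinary bigrams plus pairs skipping one word
grams = k_skip_n_grams(tokens, n=2, k=1)
assert ("the", "in") in grams          # skips "rain"
assert ("the", "Spain") not in grams   # would skip two words
```

With k = 0 this reduces to ordinary n-grams, which matches the claim that the set of 1-skip-2-grams includes all the bigrams.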
LLMs can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on. They consist of billions to trillions of parameters and operate as general-purpose sequence models, generating, summarizing, translating, and reasoning over text. LLMs represent a significant new technology in their ability to generalize across tasks with minimal task-specific supervision, enabling capabilities like conversational agents, code generation, knowledge retrieval, and automated reasoning that previously required bespoke systems. LLMs evolved from earlier statistical and recurrent neural network approaches to language modeling. The transformer architecture, introduced in 2017, replaced recurrence with self-attention, allowing efficient parallelization, longer context handling, and scalable training on unprecedented data volumes. This innovation enabled models like GPT, BERT, and their successors, which demonstrated emergent behaviors at scale, such as few-shot learning and compositional reasoning. Reinforcement learning, particularly policy gradient algorithms, has been adapted to fine-tune LLMs for desired behaviors beyond raw next-token prediction. Reinforcement learning from human feedback (RLHF) applies these methods to optimize a policy, the LLM's output distribution, against reward signals derived from human or automated preference judgments. This has been critical for aligning model outputs with user expectations, improving factuality, reducing harmful responses, and enhancing task performance. Benchmark evaluations for LLMs have evolved from narrow linguistic assessments toward comprehensive, multi-task evaluations measuring reasoning, factual accuracy, alignment, and safety. 
Hill climbing, iteratively optimizing models against benchmarks, has emerged as a dominant strategy, producing rapid incremental performance gains but raising concerns of overfitting to benchmarks rather than achieving genuine generalization or robust capability improvements. Although neural language models sometimes match human performance, it is not clear whether they are plausible cognitive models. At least for recurrent neural networks, it has been shown that they sometimes learn patterns that humans do not, but fail to learn patterns that humans typically do. Evaluation and benchmarks Evaluation of the quality of language models is mostly done by comparison to human-created sample benchmarks built from typical language-oriented tasks. Other, less established, quality tests examine the intrinsic character of a language model or compare two such models. Since language models are typically intended to be dynamic and to learn from data they see, some proposed models investigate the rate of learning, e.g., through inspection of learning curves. Various data sets have been developed for use in evaluating language processing systems. These include:
========================================
[SOURCE: https://en.wikipedia.org/wiki/Chthonic] | [TOKENS: 1525]
Contents Chthonic deities In Greek mythology, deities referred to as chthonic (/ˈθɒnɪk/) or chthonian (/ˈθoʊniən/) were gods or spirits who inhabited the underworld or existed in or under the earth, and were typically associated with death or fertility. The terms chthonic and chthonian are derived from the Ancient Greek word χθών (khthṓn) meaning 'earth' or 'soil'. The Greek adjective χθόνιος (khthónios) means 'in, under, or beneath the earth', which can be differentiated from γῆ (gê), which refers to the living surface of land on the earth. In Greek, χθόνιος (khthónios) is a descriptive word for things relating to the underworld, which was in antiquity sometimes applied as an epithet to deities such as Hermes, Demeter, and Zeus. The chthonic deities have been compared to the more commonly referred-to Olympic gods and their associated rites and cults. Olympic gods are understood to reference that which exists above the earth, particularly in the sky. Gods that are related to agriculture are also considered to have chthonic associations as planting and growing take place in part under the earth. Relationship to Olympian deities Chthonic and ouranic (or Olympic) are not completely opposite descriptors. They do not cleanly differentiate types of gods and worship into distinct categories, but represent a cultic spectrum. These terms communicate associations with the underworld and/or agriculture. This makes some deities such as Hades, Persephone, and the Erinyes more likely to be considered chthonic due to their proximity to the underworld. While this is the case, virtually any god could be considered chthonic to emphasize different aspects of the god. For example, Demeter and Hermes are categorized within the twelve Olympian gods but are often considered chthonic. Zeus has also been referenced with the surname "chthonios", demonstrating the situational use of a chthonic description. 
Epithets In Ancient Greece, the names of deities were sometimes followed by an epithet, similar in concept to a surname. In this context, the purpose of an epithet was to describe a characteristic or association of a deity. The epithets 'chthonios' and 'chthonia' would follow the name of a god or goddess to reference their relationship either to the underworld or agriculture. For example, Hermes Chthonios references Hermes' role as the underworld escort. In contrast, Charon does not necessitate a chthonic epithet as his relation to the underworld is his main attribution. Additional examples of deities with recorded epithets include Demeter Chthonia, Ge Chthonia, Persephone Chthonia, Zeus Chthonios, and Hecate Chthonia. Common chthonic deities As discussed, many deities can be considered chthonic based upon what attributes are being referenced. Though this is the case, a few gods are most commonly considered chthonic due to their considerable role in the underworld and/or agriculture. These include Hades, as he is the ruler of the underworld. Persephone is the Queen of the Underworld alongside Hades. She spends half the year in the underworld and the other half above the earth. The period when Persephone is in the underworld corresponds with winter, while she personifies spring when she returns above the earth. It is for these reasons that she is mainly associated with the underworld as well as agriculture. Demeter is related to the underworld as she attempts to rescue Persephone from Hades in her grief. She is associated with agriculture and fertility. The Furies, or the Erinyes, reside in the underworld and are known for vengeance. Chthonic cult Offerings were a significant aspect of Ancient Greek religion. They were used to communicate with the gods and commonly took the forms of sacrifice and libation. Offerings were central to the worship of both chthonic and ouranic gods, though the specifics of these rituals differed.
These differences provide insight into the ways in which Greeks perceived chthonic and ouranic deities as well as the ways they related to them. Ouranic sacrifices took place during the daytime and included wine as a libation. They were performed on high altars which resided outside of temples. The animal sacrifice was roasted with the smoke traveling upward toward the sky, in the direction of the Olympic gods. Once cooked, the worshippers would feast on the sacrifice with the idea that they were sharing this meal with the gods. The worshippers would eat the consumable portions of the animal and burn the rest for the god. While performing the sacrifice, worshippers would raise their palms open and upward, again gesturing toward the sky where the ouranic gods resided. Chthonic sacrifice was commonly defined by offering a black or dark-hided animal to the deity. Worshippers did not consume the sacrifice themselves, but instead burned the entirety of the animal for the god. This type of sacrifice is called a holocaust, defined by the completely burned and destroyed nature of the offering. The sacrifice was performed on a low altar or in a pit in the ground, offered in the direction of the earth where chthonic deities would reside. The animal sacrifice was sometimes buried as well. The temples in which these sacrifices were performed were typically built outside city walls, with caves and grottos being popular locations, believed to be openings for chthonic deities. Worshippers lowered their palms and faced them downwards toward the earth and underworld, in the direction of the chthonic gods. The goal of chthonic worship was to interact with gods beneath the earth, so offerings were directed toward the ground to reach these deities. For this reason, incense was not used in chthonic worship, as the smoke would rise upwards rather than downwards. Wine was not utilized in this form of worship, but instead honey was a common libation used.
Sacrificial practices would not always follow these exact patterns, but these are differences which can allude to whether the worshipper is conducting an ouranic or chthonic sacrifice. Though the specifics of chthonic and ouranic sacrifice differ, they both have similar goals. In both scenarios, worshippers perform sacrifices to communicate and forge a relationship with the gods. They may perform a sacrifice to thank, honor, or request a favor from a god. Scholarly controversy There is scholarly debate regarding whether the distinction of chthonic is historically accurate and/or useful. Some scholars, including van Straten, argue that the term is not archaeologically verifiable. Some of these scholars believe that the modern use of chthonic is much more binary and concrete than it was in Ancient Greece. Schlesier notes that discussions of chthonic practices often create a false sense of "normal" worship and "deviant" worship, again citing the stark binary which modern scholars may fall into. In response, Scullion articulates the benefits of the term chthonic as long as one also understands the fact that chthonic and Olympian are not mutually exclusive categories. The term serves to highlight differing aspects of religious practice. Scholars emphasize the importance of reserving the label of chthonic for situations that were explicitly labeled as such in Ancient Greece. Notes References External links
========================================
[SOURCE: https://en.wikipedia.org/wiki/Talk:Minecraft] | [TOKENS: 735]
Contents Talk:Minecraft This article is of interest to the Lead Improvement Team, a WikiProject dedicated to improving the lead sections of articles on Wikipedia. Minecraft has been added to our To-do list because its lead has been assessed as needing improvement. A member of our team is either currently working on remedying this, or will be shortly. If you think you can improve the lead yourself, please do so and then let us know so we can strike it off our list. If you are interested in helping with this project, please visit our project page and add your name to the list of participants. Minecraft players go full propaganda mode as petition demanding the end of the 'mob vote' draws 300,000 signatures in just a few days The Minecraft mob vote boycott petition is missing the point The Minecraft Community Is "Boycotting" This Year's Mob Vote Minecraft Players Trying to Stop Mob Vote With Propaganda Posters and More Than 220,000 Signatures Over 150,000 players sign new Minecraft mob vote petition “United we bargain, divided we beg”: The 2023 Minecraft mob vote boycott explained After a proposed boycott, Minecraft mob vote faces more controversy in wake of rigging accusations Barber, Regina; Kwong, Emily; Summers, Juana; Carlson, Rachel; Ryan, Erika (2025-05-02). "What playing Minecraft tells researchers about social learning". NPR. Retrieved 2025-05-02. Wu, Charley; Deffner, Dominik; Kahl, Benjamin; Meder, Björn; Ho, Mark; Kurvers, Ralf (2025-04-25). "Adaptive mechanisms of social and asocial learning in immersive collective foraging". Nature Communications. 16. doi:10.1038/s41467-025-58365-6. Retrieved 2025-05-02. Adding game drop updates to the update list A user @DasKlose added the "game drops" to the update list. These have been reverted by @NegativeMP1. I'm starting a section on the talk page, as I believe the original edit should stand and the game drop updates should be included - otherwise it appears as though Minecraft's development stopped in 2024. 
There's a meaningful difference between these drop updates and for example 1.16.2, because Mojang are naming these updates, and are treating them as the successor to the previous system. Including four updates a year isn't too unusual either - in 2012 1.1, 1.2, 1.3 and 1.4 were all released. Eastwood Park and strabane (talk) 14:02, 4 January 2026 (UTC) "Noob vs pro vs hacker" listed at Redirects for discussion The redirect Noob vs pro vs hacker has been listed at redirects for discussion to determine whether its use and function meets the redirect guidelines. Readers of this page are welcome to comment on this redirect at Wikipedia:Redirects for discussion/Log/2026 February 7 § Noob vs pro vs hacker until a consensus is reached. Thepharoah17 (talk) 23:38, 7 February 2026 (UTC) A Commons file used on this page or its Wikidata item has been nominated for deletion The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion: Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 13:37, 18 February 2026 (UTC)
========================================
[SOURCE: https://en.wikipedia.org/wiki/World#cite_note-Lewis-4] | [TOKENS: 5641]
Contents World The world is the totality of entities, the whole of reality, or everything that exists. The nature of the world has been conceptualized differently in different fields. Some conceptions see the world as unique, while others talk of a "plurality of worlds". Some treat the world as one simple object, while others analyze the world as a complex made up of parts. In scientific cosmology, the world or universe is commonly defined as "the totality of all space and time; all that is, has been, and will be". Theories of modality talk of possible worlds as complete and consistent ways how things could have been. Phenomenology, starting from the horizon of co-given objects present in the periphery of every experience, defines the world as the biggest horizon, or the "horizon of all horizons". In philosophy of mind, the world is contrasted with the mind as that which is represented by the mind. Theology conceptualizes the world in relation to God, for example, as God's creation, as identical to God, or as the two being interdependent. In religions, there is a tendency to downgrade the material or sensory world in favor of a spiritual world to be sought through religious practice. A comprehensive representation of the world and our place in it, as is found in religions, is known as a worldview. Cosmogony is the field that studies the origin or creation of the world, while eschatology refers to the science or doctrine of the last things or of the end of the world. In various contexts, the term "world" takes a more restricted meaning associated, for example, with the Earth and all life on it, with humanity as a whole, or with an international or intercontinental scope. In this sense, world history refers to the history of humanity as a whole, and world politics is the discipline of political science studying issues that transcend nations and continents. 
Other examples include terms such as "world religion", "world language", "world government", "world war", "world population", "world economy", or "world championship". Etymology The English word world comes from the Old English weorold. The Old English is a reflex of the Common Germanic *weraldiz, a compound of weraz 'man' and aldiz 'age', thus literally meaning roughly 'age of man'; this word led to Old Frisian warld, Old Saxon werold, Old Dutch werolt, Old High German weralt, and Old Norse verǫld. The corresponding word in Latin is mundus, literally 'clean, elegant', itself a loan translation of Greek cosmos 'orderly arrangement'. While the Germanic word thus reflects a mythological notion of a "domain of Man" (compare Midgard), presumably as opposed to the divine sphere on the one hand and the chthonic sphere of the underworld on the other, the Greco-Latin term expresses a notion of creation as an act of establishing order out of chaos. Conceptions Different fields often work with quite different conceptions of the essential features associated with the term "world". Some conceptions see the world as unique: there can be no more than one world. Others talk of a "plurality of worlds". Some see worlds as complex things composed of many substances as their parts while others hold that worlds are simple in the sense that there is only one substance: the world as a whole. Some characterize worlds in terms of objective spacetime while others define them relative to the horizon present in each experience. These different characterizations are not always exclusive: it may be possible to combine some without leading to a contradiction. Most of them agree that worlds are unified totalities. Monism is a thesis about oneness: that only one thing exists in a certain sense. The denial of monism is pluralism, the thesis that, in a certain sense, more than one thing exists. 
There are many forms of monism and pluralism, but in relation to the world as a whole, two are of special interest: existence monism/pluralism and priority monism/pluralism. Existence monism states that the world is the only concrete object there is. This means that all the concrete "objects" we encounter in our daily lives, including apples, cars and ourselves, are not truly objects in a strict sense. Instead, they are just dependent aspects of the world-object. Such a world-object is simple in the sense that it does not have any genuine parts. For this reason, it has also been referred to as "blobject" since it lacks an internal structure like a blob. Priority monism allows that there are other concrete objects besides the world. But it holds that these objects do not have the most fundamental form of existence, that they somehow depend on the existence of the world. The corresponding forms of pluralism state that the world is complex in the sense that it is made up of concrete, independent objects. Scientific cosmology can be defined as the science of the universe as a whole. In it, the terms "universe" and "cosmos" are usually used as synonyms for the term "world". One common definition of the world/universe found in this field is as "[t]he totality of all space and time; all that is, has been, and will be". Some definitions emphasize that there are two other aspects to the universe besides spacetime: forms of energy or matter, like stars and particles, and laws of nature. World-conceptions in this field differ both concerning their notion of spacetime and of the contents of spacetime. The theory of relativity plays a central role in modern cosmology and its conception of space and time. A difference from its predecessors is that it conceives space and time not as distinct dimensions but as a single four-dimensional manifold called spacetime. 
This can be seen in special relativity in relation to the Minkowski metric, which includes both spatial and temporal components in its definition of distance. General relativity goes one step further by integrating the concept of mass into the concept of spacetime as its curvature. Quantum cosmology uses a classical notion of spacetime and conceives the whole world as one big wave function expressing the probability of finding particles in a given location. The world-concept plays a role in many modern theories of modality, sometimes in the form of possible worlds. A possible world is a complete and consistent way things could have been. The actual world is a possible world since the way things are is a way things could have been. There are many other ways things could have been besides how they actually are. For example, Hillary Clinton did not win the 2016 US election, but she could have won. So there is a possible world in which she did. There is a vast number of possible worlds, one corresponding to each such difference, no matter how small or big, as long as no outright contradictions are introduced this way. Possible worlds are often conceived as abstract objects, for example, in terms of non-obtaining states of affairs or as maximally consistent sets of propositions. On such a view, they can even be seen as belonging to the actual world. Another way to conceive possible worlds, made famous by David Lewis, is as concrete entities. On this conception, there is no important difference between the actual world and possible worlds: both are conceived as concrete, inclusive and spatiotemporally connected. The only difference is that the actual world is the world we live in, while other possible worlds are not inhabited by us but by our counterparts. Everything within a world is spatiotemporally connected to everything else but the different worlds do not share a common spacetime: They are spatiotemporally isolated from each other.
This is what makes them separate worlds. It has been suggested that, besides possible worlds, there are also impossible worlds. Possible worlds are ways things could have been, so impossible worlds are ways things could not have been. Such worlds involve a contradiction, like a world in which Hillary Clinton both won and lost the 2016 US election. Both possible and impossible worlds have in common the idea that they are totalities of their constituents. Within phenomenology, worlds are defined in terms of horizons of experiences. When we perceive an object, like a house, we do not just experience this object at the center of our attention but also various other objects surrounding it, given in the periphery. The term "horizon" refers to these co-given objects, which are usually experienced only in a vague, indeterminate manner. The perception of a house involves various horizons, corresponding to the neighborhood, the city, the country, the Earth, etc. In this context, the world is the biggest horizon or the "horizon of all horizons". It is common among phenomenologists to understand the world not just as a spatiotemporal collection of objects but as additionally incorporating various other relations between these objects. These relations include, for example, indication-relations that help us anticipate one object given the appearances of another object and means-end-relations or functional involvements relevant for practical concerns. In philosophy of mind, the term "world" is commonly used in contrast to the term "mind" as that which is represented by the mind. This is sometimes expressed by stating that there is a gap between mind and world and that this gap needs to be overcome for representation to be successful. One problem in philosophy of mind is to explain how the mind is able to bridge this gap and to enter into genuine mind-world-relations, for example, in the form of perception, knowledge or action. 
This is necessary for the world to be able to rationally constrain the activity of the mind. According to a realist position, the world is something distinct and independent from the mind. Idealists conceive of the world as partially or fully determined by the mind. Immanuel Kant's transcendental idealism, for example, posits that the spatiotemporal structure of the world is imposed by the mind on reality but lacks independent existence otherwise. A more radical idealist conception of the world can be found in Berkeley's subjective idealism, which holds that the world as a whole, including all everyday objects like tables, cats, trees and ourselves, "consists of nothing but minds and ideas". Different theological positions hold different conceptions of the world based on its relation to God. Classical theism states that God is wholly distinct from the world. But the world depends for its existence on God, both because God created the world and because He maintains or conserves it. This is sometimes understood in analogy to how humans create and conserve ideas in their imagination, with the difference being that the divine mind is vastly more powerful. On such a view, God has absolute, ultimate reality in contrast to the lower ontological status ascribed to the world. God's involvement in the world is often understood along the lines of a personal, benevolent God who looks after and guides His creation. Deists agree with theists that God created the world but deny any subsequent, personal involvement in it. Pantheists reject the separation between God and world. Instead, they claim that the two are identical. This means that there is nothing to the world that does not belong to God and that there is nothing to God beyond what is found in the world. Panentheism constitutes a middle ground between theism and pantheism. Against theism, it holds that God and the world are interrelated and depend on each other. 
Against pantheism, it holds that there is no outright identity between the two.

History of philosophy

In philosophy, the term world has several possible meanings. In some contexts, it refers to everything that makes up reality or the physical universe. In others, it can have a more specific ontological sense (see world disclosure). While clarifying the concept of world has arguably always been among the basic tasks of Western philosophy, this theme appears to have been raised explicitly only at the start of the twentieth century. Plato is well known for his theory of forms, which posits the existence of two different worlds: the sensible world and the intelligible world. The sensible world is the world we live in, filled with changing physical things we can see, touch and interact with. The intelligible world is the world of invisible, eternal, changeless forms like goodness, beauty, unity and sameness. Plato ascribes a lower ontological status to the sensible world, which only imitates the world of forms. This is because physical things exist only to the extent that they participate in the forms that characterize them, while the forms themselves have an independent manner of existence. In this sense, the sensible world is a mere replication of the perfect exemplars found in the world of forms: it never lives up to the original. In the allegory of the cave, Plato compares the physical things we are familiar with to mere shadows of the real things. But not knowing the difference, the prisoners in the cave mistake the shadows for the real things. Two definitions that were both put forward in the 1920s, however, suggest the range of available opinion. "The world is everything that is the case", wrote Ludwig Wittgenstein in his influential Tractatus Logico-Philosophicus, first published in 1921. Martin Heidegger, meanwhile, argued that "the surrounding world is different for each of us, and notwithstanding that we move about in a common world".
"World" is one of the key terms in Eugen Fink's philosophy. He thinks that there is a misguided tendency in western philosophy to understand the world as one enormously big thing containing all the small everyday things we are familiar with. He sees this view as a form of forgetfulness of the world and tries to oppose it by what he calls the "cosmological difference": the difference between the world and the inner-worldly things it contains. On his view, the world is the totality of the inner-worldly things that transcends them. It is itself groundless but it provides a ground for things. It therefore cannot be identified with a mere container. Instead, the world gives appearance to inner-worldly things, it provides them with a place, a beginning and an end. One difficulty in investigating the world is that we never encounter it since it is not just one more thing that appears to us. This is why Fink uses the notion of play or playing to elucidate the nature of the world. He sees play as a symbol of the world that is both part of it and that represents it. Play usually comes with a form of imaginary play-world involving various things relevant to the play. But just like the play is more than the imaginary realities appearing in it so the world is more than the actual things appearing in it. The concept of worlds plays a central role in Nelson Goodman's late philosophy. He argues that we need to posit different worlds in order to account for the fact that there are different incompatible truths found in reality. Two truths are incompatible if they ascribe incompatible properties to the same thing. This happens, for example, when we assert both that the earth moves and that the earth is at rest. These incompatible truths correspond to two different ways of describing the world: heliocentrism and geocentrism. Goodman terms such descriptions "world versions". He holds a correspondence theory of truth: a world version is true if it corresponds to a world. 
Incompatible true world versions correspond to different worlds. It is common for theories of modality to posit the existence of a plurality of possible worlds. But Goodman's theory is different since it posits a plurality not of possible but of actual worlds. Such a position is in danger of involving a contradiction: there cannot be a plurality of actual worlds if worlds are defined as maximally inclusive wholes. This danger may be avoided by interpreting Goodman's world-concept not as maximally inclusive wholes in the absolute sense but in relation to its corresponding world-version: a world contains all and only the entities that its world-version describes.

Religion

Mythological cosmologies depict the world as centered on an axis mundi and delimited by a boundary such as a world ocean, a world serpent or similar. Hinduism constitutes a family of religious-philosophical views. These views present perspectives on the nature and role of the world. Samkhya philosophy, for example, is a metaphysical dualism that understands reality as comprising two parts: purusha and prakriti. The term "purusha" stands for the individual conscious self that each of us possesses. Prakriti, on the other hand, is the one world inhabited by all these selves. Samkhya understands this world as a world of matter governed by the law of cause and effect. The term "matter" is understood in a wide sense in this tradition, covering both physical and mental aspects. This is reflected in the doctrine of tattvas, according to which prakriti is made up of 23 principles or elements of reality. These principles include physical elements, like water or earth, and mental aspects, like intelligence or sense-impressions. The relation between purusha and prakriti is conceived as one of observation: purusha is the conscious self aware of the world of prakriti and does not causally interact with it. A conception of the world is present in Advaita Vedanta, the monist school among the Vedanta schools.
Unlike the realist position defended in Samkhya philosophy, Advaita Vedanta sees the world of multiplicity as an illusion, referred to as Maya. This illusion includes the impression of existing as separate experiencing selves, called Jivas. Instead, Advaita Vedanta teaches that on the most fundamental level of reality, referred to as Brahman, there exists no plurality or difference. All there is is one all-encompassing self: Atman. Ignorance is seen as the source of this illusion, which results in bondage to the world of mere appearances. According to Advaita Vedanta, liberation is possible by overcoming this illusion through acquiring the knowledge of Brahman. Contemptus mundi is the name given to the belief that the world, in all its vanity, is nothing more than a futile attempt to hide from God by stifling our desire for the good and the holy. This view has been characterised as a "pastoral of fear" by historian Jean Delumeau. "The world, the flesh, and the devil" is a traditional division of the sources of temptation. Orbis Catholicus is a Latin phrase meaning "Catholic world", per the expression Urbi et Orbi, and refers to that area of Christendom under papal supremacy. In Islam, the term "dunya" is used for the world. Its meaning is derived from the root word "dana", a term for "near". It is associated with the temporal, sensory world and earthly concerns, i.e. with this world in contrast to the spiritual world. Religious teachings warn of a tendency to seek happiness in this world and advise a more ascetic lifestyle concerned with the afterlife. Other strands in Islam recommend a balanced approach. In Mandaean cosmology, the world or earthly realm is known as Tibil. It is separated from the World of Light (alma d-nhūra) above and the World of Darkness (alma d-hšuka) below by aether (ayar).

Related terms and problems

A worldview is a comprehensive representation of the world and our place in it.
As a representation, it is a subjective perspective of the world and thereby different from the world it represents. All higher animals need to represent their environment in some way in order to navigate it. But it has been argued that only humans possess a representation encompassing enough to merit the term "worldview". Philosophers of worldviews commonly hold that the understanding of any object depends on a worldview constituting the background on which this understanding can take place. This may affect not just our intellectual understanding of the object in question but also the experience of it in general. It is therefore impossible to assess one's worldview from a neutral perspective since this assessment already presupposes the worldview as its background. Some hold that each worldview is based on a single hypothesis that promises to solve all the problems of existence we may encounter. On this interpretation, the term is closely associated with the worldviews given by different religions. Worldviews offer orientation not just in theoretical matters but also in practical matters. For this reason, they usually include answers to the question of the meaning of life and other evaluative components about what matters and how we should act. A worldview can be unique to one individual but worldviews are usually shared by many people within a certain culture or religion. The idea that there exist many different worlds is found in various fields. For example, theories of modality talk about a plurality of possible worlds and the many-worlds interpretation of quantum mechanics carries this reference even in its name. Talk of different worlds is also common in everyday language, for example, with reference to the world of music, the world of business, the world of football, the world of experience or the Asian world. But at the same time, worlds are usually defined as all-inclusive totalities.
This seems to contradict the very idea of a plurality of worlds since if a world is total and all-inclusive then it cannot have anything outside itself. Understood this way, a world can neither have other worlds besides itself nor be part of something bigger. One way to resolve this paradox while holding onto the notion of a plurality of worlds is to restrict the sense in which worlds are totalities. On this view, worlds are not totalities in an absolute sense. This might even be understood in the sense that, strictly speaking, there are no worlds at all. Another approach understands worlds in a schematic sense: as context-dependent expressions that stand for the current domain of discourse. So in the expression "Around the World in Eighty Days", the term "world" refers to the earth while in the colonial expression "the New World" it refers to the landmass of North and South America. Cosmogony is the field that studies the origin or creation of the world. This includes both scientific cosmogony and creation myths found in various religions. The dominant theory in scientific cosmogony is the Big Bang theory, according to which space, time, and matter all have their origin in one initial singularity occurring about 13.8 billion years ago. This singularity was followed by an expansion that allowed the universe to sufficiently cool down for the formation of subatomic particles and later atoms. These initial elements formed giant clouds, which would then coalesce into stars and galaxies. Non-scientific creation myths are found in many cultures and are often enacted in rituals expressing their symbolic meaning. They can be categorized by their contents. Types often found include creation from nothing, from chaos or from a cosmic egg. Eschatology refers to the science or doctrine of the last things or of the end of the world. It is traditionally associated with religion, specifically with the Abrahamic religions.
In this form, it may include teachings both of the end of each individual human life and of the end of the world as a whole. But it has been applied to other fields as well, for example, in the form of physical eschatology, which includes scientifically based speculations about the far future of the universe. According to some models, there will be a Big Crunch in which the whole universe collapses back into a singularity, possibly resulting in a second Big Bang afterward. But current astronomical evidence seems to suggest that our universe will continue to expand indefinitely. World history studies the world from a historical perspective. Unlike other approaches to history, it employs a global viewpoint. It deals less with individual nations and civilizations, which it usually approaches at a high level of abstraction. Instead, it concentrates on wider regions and zones of interaction, often interested in how people, goods and ideas move from one region to another. It includes comparisons of different societies and civilizations as well as considering wide-ranging developments with a long-term global impact like the process of industrialization. Contemporary world history is dominated by three main research paradigms determining the periodization into different epochs. One is based on productive relations between humans and nature. The two most important changes in history in this respect were the introduction of agriculture and husbandry concerning the production of food, which started around 10,000 to 8,000 BCE and is sometimes termed the Neolithic Revolution, and the Industrial Revolution, which started around 1760 CE and involved the transition from manual to industrial manufacturing. Another paradigm, focusing on culture and religion instead, is based on Karl Jaspers' theories about the Axial Age, a time in which various new forms of religious and philosophical thoughts appeared in several separate parts of the world around the time between 800 and 200 BCE. 
A third periodization is based on the relations between civilizations and societies. According to this paradigm, history can be divided into three periods in relation to the dominant region in the world: Middle Eastern dominance before 500 BCE, Eurasian cultural balance until 1500 CE and Western dominance since 1500 CE. Big History employs an even wider framework than world history by putting human history into the context of the history of the universe as a whole. It starts with the Big Bang and traces the formation of galaxies, the Solar System, the Earth, its geological eras, the evolution of life and humans until the present day. World politics, also referred to as global politics or international relations, is the discipline of political science studying issues of interest to the world that transcend nations and continents. It aims to explain complex patterns found in the social world that are often related to the pursuit of power, order and justice, usually in the context of globalization. It focuses not just on the relations between nation-states but also considers other transnational actors, like multinational corporations, terrorist groups, or non-governmental organizations. For example, it tries to explain events such as the September 11 attacks, the 2003 invasion of Iraq or the 2008 financial crisis. Various theories have been proposed in order to deal with the complexity involved in formulating such explanations. These theories are sometimes divided into realism, liberalism and constructivism. Realists see nation-states as the main actors in world politics. They constitute an anarchical international system without any overarching power to control their behavior. They are seen as sovereign agents that, determined by human nature, act according to their national self-interest. 
Military force may play an important role in the ensuing struggle for power between states, but diplomacy and cooperation are also key mechanisms for nations to achieve their goals. Liberals acknowledge the importance of states but they also emphasize the role of transnational actors, like the United Nations or the World Trade Organization. They see humans as perfectible and stress the role of democracy in this process. The emergent order in world politics, on this perspective, is more complex than a mere balance of power since a greater variety of agents and interests is involved in its production. Constructivism ascribes more importance to the agency of individual humans than realism and liberalism. It understands the social world as a construction of the people living in it. This leads to an emphasis on the possibility of change. According to the constructivists, if the international system is an anarchy of nation-states, as the realists hold, then this is only so because we made it that way; it is not prefigured by human nature and may therefore change.
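The conception of possible worlds as complete and consistent ways things could have been, discussed in the modality section above, can be given a toy formal sketch. This is an illustrative model only: the atomic propositions and helper names below are assumptions chosen for the example, not anything from the article.

```python
from itertools import product

# Toy model: a "possible world" is a complete assignment of truth values
# to a stock of atomic propositions -- complete because every atom is
# decided, consistent because no atom is both true and false.
ATOMS = ["clinton_won_2016", "it_rains"]

# Every combination of truth values is one possible world; together they
# form the "plurality of worlds" that theories of modality talk about.
worlds = [dict(zip(ATOMS, values))
          for values in product([True, False], repeat=len(ATOMS))]

def possibly(prop):
    """A proposition is possible if it holds in at least one world."""
    return any(prop(w) for w in worlds)

def necessarily(prop):
    """A proposition is necessary if it holds in every world."""
    return all(prop(w) for w in worlds)

# Clinton did not win in the actual world, but there is a world where she did:
assert possibly(lambda w: w["clinton_won_2016"])
# An outright contradiction holds in no world -- no possible world contains it:
assert not possibly(lambda w: w["clinton_won_2016"] and not w["clinton_won_2016"])
# A tautology holds in every world:
assert necessarily(lambda w: w["it_rains"] or not w["it_rains"])
```

In this sketch an "impossible world" would be one that assigns an atom both truth values at once; the construction above never generates such a world, which mirrors the point that contradictions mark the boundary of the possible.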
========================================
[SOURCE: https://he.wikipedia.org/wiki/אדר] | [TOKENS: 1937]
Adar

Adar (from the Akkadian addaru) is a month in the Babylonian calendar and in the Hebrew calendar, the twelfth month according to the biblical tradition and the sixth according to the rabbinic tradition. It falls at the end of winter in the Land of Israel. In a leap year there are two months of Adar: Adar I (officially named Adar Rishon), which is 30 days long, and Adar II (officially named Adar Sheni), which is 29 days long, like Adar in a non-leap year. Halakhically, all of the month's observances, such as the festival of Purim, fall in Adar II (except memorial anniversaries for the deceased, which Ashkenazim observe in Adar I). The month of Adar has 29 days. 1 Adar can fall on a Monday, Wednesday, Friday or Shabbat; 1 Adar I can fall on a Monday, Wednesday, Thursday or Shabbat.

Name of the month

Like the other Hebrew month names, the name originates in the Babylonian month names, which is why it appears in the Bible only in books from after the beginning of the Babylonian exile: Esther and Ezra. Scholarly biblical dictionaries note the closeness of the month name Adar to the Akkadian word addaru, meaning 'dark', and to the Ugaritic word U'dar, meaning 'might'. In Babylonian Aramaic the word means 'threshing floor' ('idar'). The word also appears in the Bible, with different vocalization, as a personal name: "And the sons of Bela were Addar, and Gera, and Abihud" (1 Chronicles 8:3), and today Adar is also used as an Israeli given name for both boys and girls.

Customs

"When Adar begins, joy is increased." In religious educational institutions it is customary to mark Rosh Chodesh Adar with singing and dancing. From Rosh Chodesh Adar the customs of rejoicing begin; among the best known is the custom of crowning a "Purim rabbi".

Main dates

Symbols

The zodiac sign of Adar is Pisces. Rashi, for example, writes that "in Adar the sign of Pisces begins to rise" (in his commentary on tractate Bava Metzia 106b). Israel's fortune also rises in this month, because Israel is likened to fish. Connections between the sign and the month are common. A midrash, for instance, describes Haman's reasoning when he cast lots to set the date of the decree to destroy the Jews: "The sign of Pisces, which serves in the month of Adar, came up, and he found no merit in it, and he rejoiced at once and said: Adar has no merit and its sign has no merit; moreover, Moses their master died in Adar. But he did not know that Moses died on the seventh of Adar and was also born on the seventh of Adar. And he said: just as fish swallow, so shall I swallow them. The Holy One, blessed be He, said to him: Wicked one, fish are sometimes swallowed and sometimes swallow, and now that man will be swallowed by the swallowers. Rabbi Hanan said: this is what is written, 'and it was turned to the contrary, that the Jews had rule over them that hated them'" (Esther Rabbah 7:11).
In scroll 4Q318 from the Judaean Desert scrolls, the zodiac sign that opens and closes the month of Adar is Aries.
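The leap-year structure described above (a single 29-day Adar in a common year; a 30-day Adar I plus a 29-day Adar II in a leap year) follows the fixed 19-year cycle of the Hebrew calendar. A minimal sketch, assuming the standard Metonic-cycle leap-year rule (years 3, 6, 8, 11, 14, 17 and 19 of each cycle), which the article itself does not state; the function names are illustrative:

```python
def is_hebrew_leap_year(year):
    """True if the Hebrew year is a leap year.

    Leap years are years 3, 6, 8, 11, 14, 17 and 19 of the 19-year
    Metonic cycle; the closed form below encodes exactly that pattern.
    """
    return (7 * year + 1) % 19 < 7

def adar_month_lengths(year):
    """Lengths of the Adar month(s) in the given Hebrew year.

    In a leap year Adar I has 30 days and Adar II has 29; otherwise
    the single Adar has 29 days, as described in the article.
    """
    if is_hebrew_leap_year(year):
        return {"Adar I": 30, "Adar II": 29}
    return {"Adar": 29}

# 5784 was a leap year (two Adars); 5785 is a common year.
assert adar_month_lengths(5784) == {"Adar I": 30, "Adar II": 29}
assert adar_month_lengths(5785) == {"Adar": 29}
```

The rule that Purim and other observances fall in Adar II simply means that in a leap year the dates of the common-year Adar are read against the second of the two months returned here.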
========================================
[SOURCE: https://en.wikipedia.org/wiki/Latin] | [TOKENS: 10625]
Latin

Latin (lingua Latina or Latinum[I]) is a classical language belonging to the Italic branch of the Indo-European languages. Latin was originally spoken by the Latins in Latium (now known as Lazio), the lower Tiber area around Rome, Italy. Through the expansion of the Roman Republic, it became the dominant language in the Italian Peninsula and subsequently throughout the Roman Empire. It has greatly influenced many languages, including English, having contributed many words to the English lexicon, particularly after the Christianisation of the Anglo-Saxons and the Norman Conquest. Latin roots appear frequently in the technical vocabulary used by fields such as theology, the sciences, medicine, and law. By the late Roman Republic, Old Latin had evolved into standardised Classical Latin. Vulgar Latin refers to the less prestigious colloquial registers, attested in inscriptions and some literary works such as those of the comic playwrights Plautus and Terence and the author Petronius. While often called a "dead language", Latin did not undergo language death. Between the 6th and 9th centuries, natural language change in the vernacular Latin of different regions evolved into distinct Romance languages. After the fall of the Western Roman Empire, Latin remained the common language of international communication, science, scholarship and academia in Europe into the early 19th century, by which time modern languages had supplanted it in common academic and political usage. Late Latin is the literary form of the language from the 3rd century AD onward. No longer spoken as a native language, Medieval Latin was used across Western and Catholic Europe during the Middle Ages as a working and literary language from the 9th century to the Renaissance, which then developed a classicising form, called Renaissance Latin. This was the basis for Neo-Latin, which evolved during the early modern period.
Latin was taught to be written and spoken at least until the late seventeenth century, when spoken skills began to erode; Contemporary Latin is generally studied to be read rather than spoken. Ecclesiastical Latin remains the official language of the Holy See and the Roman Rite of the Catholic Church. Latin grammar is highly fusional, with classes of inflections for case, number, person, gender, tense, mood, voice, and aspect. The Latin alphabet is directly derived from the Etruscan and Greek alphabets.

History

A number of phases of the language have been recognised, each distinguished by subtle differences in vocabulary, usage, spelling, and syntax. There are no hard and fast rules of classification; different scholars emphasise different features. As a result, the list has variants, as well as alternative names. In addition to the historical phases, Ecclesiastical Latin refers to the styles used by the writers of the Roman Catholic Church from late antiquity onward, as well as by Protestant scholars. The earliest known form of Latin is Old Latin, also called Archaic or Early Latin, which was spoken from the Roman Kingdom, traditionally founded in 753 BC, through the later part of the Roman Republic, up to 75 BC, i.e. before the age of Classical Latin. It is attested both in inscriptions and in some of the earliest extant Latin literary works, such as the comedies of Plautus and Terence. The Latin alphabet was devised from the Etruscan alphabet. The writing later changed from what was initially either a right-to-left or a boustrophedon script to what ultimately became a strictly left-to-right script. During the late republic and into the first years of the empire, from about 75 BC to AD 200, a new Classical Latin arose, a conscious creation of the orators, poets, historians and other literate men, who wrote the great works of classical literature, which were taught in grammar and rhetoric schools.
Today's instructional grammars trace their roots to such schools, which served as a sort of informal language academy dedicated to maintaining and perpetuating educated speech. Philological analysis of Archaic Latin works, such as those of Plautus, which contain fragments of everyday speech, gives evidence of an informal register of the language, Vulgar Latin (termed sermo vulgi 'the speech of the masses', by Cicero). Some linguists, particularly in the nineteenth century, believed this to be a separate language, existing more or less in parallel with the literary or educated Latin, but this is now widely dismissed. The term 'Vulgar Latin' remains difficult to define, referring both to informal speech at any time within the history of Latin, and the kind of informal Latin that had begun to move away from the written language significantly in the post-Imperial period, that led ultimately to the Romance languages. During the Classical period, informal language was rarely written, so philologists have been left with only individual words and phrases cited by classical authors, inscriptions such as Curse tablets and those found as graffiti. In the Late Latin period, language changes reflecting spoken (non-classical) norms tend to be found in greater quantities in texts. As it was free to develop on its own, there is no reason to suppose that the speech was uniform either diachronically or geographically. On the contrary, Romanised European populations developed their own dialects of the language, which eventually led to the differentiation of Romance languages. Late Latin is a kind of written Latin used in the 3rd to 6th centuries. This began to diverge from Classical forms at a faster pace. It is characterised by greater use of prepositions, and word order that is closer to modern Romance languages, for example, while grammatically retaining more or less the same formal rules as Classical Latin. 
Ultimately, Latin diverged into a distinct written form, while the commonly spoken form was perceived as a separate language, for instance early French or Italian dialects, that could be transcribed differently. It took some time for these to be viewed as wholly different from Latin, however. After the Western Roman Empire fell in 476 and Germanic kingdoms took its place, the Germanic people adopted Latin as a language more suitable for legal and other, more formal uses. Initially, Latin was also retained by the Eastern Roman Empire (Byzantine Empire) as the language of government and legislation, even though the vast majority of its population spoke Greek and other Eastern languages such as Syriac and Coptic. It even enjoyed a brief flowering as the language of the great codification of laws, the Corpus Iuris Civilis under Justinian I, himself a native Latin speaker. After Justinian I's death, the gradual territorial retreat of the Empire and its near collapse in the wake of the Muslim conquests led to the near extinction of Latin as a spoken and official language. The surviving rump Roman state replaced Latin with the Greek language. Latin was retained on coinage and in some court ceremonies until the 11th century, albeit often in a ritual and fossilised form. In a surviving letter from the late 9th century, the Carolingian emperor Louis II invoked the fact that the Byzantine imperial chancery struggled to write in proper Latin as an argument in his ideological dispute over who was the rightful Roman emperor. While the written form of Latin was increasingly standardised into a fixed form, the spoken forms began to diverge more greatly. Currently, the six most widely spoken Romance languages by number of native speakers are Spanish, Portuguese, French, Italian, Romanian and Catalan.
Despite dialectal variation, which is found in any widespread language, the languages currently existing in Spain, France, Portugal, and Italy have retained a remarkable unity in phonological forms and developments, bolstered by the stabilising influence of their common Christian (Roman Catholic) culture. It was not until the Muslim conquest of the Iberian Peninsula in 711, cutting off communications between the major Romance regions, that the languages began to diverge seriously. The spoken Latin that would later become Romanian diverged somewhat more from the other varieties, as it was largely separated from the unifying influences in the western part of the Empire. Spoken Latin began to diverge into distinct languages by the 9th century at the latest, when the earliest extant Romance writings begin to appear. They were, throughout the period, confined to everyday speech, as Medieval Latin was used for writing. For many Italians using Latin, though, there was no complete separation between Italian and Latin, even into the beginning of the Renaissance. Petrarch for example saw Latin as a literary version of the spoken language. Medieval Latin is the written Latin in use during that portion of the post-classical period when no corresponding Latin vernacular existed, that is from around 700 to 1500 AD. The spoken language had developed into the various Romance languages; however, in the educated and official world, Latin continued without its natural spoken base. Moreover, this Latin spread into lands that had never spoken Latin, such as the Germanic and Slavic nations. It became useful for international communication between the member states of the Holy Roman Empire and its allies. Without the institutions of the Roman Empire that had supported its uniformity, Medieval Latin was much more liberal in its linguistic cohesion: for example, in classical Latin sum and eram are used as auxiliary verbs in the perfect and pluperfect passive, which are compound tenses. 
Medieval Latin might use fui and fueram instead. Furthermore, the meanings of many words were changed and new words were introduced, often under influence from the vernacular. Identifiable individual styles of classically incorrect Latin prevail. Renaissance Latin, in use from around 1300 to 1500, and the classicised Latin that followed through to the present are often grouped together as Neo-Latin, or New Latin, which have in recent decades become a focus of renewed study, given their importance for the development of European culture, religion and science. The vast majority of written Latin belongs to this period, but its full extent is unknown. The Renaissance reinforced the position of Latin as a spoken and written language through the scholarship of the Renaissance humanists. Petrarch and others began to change their usage of Latin as they explored the texts of the Classical Latin world. Skills of textual criticism evolved to create much more accurate versions of extant texts through the fifteenth and sixteenth centuries, and some important texts were rediscovered. Comprehensive versions of authors' works were published by Isaac Casaubon, Joseph Scaliger and others. Nevertheless, despite the careful work of Petrarch, Politian and others, first the demand for manuscripts, and then the rush to bring works into print, led to the circulation of inaccurate copies for several centuries following. Neo-Latin literature was extensive and prolific, but it is less well known or understood today. Works covered poetry, prose stories and early novels, occasional pieces and collections of letters, to name a few. Famous and well-regarded writers included Petrarch, Erasmus, Salutati, Celtis, George Buchanan and Thomas More. Non-fiction works were long produced in many subjects, including the sciences, law, philosophy, historiography and theology. Famous examples include Isaac Newton's Principia.
Latin was also used as a convenient medium for translations of important works first written in a vernacular, such as those of Descartes. Latin education underwent a process of reform to classicise written and spoken Latin. Schooling remained largely Latin-medium until approximately 1700. Until the end of the 17th century, the majority of books and almost all diplomatic documents were written in Latin. Afterwards, most diplomatic documents were written in French (a Romance language) and later in native or other languages. Education methods gradually shifted towards written Latin, and eventually concentrated solely on reading skills. The decline of Latin education took several centuries and proceeded much more slowly than the decline in written Latin output. Despite having no native speakers, Latin is still used for a variety of purposes in the contemporary world. The largest organisation that retains Latin in official and quasi-official contexts is the Catholic Church. The Catholic Church required that Mass be carried out in Latin until the Second Vatican Council of 1962–1965, which permitted the use of the vernacular. Latin remains the language of the Roman Rite. The Tridentine Mass (also known as the Extraordinary Form or Traditional Latin Mass) is celebrated in Latin. Although the Mass of Paul VI (also known as the Ordinary Form or the Novus Ordo) is usually celebrated in the local vernacular language, it can be and often is said in Latin, in part or in whole, especially at multilingual gatherings. It is the official language of the Holy See, the primary language of its public journal, the Acta Apostolicae Sedis, and the working language of the Roman Rota. Vatican City is also home to the world's only automatic teller machine that gives instructions in Latin. In the pontifical universities, postgraduate courses of canon law are taught in Latin, and papers are written in the same language. There are a small number of Latin services held in the Anglican church.
These include an annual service in Oxford, delivered with a Latin sermon, a relic from the period when Latin was the normal spoken language of the university. In the Western world, many organisations, governments and schools use Latin for their mottos due to its association with formality, tradition, and the roots of Western culture. Canada's motto A mari usque ad mare ("from sea to sea") and most provincial mottos are also in Latin. The Canadian Victoria Cross is modelled after the British Victoria Cross, which has the inscription "For Valour". Because Canada is officially bilingual, the Canadian medal has replaced the English inscription with the Latin Pro Valore. Spain's motto Plus ultra, 'even further', or figuratively "Further!", is also Latin in origin. It is taken from the personal motto of Charles V, Holy Roman Emperor and King of Spain (as Charles I), and is a reversal of the original phrase Non terrae plus ultra ("No land further beyond", "No further!"). According to legend, this phrase was inscribed as a warning on the Pillars of Hercules, the rocks on both sides of the Strait of Gibraltar and the western end of the known Mediterranean world. Charles adopted the motto following the discovery of the New World by Columbus, and it also has metaphorical suggestions of taking risks and striving for excellence. In the United States the unofficial national motto until 1956 was E pluribus unum, meaning "Out of many, one". The motto continues to be featured on the Great Seal. It also appears on the flags and seals of both houses of Congress and the flags of the states of Michigan, North Dakota, New York, and Wisconsin. The motto's 13 letters symbolically represent the original Thirteen Colonies, which rebelled against the British Crown. The motto is featured on all presently minted coinage and has been featured on most coinage throughout the nation's history.
Several states of the United States have Latin mottos, as do many military organisations and some law-governing bodies in the Philippines. Some colleges and universities have also adopted Latin mottos; for example, Harvard University's motto is Veritas ("truth"). Veritas was the goddess of truth, a daughter of Saturn, and the mother of Virtue. Switzerland has adopted the country's Latin short name Helvetia on coins and stamps, since there is no room to use all of the nation's four official languages. For a similar reason, it adopted the international vehicle code CH and the Internet top-level domain ch, for Confoederatio Helvetica, the country's full Latin name. Some films and television series set in antiquity, such as Sebastiane, The Passion of the Christ and Barbarians (2020 TV series), have been made with dialogue in Latin. Occasionally, Latin dialogue is used because of its association with religion or philosophy, as in such film/television series as The Exorcist and Lost ("Jughead"). Subtitles are usually shown for the benefit of those who do not understand Latin. There are also songs written with Latin lyrics. The libretto for the opera-oratorio Oedipus rex by Igor Stravinsky is in Latin. The continued instruction of Latin is seen by some as a highly valuable component of a liberal arts education. Latin is taught at many high schools, especially in Europe and the Americas. It is most common in British public schools and grammar schools, the Italian liceo classico and liceo scientifico, the German Humanistisches Gymnasium and the Dutch gymnasium. Occasionally, some media outlets, targeting enthusiasts, broadcast in Latin. Notable examples include Radio Bremen in Germany, YLE radio in Finland (the Nuntii Latini broadcast from 1989 until it was shut down in June 2019), and Vatican Radio & Television, all of which broadcast news segments and other material in Latin.
A variety of organisations, as well as informal Latin circuli, 'circles', have been founded in more recent times to support the use of spoken Latin. Moreover, a number of university classics departments have begun incorporating communicative pedagogies in their Latin courses. These include the University of Kentucky, the University of Oxford and Princeton University. There are many websites and forums maintained in Latin by enthusiasts. The Latin Wikipedia has more than 140,000 articles.

Legacy

Italian, French, Portuguese, Spanish, Romanian, Catalan, Romansh, Sardinian and other Romance languages are direct descendants of Latin. There are also many Latin loanwords in English and Albanian, as well as a few in German, Dutch, Norwegian, Danish and Swedish. Latin is still spoken in Vatican City, a city-state situated in Rome that is the seat of the Catholic Church. The works of several hundred ancient authors who wrote in Latin have survived in whole or in part, in substantial works or in fragments to be analysed in philology. They are in part the subject matter of the field of classics. Their works were published in manuscript form before the invention of printing and are now published in carefully annotated printed editions, such as the Loeb Classical Library, published by Harvard University Press, or the Oxford Classical Texts, published by Oxford University Press. Latin translations of modern literature such as The Hobbit, Treasure Island, Robinson Crusoe, Paddington Bear, Winnie the Pooh, The Adventures of Tintin, Asterix, Harry Potter, Le Petit Prince, Max and Moritz, How the Grinch Stole Christmas!, The Cat in the Hat, and a book of fairy tales, fabulae mirabiles, are intended to garner popular interest in the language. Additional resources include phrasebooks and resources for rendering everyday phrases and concepts into Latin, such as Meissner's Latin Phrasebook.
Some inscriptions have been published in an internationally agreed, monumental, multivolume series, the Corpus Inscriptionum Latinarum. Authors and publishers vary, but the format is about the same: volumes detailing inscriptions with a critical apparatus stating the provenance and relevant information. The reading and interpretation of these inscriptions is the subject matter of the field of epigraphy. About 270,000 inscriptions are known. The Latin influence in English has been significant at all stages of its insular development. In the Middle Ages, borrowing from Latin occurred from ecclesiastical usage established by Saint Augustine of Canterbury in the 6th century or indirectly after the Norman Conquest, through the Anglo-Norman language. From the 16th to the 18th centuries, English writers cobbled together huge numbers of new words from Latin and Greek words, dubbed "inkhorn terms", as if they had spilled from a pot of ink. Many of these words were used once by the author and then forgotten, but some useful ones survived, such as imbibe and extrapolate. Many of the most common polysyllabic English words are of Latin origin through the medium of Old French. Romance words make up 59%, 20% and 14% of English, German and Dutch vocabularies, respectively. Those figures can rise dramatically when only non-compound and non-derived words are included. The influence of Roman governance and Roman technology on the less-developed nations under Roman dominion led to the adoption of Latin phraseology in some specialised areas, such as science, technology, medicine, and law. For example, the Linnaean system of plant and animal classification was heavily influenced by Historia Naturalis, an encyclopaedia of people, places, plants, animals, and things published by Pliny the Elder.
Roman medicine, recorded in the works of such physicians as Galen, established that today's medical terminology would be primarily derived from Latin and Greek words, the Greek being filtered through the Latin. Roman engineering had the same effect on scientific terminology as a whole. Latin law principles have survived partly in a long list of Latin legal terms. The Logudorese dialect of the Sardinian language and Standard Italian are the two closest contemporary languages to Latin. Throughout European history, an education in the classics was considered crucial for those who wished to join literate circles. The prominence of Latin in classical education rested not only on tradition but also on its reputation for clarity, logical structure, and intellectual rigour. This also was true in the United States, where many of the nation's founders obtained a classically based education in grammar schools or from tutors. Admission to Harvard in the Colonial era required that the applicant "Can readily make and speak or write true Latin prose and has skill in making verse". Latin study and the classics were emphasised in American secondary schools and colleges well into the Antebellum era, and instruction in Latin was an essential aspect of that curriculum. In today's world, a large number of Latin students in the United States learn from Wheelock's Latin: The Classic Introductory Latin Course, Based on Ancient Authors. This book, first published in 1956, was written by Frederic M. Wheelock. Wheelock's Latin has become the standard text for many American introductory Latin courses. The numbers of people studying Latin vary significantly by country. In the United Kingdom, Latin is available in around 2.3% of state primary schools, representing a significant increase in availability.
In Germany, over 500,000 students study Latin each year, representing a decrease from over 800,000 in 2008. Latin is still required for some university courses, but this has become less frequent. The Living Latin movement attempts to teach Latin in the same way that living languages are taught, as a means of both spoken and written communication. It is available in Vatican City and at some institutions in the US, such as the University of Kentucky and Iowa State University. The British Cambridge University Press is a major supplier of Latin textbooks for all levels, such as the Cambridge Latin Course series. It has also published a subseries of children's texts in Latin by Bell & Forte, which recounts the adventures of a mouse called Minimus. In the United Kingdom, the Classical Association encourages the study of antiquity through various means, such as publications and grants. The University of Cambridge, the Open University, a number of independent schools, for example Eton, Harrow, Haberdashers' Aske's Boys' School, Merchant Taylors' School, and Rugby, and The Latin Programme/Via Facilis, a London-based charity, run Latin courses. In the United States and in Canada, the American Classical League supports every effort to further the study of classics. Its subsidiaries include the National Junior Classical League (with more than 50,000 members), which encourages high school students to pursue the study of Latin, and the National Senior Classical League, which encourages students to continue their study of the classics into college. The league also sponsors the National Latin Exam. Classicist Mary Beard wrote in The Times Literary Supplement in 2006 that the reason for learning Latin is because of what was written in it. Latin was or is the official language of several European states. It had official status in the Kingdom of Hungary from the 11th century until Hungarian became the exclusive official language in 1844.
The best-known Latin-language poet of Hungarian origin was Janus Pannonius. Similarly, in the Kingdom of Poland and the Polish–Lithuanian Commonwealth, Latin was officially recognised and widely used between the 10th and 18th centuries, commonly used in foreign relations and popular as a second language among some of the nobility. Latin was also the official language of the Croatian Parliament from the 13th to the 19th century (1847). The oldest preserved records of the parliamentary sessions (Congregatio Regni totius Sclavonie generalis)—held in Zagreb (Zagabria), Croatia—date from 19 April 1273. An extensive Croatian Latin literature exists. Latin was used on Croatian coins in even years until 1 January 2023, when Croatia adopted the euro as its official currency.

Phonology

The ancient pronunciation of Latin has been reconstructed; among the data used for reconstruction are explicit statements about pronunciation by ancient authors, misspellings, puns, ancient etymologies, the spelling of Latin loanwords in other languages, and the historical development of Romance languages. The consonant phonemes of Classical Latin are as follows: /z/ was not native to Classical Latin. It appeared in Greek loanwords starting c. the 1st century BC, when it was probably pronounced (at least by educated speakers) [z] initially and doubled [zz] between vowels, in accordance with its pronunciation in Koine Greek. In Classical Latin poetry, the letter ⟨z⟩ between vowels always counts as two consonants for metrical purposes. The consonant ⟨b⟩ usually sounds as [b]; however, when ⟨t⟩ or ⟨s⟩ follows ⟨b⟩, it is devoiced to [p], as in the clusters [pt] and [ps]. In Latin, ⟨q⟩ is always followed by the vowel ⟨u⟩. Together they make a [kʷ] sound. In Old and Classical Latin, the Latin alphabet had no distinction between uppercase and lowercase, and the letters ⟨J U W⟩ did not exist. In place of ⟨J U⟩, ⟨I V⟩ were used, respectively; ⟨I V⟩ represented both vowels and consonants.
Most of the letter forms were similar to modern uppercase, as can be seen in surviving inscriptions such as those from the Colosseum. The spelling systems used in Latin dictionaries and modern editions of Latin texts, however, normally use ⟨j u⟩ in place of Classical-era ⟨i v⟩. Some systems use ⟨j v⟩ for the consonant sounds /j w/ except in the combinations ⟨gu su qu⟩, for which ⟨v⟩ is never used. Some notes concerning the mapping of Latin phonemes to English graphemes are given below: In Classical Latin, as in modern Italian, double consonant letters were pronounced as long consonant sounds distinct from short versions of the same consonants. Thus the nn in Classical Latin annus 'year' (and in Italian anno) is pronounced as a doubled /nn/ as in English unnamed. (In English, distinctive consonant length or doubling occurs only at the boundary between two words or morphemes, as in that example.) In Classical Latin, ⟨U⟩ did not exist as a letter distinct from ⟨V⟩; the written form ⟨V⟩ was used to represent both a vowel and a consonant. ⟨Y⟩ was adopted to represent upsilon in loanwords from Greek, but it was pronounced like ⟨u⟩ and ⟨i⟩ by some speakers. It was also used in native Latin words by confusion with Greek words of similar meaning, such as sylva and ὕλη hū́lē. Classical Latin distinguished between long and short vowels. In antiquity, long vowels, except for ⟨i⟩, were frequently marked using the apex, which was sometimes similar to an acute accent ⟨Á É Ó V́ Ý⟩. Long /iː/ was written using a taller version of ⟨I⟩, called i longa 'long I': ⟨ꟾ⟩. In modern texts, long vowels are often indicated by a macron ⟨ā ē ī ō ū⟩, and short vowels are usually unmarked except when it is necessary to distinguish between words, when they are marked with a breve ⟨ă ĕ ĭ ŏ ŭ⟩. Roman writers could also signify a long vowel by writing the vowel larger than the other letters in a word or by repeating it twice in a row.
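The macron notation used in modern texts maps directly onto Unicode: the mark is the combining character U+0304, which renderers compose with the base vowel. As a small illustrative sketch (the helper name `mark_long` is ours, not a standard API), a long vowel can be built and normalised to its precomposed form in Python:

```python
import unicodedata

MACRON = "\u0304"  # Unicode combining macron, the modern long-vowel mark

def mark_long(vowel: str) -> str:
    """Attach a macron to a vowel and return the precomposed (NFC) form."""
    return unicodedata.normalize("NFC", vowel + MACRON)

# Build amāre 'to love' with its long vowel marked explicitly.
infinitive = "am" + mark_long("a") + "re"
```

Normalising to NFC matters when comparing strings: 'ā' typed as one code point and 'a' followed by a combining macron look identical but are unequal as raw Python strings.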
The acute accent, when it is used in modern Latin texts, indicates stress, as in Spanish, rather than length. Although they are called long vowels, their quality in Classical Latin also differed from that of the short vowels. The difference is described in the table below: This difference in quality is posited by W. Sidney Allen in his book Vox Latina. However, Andrea Calabrese has disputed this assertion, based in part upon the observation that in Sardinian and some Lucanian dialects, each long and short vowel pair merged, as opposed to the Italo-Western languages, in which short /i/ and /u/ merged with long /eː/ and /oː/ (cf. Latin siccus, Italian secco, and Sardinian siccu). A vowel letter followed by ⟨m⟩ at the end of a word, or a vowel letter followed by ⟨n⟩ before ⟨s⟩ or ⟨f⟩, represented a nasal vowel, as in monstrum [mõːstrũ]. Classical Latin had several diphthongs. The two most common were ⟨ae au⟩. The former is pronounced like the i in mine, and the latter like the ow in power. ⟨oe⟩ was fairly rare, and ⟨ui eu ei⟩ were very rare, at least in native Latin words. There has also been debate over whether ⟨ui⟩ is truly a diphthong in Classical Latin, owing to its rarity, its absence from the works of Roman grammarians, and the fact that the roots of Classical Latin words (e.g. hui ce to huic, quoi to cui) do not match the pronunciation such words would have if ⟨ui⟩ were a diphthong. The sequences sometimes did not represent diphthongs. ⟨ae⟩ and ⟨oe⟩ also represented a sequence of two vowels in different syllables in aēnus [aˈeː.nʊs] 'bronze' and coēpit [kɔˈeː.pɪt] 'began', and ⟨au ui eu ei ou⟩ represented sequences of two vowels or of a vowel and one of the semivowels /j w/, in cavē [ˈka.weː] 'beware!', cuius [ˈkʊj.jʊs] 'whose', monuī [ˈmɔn.ʊ.iː] 'I warned', solvī [ˈsɔɫ.wiː] 'I released', dēlēvī [deːˈleː.wiː] 'I destroyed', eius [ˈɛj.jʊs] 'his', and novus [ˈnɔ.wʊs] 'new'. Old Latin had more diphthongs, but most of them changed into long vowels in Classical Latin.
The Old Latin diphthong ⟨ai⟩ and the sequence ⟨āī⟩ became Classical ⟨ae⟩. Old Latin ⟨oi⟩ and ⟨ou⟩ changed to Classical ⟨ū⟩, except in a few words whose ⟨oi⟩ became Classical ⟨oe⟩. These two developments sometimes occurred in different words from the same root: for instance, Classical poena "punishment" and pūnīre "to punish". Early Old Latin ⟨ei⟩ usually monophthongised to a later Old Latin ⟨ē⟩, and then to Classical ⟨ī⟩. By the late Roman Empire, ⟨ae oe⟩ had merged with ⟨e ē⟩. During the Classical period this sound change was present in some rural dialects, but it was deliberately avoided by well-educated speakers. Syllables in Latin are determined by the presence of vowels and diphthongs: the number of syllables is the same as the number of vowel sounds. Further, if a single consonant separates two vowels, it will go into the syllable of the second vowel. When there are two consonants between vowels, the last consonant will go with the second vowel. An exception occurs when a stop and a liquid come together: in this situation, they are treated as a single consonant, and as such they will go into the syllable of the second vowel. Syllables in Latin are considered either long or short (less often called "heavy" and "light" respectively). Within a word, a syllable may be either long by nature or long by position. A syllable is long by nature if it has a diphthong or a long vowel. On the other hand, a syllable is long by position if the vowel is followed by more than one consonant. There are two rules that define which syllable is stressed in Classical Latin.

Orthography

Latin was written in the Latin alphabet (A, B, C, D, E, F, G, H, I, K, L, M, N, O, P, Q, R, S, T, V, X), derived from the Etruscan alphabet, which was in turn drawn from the Greek alphabet and ultimately the Phoenician alphabet.
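Returning to syllabification for a moment: the count-the-vowel-sounds rule described earlier (one syllable per vowel sound, with a two-letter diphthong counting as a single sound) can be sketched as a deliberately simplified function. This is our own illustration, not a standard algorithm; it ignores hiatus, semivowels, and Greek loanwords:

```python
# Simplified Latin syllable counter. The vowel and diphthong inventories
# follow the classical spellings discussed in the text.
VOWELS = set("aeiouy")
DIPHTHONGS = {"ae", "au", "oe", "ei", "eu", "ui"}

def count_syllables(word: str) -> int:
    """One syllable per vowel sound; a two-letter diphthong counts once."""
    word = word.lower()
    count = 0
    i = 0
    while i < len(word):
        if word[i] in VOWELS:
            count += 1
            if word[i:i + 2] in DIPHTHONGS:
                i += 2  # skip both letters of the diphthong
                continue
        i += 1
    return count
```

Here count_syllables("puella") gives 3 (pu-el-la) and count_syllables("aurum") gives 2 (au-rum); words such as aēnus, where the same letter pair is two vowels in hiatus, would need length marks or a lexicon to be counted correctly.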
This alphabet has continued to be used over the centuries as the script for the Romance, Celtic, Germanic, Baltic, Finnic and many Slavic languages (Polish, Slovak, Slovene, Croatian, Bosnian, Serbian and Czech); and it has been adopted by many languages around the world, including Vietnamese, the Austronesian languages, many Turkic languages, and most languages in sub-Saharan Africa, the Americas and Oceania, making it by far the world's single most widely used writing system. The number of letters in the Latin alphabet has varied. When it was first derived from the Etruscan alphabet, it contained only 21 letters. Later, G was added to represent /ɡ/, which had previously been spelled C, and Z ceased to be included in the alphabet, as the language then had no voiced alveolar fricative. The letters Y and Z were later added to represent the Greek letters upsilon and zeta respectively, in Greek loanwords. W was created in the 11th century from VV in some areas and UU in others. It represented /w/ in Germanic languages, not Latin, which still uses V for the purpose. J was distinguished from the original I only during the late Middle Ages, as was the letter U from V. Although some Latin dictionaries use J, it is rarely used for Latin text, as it was not used in classical times, but many other languages use it. Classical Latin did not contain sentence punctuation, letter case, or interword spacing, but apices were sometimes used to distinguish length in vowels and the interpunct was used at times to separate words. The first line of Catullus 3 ("Mourn, O Venuses and Cupids") was originally written as LVGETE·O·VENERES·CVPIDINESQVE; it would be rendered in a modern edition as Lugete, o Veneres Cupidinesque. The Roman cursive script is commonly found on the many wax tablets excavated at sites such as forts, an especially extensive set having been discovered at Vindolanda on Hadrian's Wall in Britain.
Most notable is the fact that while most of the Vindolanda tablets show spaces between words, spaces were avoided in monumental inscriptions from that era. Occasionally, Latin has been written in other scripts.

Grammar

Latin is a synthetic, fusional language in the terminology of linguistic typology. Words involve an objective semantic element and markers (usually suffixes) specifying the grammatical use of the word, expressing gender, number, and case in adjectives, nouns, and pronouns (declension), and person, number, tense, voice, mood, and aspect in verbs (conjugation). Some words are uninflected and undergo neither process, such as adverbs, prepositions, and interjections. Latin inflection can result in considerable ambiguity: for example, amābit 'he/she/it will love' is formed from amā-, a future-tense morpheme -bi- and a third-person singular morpheme -t, the last of which does not express masculine, feminine, or neuter gender. A major task in understanding Latin phrases and clauses is to clarify such ambiguities by an analysis of context. Latin word order is relatively free because inflections disambiguate semantic connections, but different word orders can indicate different nuances of meaning. A regular Latin noun belongs to one of five main declensions, a group of nouns with similar inflected forms. The declensions are identified by the genitive singular form of the noun. There are seven Latin noun cases, which also apply to adjectives and pronouns and mark a noun's syntactic role in the sentence by means of inflections. Thus, word order in Latin is not as important as it is in English, which is less inflected. The general structure and word order of a Latin sentence can therefore vary. The cases are as follows: Latin lacks both definite and indefinite articles, so puer currit can mean either 'the boy is running' or 'a boy is running'. There are two types of regular Latin adjectives: first- and second-declension and third-declension.
They are so-called because their forms are similar or identical to first- and second-declension and third-declension nouns, respectively. Latin adjectives also have comparative and superlative forms. There are also a number of Latin participles. Latin numbers are sometimes declined as adjectives; see § Numbers. First- and second-declension adjectives are declined like first-declension nouns for the feminine forms and like second-declension nouns for the masculine and neuter forms. For example, for mortuus, mortua, mortuum 'dead', mortua is declined like a regular first-declension noun (such as puella 'girl'), mortuus is declined like a regular second-declension masculine noun (such as dominus 'lord, master'), and mortuum is declined like a regular second-declension neuter noun (such as auxilium 'help'). Third-declension adjectives are mostly declined like normal third-declension nouns, with a few exceptions. In the plural nominative neuter, for example, the ending is -ia (omnia 'all, everything'), and for third-declension nouns, the plural nominative neuter ending is -a or -ia (capita 'heads', animalia 'animals'). They can have one, two or three forms for the masculine, feminine, and neuter nominative singular. Latin participles, like English participles, are formed from a verb. There are a few main types of participles: present active participles, perfect passive participles, future active participles, and future passive participles. Latin sometimes uses prepositions, depending on the type of prepositional phrase being used. Most prepositions are followed by a noun in either the accusative or ablative case: apud puerum 'with the boy', with puerum being the accusative form of puer 'boy', and sine puero 'without the boy', puero being the ablative form. A few adpositions, however, govern a noun in the genitive, such as gratia and tenus. A regular verb in Latin belongs to one of four main conjugations. A conjugation is "a class of verbs with similar inflected forms".
The conjugations are identified by the last letter of the verb's present stem. The present stem can be found by omitting the -re (-rī in deponent verbs) ending from the present infinitive form. The infinitive of the first conjugation ends in -ā-re or -ā-rī (active and passive respectively): amāre 'to love', hortārī 'to exhort'; of the second conjugation by -ē-re or -ē-rī: monēre 'to warn', verērī 'to fear'; of the third conjugation by -ere, -ī: dūcere 'to lead', ūtī 'to use'; of the fourth by -ī-re, -ī-rī: audīre 'to hear', experīrī 'to attempt'. The stem categories descend from Indo-European and can therefore be compared to similar conjugations in other Indo-European languages. Irregular verbs are verbs that do not follow the regular conjugations in the formation of the inflected form. Irregular verbs in Latin are esse 'to be'; velle 'to want'; ferre 'to carry'; edere 'to eat'; dare 'to give'; īre 'to go'; posse 'to be able'; fieri 'to happen'; and their compounds. There are six simple tenses in Latin (present, imperfect, future, perfect, pluperfect and future perfect), three moods (indicative, imperative and subjunctive, in addition to the infinitive, participle, gerund, gerundive and supine), three persons (first, second and third), two numbers (singular and plural), two voices (active and passive) and two aspects (perfective and imperfective). Verbs are described by four principal parts: The six simple tenses of Latin are divided into two systems: the present system, which is made up of the present, imperfect and future forms, and the perfect system, which is made up of the perfect, pluperfect and future perfect forms. Each simple tense has a set of endings corresponding to the person, number, and voice of the subject. Subject (nominative) pronouns are generally omitted for the first (I, we) and second (you) persons except for emphasis. The table below displays the common inflected endings for the indicative mood in the active voice in all six tenses.
For the future tense, the first listed endings are for the first and second conjugations, and the second listed endings are for the third and fourth conjugations: Some Latin verbs are deponent: their forms are in the passive voice but retain an active meaning, as in hortor, hortārī, hortātus sum 'to urge'.

Vocabulary

As Latin is an Italic language, most of its vocabulary is likewise Italic, ultimately from the ancestral Proto-Indo-European language. However, because of close cultural interaction, the Romans not only adapted the Etruscan alphabet to form the Latin alphabet but also borrowed some Etruscan words into their language, including persona 'mask' and histrio 'actor'. Latin also included vocabulary borrowed from Oscan, another Italic language. After the Fall of Tarentum in 272 BC, the Romans began Hellenising, or adopting features of Greek culture, including the borrowing of Greek words, such as camera 'vaulted roof', sumbolum 'symbol', and balineum 'bath'. This Hellenisation led to the addition of Y and Z to the alphabet to represent Greek sounds. Subsequently, the Romans transplanted Greek art, medicine, science and philosophy to Italy, paying almost any price to entice Greek skilled and educated persons to Rome and sending their youth to be educated in Greece. Thus, many Latin scientific and philosophical words were Greek loanwords or had their meanings expanded by association with Greek words, as ars 'craft' and tekhne 'art'. Because of the Roman Empire's expansion and subsequent trade with outlying European tribes, the Romans borrowed some northern and central European words, such as beber 'beaver', of Germanic origin, and bracae 'breeches', of Celtic origin. The specific dialects of Latin across Latin-speaking regions of the former Roman Empire after its fall were influenced by languages specific to the regions. The dialects of Latin evolved into different Romance languages.
During and after the adoption of Christianity into Roman society, Christian vocabulary became a part of the language, either from Greek or Hebrew borrowings or as Latin neologisms. Into the Middle Ages, Latin incorporated many more words from surrounding languages, including Old English and other Germanic languages. Over the ages, Latin-speaking populations produced new adjectives, nouns, and verbs by affixing or compounding meaningful segments. For example, the compound adjective omnipotens 'all-powerful' was produced from the adjectives omnis 'all' and potens 'powerful' by dropping the final s of omnis and concatenating. Often, the concatenation changed the part of speech, and nouns were produced from verb segments or verbs from nouns and adjectives.

Numbers

In ancient times, numbers in Latin were written only with letters. Today, the numbers can be written with Arabic numerals as well as with Roman numerals. The numbers 1, 2 and 3 and every whole hundred from 200 to 900 are declined as nouns and adjectives, with some differences. The numbers from 4 to 100 do not change their endings. As in modern descendants such as Spanish, the gender for naming a number in isolation is masculine, so that "1, 2, 3" is counted as ūnus, duo, trēs.

Example text

Commentarii de Bello Gallico, also called De Bello Gallico (The Gallic War), written by Gaius Julius Caesar, begins with the following passage: Gallia est omnis divisa in partes tres, quarum unam incolunt Belgae, aliam Aquitani, tertiam qui ipsorum lingua Celtae, nostra Galli appellantur. Hi omnes lingua, institutis, legibus inter se differunt. Gallos ab Aquitanis Garumna flumen, a Belgis Matrona et Sequana dividit. Horum omnium fortissimi sunt Belgae, propterea quod a cultu atque humanitate provinciae longissime absunt, minimeque ad eos mercatores saepe commeant atque ea quae ad effeminandos animos pertinent important, proximique sunt Germanis, qui trans Rhenum incolunt, quibuscum continenter bellum gerunt.
Qua de causa Helvetii quoque reliquos Gallos virtute praecedunt, quod fere cotidianis proeliis cum Germanis contendunt, cum aut suis finibus eos prohibent aut ipsi in eorum finibus bellum gerunt. Eorum una pars, quam Gallos obtinere dictum est, initium capit a flumine Rhodano, continetur Garumna flumine, Oceano, finibus Belgarum; attingit etiam ab Sequanis et Helvetiis flumen Rhenum; vergit ad septentriones. Belgae ab extremis Galliae finibus oriuntur; pertinent ad inferiorem partem fluminis Rheni; spectant in septentrionem et orientem solem. Aquitania a Garumna flumine ad Pyrenaeos montes et eam partem Oceani quae est ad Hispaniam pertinet; spectat inter occasum solis et septentriones. The same text may be marked for all long vowels (before any possible elisions at word boundary) with apices over vowel letters, including customarily before nf and ns where a long vowel is automatically produced: Gallia est omnis dívísa in partés trés, quárum únam incolunt Belgae, aliam Aquítání, tertiam quí ipsórum linguá Celtae, nostrá Gallí appellantur. Hí omnés linguá, ínstitútís, légibus inter sé differunt. Gallós ab Aquítánís Garumna flúmen, á Belgís Mátrona et Séquana dívidit. Hórum omnium fortissimí sunt Belgae, proptereá quod á cultú atque húmánitáte próvinciae longissimé absunt, miniméque ad eós mercátórés saepe commeant atque ea quae ad efféminandós animós pertinent important, proximíque sunt Germánís, quí tráns Rhénum incolunt, quibuscum continenter bellum gerunt. Quá dé causá Helvétií quoque reliquós Gallós virtúte praecédunt, quod feré cotídiánís proeliís cum Germánís contendunt, cum aut suís fínibus eós prohibent aut ipsí in eórum fínibus bellum gerunt. Eórum úna pars, quam Gallós obtinére dictum est, initium capit á flúmine Rhodanó, continétur Garumná flúmine, Óceanó, fínibus Belgárum; attingit etiam ab Séquanís et Helvétiís flúmen Rhénum; vergit ad septentriónés. 
Belgae ab extrémís Galliae fínibus oriuntur; pertinent ad ínferiórem partem flúminis Rhéní; spectant in septentriónem et orientem sólem. Aquítánia á Garumná flúmine ad Pýrénaeós montés et eam partem Óceaní quae est ad Hispániam pertinet; spectat inter occásum sólis et septentriónés. An example of Late Latin is the Latin Vulgate by Saint Jerome. Below is Psalm One (Psalmum Unum) from the Clementine Vulgate. 1 Beatus vir qui non abiit in consilio impiorum, et in via peccatorum non stetit, et in cathedra pestilentiae non sedit; 2 sed in lege Domini voluntas ejus, et in lege ejus meditabitur die ac nocte. 3 Et erit tamquam lignum quod plantatum est secus decursus aquarum, quod fructum suum dabit in tempore suo : et folium ejus non defluet; et omnia quaecumque faciet prosperabuntur. 4 Non sic impii, non sic; sed tamquam pulvis quem projicit ventus a facie terrae. 5 Ideo non resurgent impii in judicio, neque peccatores in concilio justorum,

Periods of Latin: until 75 BC, Old Latin; 75 BC – 200 AD, Classical Latin; 200–700, Late Latin; 700–1500, Medieval Latin; 1300–1500, Renaissance Latin; 1300–present, Neo-Latin; 1900–present, Contemporary Latin.
========================================
[SOURCE: https://www.bbc.com/news/articles/c4g8r23yv71o] | [TOKENS: 3516]
Why fake AI videos of UK urban decline are taking over social media
By Marianna Spring, Social media investigations correspondent, BBC

An AI-generated video shows a crowd of young - mostly black - men, wearing balaclavas and padded jackets, slipping down a water slide into a dirty swimming pool with litter bobbing on the surface. The caption describes the scene as a taxpayer-funded water park in Croydon. It is one of a wave of deepfakes showing often absurd scenes of urban decline, and regularly purporting to be in the same south London neighbourhood. Dozens of copycat accounts have begun producing similar content and collectively they have racked up millions of views across TikTok and Instagram Reels. These fake videos have become part of a much wider trend - where online influencers and content creators portray Western cities such as London, Manchester, San Francisco or New York as overrun with immigrants and crime. It has been dubbed "decline porn". These narratives - often exaggerated or fabricated, some obviously satirical - are fuelling anger and racist backlash among some viewers who take them at face value. The BBC tracked down the originator of the Croydon AI videos for the new podcast Top Comment, which investigates the stories behind our social media feeds. What we found was a new brand of online faker, who thrives off engagement and shrugs off responsibility for how the content can be used to push divisive political narratives. The shame around posting fakes seems to have gone completely out of the window. The creator, who uses the online handle RadialB, says he didn't expect to spawn copycats or be politically provocative. He says his content is intended to be funny - but that he also wants people to believe his fake scenes are real to grab their attention. (RadialB's fake videos portray grimy Croydon waterparks and an arcade machine filled with knives.) "If people saw it and they immediately knew it was fake, then they would just scroll.
The selling point of generative AI models is that they look real," RadialB tells me over the phone. He refuses to share his real name but reveals he is in his 20s and from the north-west of England. He has never been to Croydon. He tells me the creation of the AI water park, zoo and aquarium in Croydon was "just part of the progression of things getting more and more funny or absurd". Several of the videos "blew up", he says, because they were very graphic, showing people flying off slides. The young men in his videos are "roadmen", a slang term for urban youth, often associated with drug dealing, he says, and are "cultural archetypes" that he frequently features in his videos. One post portraying roadmen in Parliament got eight million views in a day, he says. When asked about the racism that his videos sometimes provoke in the comments, he says: "I don't deny it", but adds that "comments get filtered", meaning that social media platforms delete racist remarks. TikTok, Instagram and X all have policies prohibiting racist abuse. RadialB says when he generates the AI content he doesn't intend for the people portrayed to be a certain race or ethnicity, but just uses the prompt "roadmen wearing puffer jackets, track suits, and balaclavas" because that makes the "funniest" characters. While he disavows any political intent, his videos portray absurd "taxpayer-funded" facilities. He says "English politics is a bit of a parasitic cesspit" and suggests "we replace them all with roadmen". Several of the videos feature small labels saying they are "AI-generated" or contain "synthetic media", in line with TikTok, Instagram and X's policies on AI media, but some people who had left comments told us they had been genuinely convinced by the posts.
But he suggests some of the comments are ironic. Other users have objected to this wave of AI slop videos as an unfair racial stereotype of their neighbourhood. One black TikTok user from Croydon called C.Tino posted a response, saying the trend falsely portrayed the area as "ghetto". "These videos are making people think this is real life. It's becoming out of hand now," he said.

Distort reality

RadialB says he was able to start making this content because of the "huge jump" in the quality and availability of AI tools. It "hugely lowers the barrier for entry" for anyone who wants to make "fake stuff", he says. He says a lot of the accounts re-sharing his posts are likely doing it for views and clicks - and in an effort to monetise the content on other platforms like Facebook. Users as far away as Israel and Brazil said they shared the videos because they "got engagement" or to "join in on the trend". Several other accounts posting in Arabic, and that appear to be based in the Middle East, have also shared multiple videos about London being in decline - including the ones of Croydon. I have also found several TikTok profiles that purport to be British news accounts, which only share either these kinds of AI-generated videos about London or other negative content about cities in the UK and US. The deepfakes fit into an existing trend of videos presenting European and American cities as falling into urban decay because of crime and immigration. Sometimes they show real examples of phone-snatching, homelessness, graffiti or drug problems, but omit any wider context. Increasingly, though, they use AI to distort reality. South African YouTuber Kurt Caz has built an audience of more than four million subscribers by posting travel videos with titles such as "Attacked by thieves in Barcelona!"
and "Threatened in the UK's worst town!" But after posting a recent video, called "Avoid this place in London", he was accused of using AI to doctor the thumbnail to bolster his portrayal of the UK capital as one of "the most messed up cities" he has ever been to. (Arabic text was added to these shop signs and a balaclava placed on the friendly cyclist in this YouTube thumbnail - image: Kurt Caz.) It showed a man on a bike in a balaclava, in front of shop signs written in Arabic. But in the video itself, the signs on Croydon's North End are in English, the cyclist has no balaclava and Caz is giving him the thumbs up after a friendly chat. On X, Kurt Caz dismissed criticism of the thumbnail as "clickbait" and said "if you're going to do a hit piece on me do it properly". These ideas of the UK and Europe in decline have also been taken up by high-profile, influential figures, including X, Tesla and Space X owner Elon Musk, who spoke at far-right activist Tommy Robinson's Unite the Kingdom rally last year. "What I see happening is a destruction of Britain. Initially a slow erosion, but a rapidly increasing erosion of Britain with massive uncontrolled migration," said Musk. It is a topic he regularly posts about on his X profile, with more than 230 million followers. (Elon Musk has promoted ideas of British decline - image: EPA.) While there are legitimate debates to be had about immigration and crime, a lot of this content goes beyond the evidence available in reality. In January, pollster YouGov released new data suggesting a majority of Britons now believe London is unsafe, but only a third of people surveyed in the capital agreed - and 81% of them said their own local area was safe. But RadialB says his intention was not to become a "decline porn" influencer - and instead just wants to make people laugh with a sort of "artform" that games the recommendation systems.
He appears to wash his hands of responsibility for how his content may be used or copied. His account on TikTok was banned for sharing content that was detected as graphic or inappropriate, he says. But he has now set up a new account sharing the same kinds of videos, showing "roadmen" at grubby "infinity pools" and "taxpayer-funded buffets".
========================================
[SOURCE: https://www.fast.ai/posts/2022-08-25-jupyter-git.html] | [TOKENS: 2254]
The Jupyter+git problem is now solved
Jeremy Howard, August 25, 2022

Jupyter notebooks don’t work with git by default. With nbdev2, the Jupyter+git problem has been totally solved. It provides a set of hooks which provide clean git diffs, solve most git conflicts automatically, and ensure that any remaining conflicts can be resolved entirely within the standard Jupyter notebook environment. To get started, follow the directions on Git-friendly Jupyter.

The Jupyter+git problem

Jupyter notebooks are a powerful tool for scientists, engineers, technical writers, students, teachers, and more. They provide an ideal notebook environment for interactively exploring data and code, writing programs, and documenting the results as dashboards, books, or blogs. But when collaborating with others, this ideal environment goes up in smoke. That’s because tools such as git, the most popular approach to asynchronous collaboration, make notebooks unusable. Literally. Here’s what it looks like if you and a colleague both modify a notebook cell (including, in many cases, simply executing a cell without changing it), and then try to open that notebook later: The reason for this stems from a fundamental incompatibility between the format Jupyter notebooks use (JSON) and the format that git conflict markers assume by default (plain lines of text). This is what it looks like when git adds its conflict markers to a notebook: That’s not valid JSON, and therefore Jupyter can’t open it. Conflicts are particularly common in notebooks, because Jupyter changes the following every time you run a notebook: All these changes to notebook files also make git diffs of notebooks very verbose. This can make code reviews a challenge, and make git repos more bulky than necessary. The result of these problems is that many Jupyter users feel that collaborating with notebooks is a clunky, error-prone, and frustrating experience.
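The incompatibility described above is easy to demonstrate: splice git’s default line-based conflict markers into a notebook-style JSON fragment and the result no longer parses. A minimal sketch (the fragment below is illustrative, not a complete notebook file):

```python
import json

# A notebook-style JSON fragment with git's default line-based conflict
# markers spliced in, roughly as git leaves a file after a merge conflict.
conflicted = """\
{
 "cells": [
  {"cell_type": "code",
<<<<<<< HEAD
   "source": ["x = 1"],
=======
   "source": ["x = 2"],
>>>>>>> colleague-branch
   "outputs": []}
 ]
}
"""

parse_error = None
try:
    json.loads(conflicted)
except json.JSONDecodeError as err:
    parse_error = err

# The markers are not valid JSON, so parsing fails -- and so does Jupyter.
print("parses as JSON:", parse_error is None)
```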
(We’ve even seen people on social media describe Jupyter’s notebook format as “stupid” or “terrible”, despite otherwise professing their love for the software!) It turns out, however, that Jupyter and git can work together extremely well, with none of the above problems at all. All that’s needed is a bit of special software…

The solution

Jupyter and git are both well-designed software systems that provide many powerful extensibility mechanisms. It turns out that we can use these to fully and automatically solve the Jupyter+git problem. We identified two categories of problems in the previous section: In our newly released nbdev2, an open source Jupyter-based development platform, we’ve solved each of these problems: Here’s what a conflict looks like in Jupyter with nbdev’s merge driver: As you see, the local and remote changes are each clearly displayed as separate cells in the notebook, allowing you to simply delete the version you don’t want to keep, or combine the two cells as needed. The techniques used to make the merge driver work are quite fascinating – let’s dive into the details! We provide here a summary of the git merge driver – for full details and source code see the nbdev.merge docs. Amazingly enough, the entire implementation is just 58 lines of code! The basic idea is to first “undo” the original git merge which created the conflict, and then “redo” it at a cell level (instead of a line level), looking only at cell source (not outputs or metadata). The “undoing” is straightforward: just create two copies of the conflicted file (representing the local and remote versions of the file), go through each git conflict marker, and replace the conflict section with either the local or remote version of the code. Now that we’ve got the original local and remote notebooks, we can load the JSON using execnb.nbio, which will then give us an array of cells for each notebook.
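The “undo” step can be sketched in a few lines. This is not nbdev’s actual code (see the nbdev.merge docs for that); it is a minimal illustration, assuming standard `<<<<<<<`/`=======`/`>>>>>>>` markers, of how one conflicted file yields back the two versions that produced it:

```python
def split_conflicted(text):
    """Recover the (local, remote) versions of a file from its
    git-conflicted contents by keeping one side of each conflict."""
    local, remote, side = [], [], None
    for line in text.splitlines():
        if line.startswith("<<<<<<<"):
            side = "local"          # entering a conflict: local side first
        elif line.startswith("=======") and side == "local":
            side = "remote"         # separator: remote side follows
        elif line.startswith(">>>>>>>"):
            side = None             # leaving the conflict
        elif side == "local":
            local.append(line)
        elif side == "remote":
            remote.append(line)
        else:                       # common line, present in both versions
            local.append(line)
            remote.append(line)
    return "\n".join(local), "\n".join(remote)

conflicted = "a\n<<<<<<< HEAD\nb\n=======\nc\n>>>>>>> other\nd"
local, remote = split_conflicted(conflicted)
print(local)   # a, b, d on separate lines
print(remote)  # a, c, d on separate lines
```

Running the real merge driver then re-diffs these two recovered versions cell by cell rather than line by line.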
Now we’re up to the interesting bit – creating cell-level diffs based only on the cell source. The Python standard library contains a very flexible and effective implementation of a diff algorithm in the difflib module. In particular, the SequenceMatcher class provides the fundamental building blocks for implementing your own conflict resolution system. We pass the two sets of cells (remote and local) to SequenceMatcher(...).get_matching_blocks(), and it returns a list of each section of cells that match (i.e. have no conflicts/differences). We can then go through each matching section and copy it into the final notebook, and through each non-matching section, copying in both the local and remote cells (adding cells between them to mark the conflicts). Making SequenceMatcher work with notebook cells (represented in nbdev by the NbCell class) requires only adding __hash__ and __eq__ methods to NbCell. In each case, these methods are defined to look only at the actual source code, and not at any metadata or outputs. As a result, SequenceMatcher will only show differences in source code, and will ignore differences in everything else. With a single line of configuration, we can ask git to call our Python script, instead of its default line-based implementation, any time it is merging changes. nbdev_install_hooks sets up this configuration automatically, so after running it, git conflicts become much less common, and never result in broken notebooks. Solving git merges locally is extremely helpful, but we need to solve them remotely as well. For instance, if a contributor submits a pull request (PR), and then someone else commits to the same notebook before the PR is merged, the PR might now have a conflict like this: This conflict shows that the two contributors have run cells in different orders (or perhaps one added a couple of cells above in the notebook), so their commits have conflicting execution counts.
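A toy version of the cell-level merge might look like the sketch below. The Cell class here is a hypothetical stand-in for nbdev’s NbCell, and merge_cells is an illustration of the SequenceMatcher approach rather than nbdev’s implementation; note that hashing and equality consider only the source, exactly as described above:

```python
from difflib import SequenceMatcher

class Cell:
    """Stand-in for nbdev's NbCell: equality and hashing look only at the
    source, so differing outputs/metadata never register as conflicts."""
    def __init__(self, source, outputs=None):
        self.source, self.outputs = source, outputs or []
    def __eq__(self, other): return self.source == other.source
    def __hash__(self): return hash(self.source)

def merge_cells(local, remote):
    """Cell-level merge: copy each matching run of cells once, and wrap
    each mismatched run in conflict-marker cells."""
    merged, l_idx, r_idx = [], 0, 0
    sm = SequenceMatcher(None, local, remote, autojunk=False)
    for a, b, size in sm.get_matching_blocks():
        if l_idx < a or r_idx < b:        # non-matching run -> conflict section
            merged.append(Cell("<<<<<<< local"))
            merged.extend(local[l_idx:a])
            merged.append(Cell("======="))
            merged.extend(remote[r_idx:b])
            merged.append(Cell(">>>>>>> remote"))
        merged.extend(local[a:a + size])  # matching run, copied once
        l_idx, r_idx = a + size, b + size
    return merged

local  = [Cell("import re"), Cell("x = 1")]
remote = [Cell("import re"), Cell("x = 2")]
merged = merge_cells(local, remote)
print([c.source for c in merged])
# ['import re', '<<<<<<< local', 'x = 1', '=======', 'x = 2', '>>>>>>> remote']
```

The conflict cells land inside the notebook as ordinary cells, which is why the remaining conflicts can be resolved without ever leaving Jupyter.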
GitHub will refuse to allow this PR to be merged until this conflict is fixed. But of course we don’t really care about the conflict at all – it doesn’t matter what, if any, execution count is stored in the notebook. So we’d really prefer to ignore this difference entirely! Thankfully, Jupyter provides a “pre-save” hook which allows code to be run every time a notebook is saved. nbdev uses this to set up a hook which removes all unnecessary metadata (including execution_count) on saving. That means there’s no pointless conflicts like the one above, because no commits will have this information stored in the first place. Background Here at fast.ai we use Jupyter for everything. All our tests, documentation, and module source code for all of our many libraries is entirely developed in notebooks (using nbdev, of course!) And we use git for all our libraries too. Some of our repositories have many hundreds of contributors. Therefore solving the Jupyter+git problem has been critical for us. The solution presented here is the result of years of work by many people. Our first approach, developed by Stas Bekman and me, was to use git “smudge” and “clean” filters that automatically rewrote all notebook json to remove unneeded metadata when committing. This helped a bit, but git quite often ended up in an odd state where it was impossible to merge. In nbdev v1 Sylvain Gugger created an amazing tool called nbdev_fix_merge which used very clever custom logic to manually fix merge conflicts in notebooks, to ensure that they could opened in Jupyter. For nbdev v2 I did a from-scratch rewrite of every part of the library, and I realised that we could replace the custom logic with the SequenceMatcher approach described above. None of these steps fully resolved the Jupyter+git problem, since we were getting frequent merge errors caused by the smudge/clean git filters, and conflicts required manually running nbdev_fix_merge. 
Wasim Lorgat realised that we could resolve the smudge/clean issue by moving that logic into an nbdev save hook, and avoid the manual fix step by moving that logic into a git merge driver. This resolved the final remaining issues! (I was actually quite stunned that Wasim went from our first discussion of the outstanding problems, to figuring out how to solve all of them, in the space of about two days…) The result The new tools in nbdev2, which we’ve been using internally for the last few months, have been transformational to our workflow. The Jupyter+git problem has been totally solved. I’ve seen no unnecessary conflicts, cell-level merges have worked like magic, and on the few occassions where I’ve changed the source in the same cell as a collaborator, fixing the conflict in Jupyter has been straightforward and convenient. Postscript: other Jupyter+git tools There is one other tool which we’ve found very helpful in using Jupyter with git, which is ReviewNB. ReviewNB solves the problem of doing pull requests with notebooks. GitHub’s code review GUI only works well for line-based file formats, such as plain python scripts. This works fine with the Python modules that nbdev exports, and I often do reviews directly on the Python files, instead of the source notebooks. However, much of the time I’d rather do reviews on the source notebooks, because: For this purpose, ReviewNB is perfect. Just like nbdev makes git merges and commits Jupyter-friendly, ReviewNB makes code reviews Jupyter-friendly. A picture is worth a thousand words, so rather than trying to explain, I’ll just show this picture from the ReviewNB website of what PRs look like in their interface: Another potential solution to the Jupyter+git problem might be to use Jupytext. Jupytext saves notebooks in a line-based format, instead of in JSON. This means that all the usual git machinery, such as merges and PRs, works fine. 
Jupytext can even use Quarto’s format, qmd, as a format for saving notebooks, which then can be used to generate a website. Jupytext can be a bit tricky to manage when you want to save your cell outputs (which I generally want to do, since many of my notebooks take a long time to run – e.g training deep learning models.) Whilst Jupytext can save outputs in a linked ipynb file, managing this linkage gets complex, and ends up with the Jupyter+git problem all over again! If you don’t need to save outputs, then you might find Jupytext sufficient – although of course you’ll miss out on the cell-based code reviews of ReviewNB and your users won’t be able to read your notebooks properly when they’re browsing GitHub. There’s also an interesting project called nbdime which has its own git drivers and filters. Since they’re not really compatible with nbdev (partly because they tackle some of the same problems in different ways) I haven’t used them much, so haven’t got an informed opinion about them. However I do use nbdime’s Jupyter extension sometimes, which provides a view similar to ReviewNB, but for local changes instead of PRs. If you want to try to yourself, follow the directions on Git-friendly Jupyter to get started.
========================================
[SOURCE: https://www.fast.ai/posts/2022-08-25-jupyter-git.html] | [TOKENS: 2254]
The Jupyter+git problem is now solved Jeremy Howard August 25, 2022

Jupyter notebooks don’t work with git by default. With nbdev2, the Jupyter+git problem has been totally solved. It provides a set of hooks which provide clean git diffs, solve most git conflicts automatically, and ensure that any remaining conflicts can be resolved entirely within the standard Jupyter notebook environment. To get started, follow the directions on Git-friendly Jupyter.

The Jupyter+git problem

Jupyter notebooks are a powerful tool for scientists, engineers, technical writers, students, teachers, and more. They provide an ideal environment for interactively exploring data and code, writing programs, and documenting the results as dashboards, books, or blogs. But when collaborating with others, this ideal environment goes up in smoke. That’s because tools such as git, the most popular approach to asynchronous collaboration, make notebooks unusable. Literally. Here’s what it looks like if you and a colleague both modify a notebook cell (including, in many cases, simply executing a cell without changing it), and then try to open that notebook later:

The reason for this stems from a fundamental incompatibility between the format Jupyter notebooks use (JSON) and the format that git conflict markers assume by default (plain lines of text). This is what it looks like when git adds its conflict markers to a notebook:

That’s not valid JSON, and therefore Jupyter can’t open it. Conflicts are particularly common in notebooks, because Jupyter changes cell execution counts, outputs, and metadata every time you run a notebook. All these changes to notebook files also make git diffs of notebooks very verbose. This can make code reviews a challenge, and make git repos more bulky than necessary. The result of these problems is that many Jupyter users feel that collaborating with notebooks is a clunky, error-prone, and frustrating experience.
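The incompatibility is easy to demonstrate directly. Here’s a minimal sketch (not from the post itself) that feeds a conflict-marked notebook fragment to Python’s JSON parser:

```python
import json

# A minimal notebook fragment as git leaves it after a conflicting merge.
# The <<<<<<< / ======= / >>>>>>> markers are plain text lines, which git
# happily inserts into any file -- but they are not valid JSON.
conflicted = """{
 "cells": [
  {
   "cell_type": "code",
<<<<<<< HEAD
   "execution_count": 7,
=======
   "execution_count": 12,
>>>>>>> their-branch
   "source": ["print('hi')"]
  }
 ]
}"""

try:
    json.loads(conflicted)
    print("parsed fine")
except json.JSONDecodeError as err:
    # This is why Jupyter refuses to open a conflicted notebook
    print("not valid JSON:", err)
```

Any JSON-based tool hits the same wall, which is why the fix has to happen in git itself (via a merge driver) rather than in Jupyter.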
(We’ve even seen people on social media describe Jupyter’s notebook format as “stupid” or “terrible”, despite otherwise professing their love for the software!) It turns out, however, that Jupyter and git can work together extremely well, with none of the above problems at all. All that’s needed is a bit of special software…

The solution

Jupyter and git are both well-designed software systems that provide many powerful extensibility mechanisms. It turns out that we can use these to fully and automatically solve the Jupyter+git problem. We identified two categories of problems in the previous section. In our newly released nbdev2, an open source Jupyter-based development platform, we’ve solved each of them. Here’s what a conflict looks like in Jupyter with nbdev’s merge driver:

As you can see, the local and remote changes are each clearly displayed as separate cells in the notebook, allowing you to simply delete the version you don’t want to keep, or combine the two cells as needed. The techniques used to make the merge driver work are quite fascinating – let’s dive into the details!

We provide here a summary of the git merge driver – for full details and source code see the nbdev.merge docs. Amazingly enough, the entire implementation is just 58 lines of code! The basic idea is to first “undo” the original git merge which created the conflict, and then “redo” it at a cell level (instead of a line level), looking only at cell source (not outputs or metadata). The “undoing” is straightforward: just create two copies of the conflicted file (representing the local and remote versions of the file), go through each git conflict marker, and replace the conflict section with either the local or remote version of the code. Now that we’ve got the original local and remote notebooks, we can load the JSON using execnb.nbio, which gives us an array of cells for each notebook.
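The “undo” step can be sketched in a few lines of plain Python. This is an illustrative reimplementation, not nbdev’s actual code (see the nbdev.merge source for that):

```python
def split_conflicts(text):
    "Recreate the local and remote versions of a git-conflicted file."
    local, remote, state = [], [], None
    for line in text.splitlines(keepends=True):
        if line.startswith('<<<<<<<'):   state = 'local'    # conflict starts
        elif line.startswith('=======') and state == 'local': state = 'remote'
        elif line.startswith('>>>>>>>'): state = None       # conflict ends
        elif state == 'local':  local.append(line)
        elif state == 'remote': remote.append(line)
        else:                   # common text goes into both versions
            local.append(line); remote.append(line)
    return ''.join(local), ''.join(remote)

conflicted = "a\n<<<<<<< HEAD\nours\n=======\ntheirs\n>>>>>>> other\nb\n"
print(split_conflicts(conflicted))  # ('a\nours\nb\n', 'a\ntheirs\nb\n')
```

With the two clean versions recovered, each one is valid JSON again and can be parsed into its list of cells.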
Now we’re up to the interesting bit – creating cell-level diffs based only on the cell source. The Python standard library contains a very flexible and effective implementation of a diff algorithm in the difflib module. In particular, the SequenceMatcher class provides the fundamental building blocks for implementing your own conflict resolution system. We pass the two sets of cells (remote and local) to SequenceMatcher(...).get_matching_blocks(), and it returns a list of each section of cells that match (i.e. have no conflicts/differences). We can then go through each matching section and copy it into the final notebook, and go through each non-matching section and copy in both the remote and local cells (adding cells between them to mark the conflicts). Making SequenceMatcher work with notebook cells (represented in nbdev by the NbCell class) requires only adding __hash__ and __eq__ methods to NbCell. In each case, these methods are defined to look only at the actual source code, and not at any metadata or outputs. As a result, SequenceMatcher will only show differences in source code, and will ignore differences in everything else.

With a single line of configuration, we can ask git to call our Python script, instead of its default line-based implementation, any time it is merging changes. nbdev_install_hooks sets up this configuration automatically, so after running it, git conflicts become much less common, and never result in broken notebooks.

Solving git merges locally is extremely helpful, but we need to solve them remotely as well. For instance, if a contributor submits a pull request (PR), and then someone else commits to the same notebook before the PR is merged, the PR might now have a conflict like this:

This conflict shows that the two contributors have run cells in different orders (or perhaps one added a couple of cells above in the notebook), so their commits have conflicting execution counts.
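The cell-level merge described above can be sketched with the standard library alone. Cell here is a hypothetical stand-in for nbdev’s NbCell, just to show the __hash__/__eq__ trick and the get_matching_blocks loop:

```python
from difflib import SequenceMatcher

class Cell:
    "Hypothetical stand-in for nbdev's NbCell: identity depends only on source."
    def __init__(self, source, outputs=None):
        self.source, self.outputs = source, outputs or []
    def __hash__(self): return hash(self.source)              # ignore outputs/metadata
    def __eq__(self, other): return self.source == other.source

def merge_cells(local, remote):
    "Copy matching runs of cells once; mark non-matching runs as conflicts."
    res, i, j = [], 0, 0
    for a, b, n in SequenceMatcher(a=local, b=remote).get_matching_blocks():
        if i < a or j < b:                 # non-matching section: keep both sides
            res.append(Cell('<<<<<<< local'))
            res += local[i:a]
            res.append(Cell('======='))
            res += remote[j:b]
            res.append(Cell('>>>>>>> remote'))
        res += local[a:a+n]                # matching section: copy once
        i, j = a + n, b + n
    return res

loc = [Cell('x=1'), Cell('print(x)', outputs=['1']), Cell('done')]
rem = [Cell('x=1'), Cell('print(x+1)'), Cell('done')]
print([c.source for c in merge_cells(loc, rem)])
# ['x=1', '<<<<<<< local', 'print(x)', '=======', 'print(x+1)', '>>>>>>> remote', 'done']
```

Note how the first and last cells merge cleanly even though one side has outputs attached: because equality looks only at source, output-only differences never produce a conflict.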
GitHub will refuse to allow this PR to be merged until this conflict is fixed. But of course we don’t really care about the conflict at all – it doesn’t matter what, if any, execution count is stored in the notebook. So we’d really prefer to ignore this difference entirely! Thankfully, Jupyter provides a “pre-save” hook which allows code to be run every time a notebook is saved. nbdev uses this to set up a hook which removes all unnecessary metadata (including execution_count) on saving. That means there are no pointless conflicts like the one above, because no commits will have this information stored in the first place.

Background

Here at fast.ai we use Jupyter for everything. All our tests, documentation, and module source code for all of our many libraries are entirely developed in notebooks (using nbdev, of course!) And we use git for all our libraries too. Some of our repositories have many hundreds of contributors. Therefore solving the Jupyter+git problem has been critical for us. The solution presented here is the result of years of work by many people. Our first approach, developed by Stas Bekman and me, was to use git “smudge” and “clean” filters that automatically rewrote all notebook JSON to remove unneeded metadata when committing. This helped a bit, but git quite often ended up in an odd state where it was impossible to merge. In nbdev v1 Sylvain Gugger created an amazing tool called nbdev_fix_merge which used very clever custom logic to manually fix merge conflicts in notebooks, to ensure that they could be opened in Jupyter. For nbdev v2 I did a from-scratch rewrite of every part of the library, and I realised that we could replace the custom logic with the SequenceMatcher approach described above. None of these steps fully resolved the Jupyter+git problem, though: we were still getting frequent merge errors caused by the smudge/clean git filters, and conflicts required manually running nbdev_fix_merge.
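The pre-save hook mentioned above can be sketched as follows. The signature follows Jupyter’s contents-manager pre_save_hook contract, but the function name and the exact fields stripped here are illustrative – nbdev’s real hook does more:

```python
def strip_volatile_metadata(model, **kwargs):
    "Pre-save hook: drop fields that change on every run of a notebook."
    if model.get('type') != 'notebook':
        return                                 # only process notebooks
    for cell in model['content'].get('cells', []):
        if cell.get('cell_type') == 'code':
            cell['execution_count'] = None     # the field behind the conflict above
            for out in cell.get('outputs', []):
                out.pop('execution_count', None)
        cell.get('metadata', {}).clear()       # per-cell metadata

# Registered in jupyter_notebook_config.py with something like:
# c.FileContentsManager.pre_save_hook = strip_volatile_metadata
```

Because the hook runs before the notebook reaches disk, the volatile fields never make it into a commit, so they can never conflict.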
Wasim Lorgat realised that we could resolve the smudge/clean issue by moving that logic into an nbdev save hook, and avoid the manual fix step by moving that logic into a git merge driver. This resolved the final remaining issues! (I was actually quite stunned that Wasim went from our first discussion of the outstanding problems, to figuring out how to solve all of them, in the space of about two days…)

The result

The new tools in nbdev2, which we’ve been using internally for the last few months, have been transformational to our workflow. The Jupyter+git problem has been totally solved. I’ve seen no unnecessary conflicts, cell-level merges have worked like magic, and on the few occasions where I’ve changed the source in the same cell as a collaborator, fixing the conflict in Jupyter has been straightforward and convenient.

Postscript: other Jupyter+git tools

There is one other tool which we’ve found very helpful in using Jupyter with git: ReviewNB. ReviewNB solves the problem of doing pull requests with notebooks. GitHub’s code review GUI only works well for line-based file formats, such as plain Python scripts. This works fine with the Python modules that nbdev exports, and I often do reviews directly on the Python files, instead of the source notebooks. However, much of the time I’d rather do reviews on the source notebooks. For this purpose, ReviewNB is perfect. Just like nbdev makes git merges and commits Jupyter-friendly, ReviewNB makes code reviews Jupyter-friendly. A picture is worth a thousand words, so rather than trying to explain, I’ll just show this picture from the ReviewNB website of what PRs look like in their interface:

Another potential solution to the Jupyter+git problem might be to use Jupytext. Jupytext saves notebooks in a line-based format, instead of in JSON. This means that all the usual git machinery, such as merges and PRs, works fine.
Jupytext can even use Quarto’s format, qmd, as a format for saving notebooks, which can then be used to generate a website. Jupytext can be a bit tricky to manage when you want to save your cell outputs (which I generally want to do, since many of my notebooks take a long time to run – e.g. training deep learning models). Whilst Jupytext can save outputs in a linked ipynb file, managing this linkage gets complex, and ends up with the Jupyter+git problem all over again! If you don’t need to save outputs, then you might find Jupytext sufficient – although of course you’ll miss out on the cell-based code reviews of ReviewNB, and your users won’t be able to read your notebooks properly when they’re browsing GitHub.

There’s also an interesting project called nbdime which has its own git drivers and filters. Since they’re not really compatible with nbdev (partly because they tackle some of the same problems in different ways) I haven’t used them much, so haven’t got an informed opinion about them. However I do use nbdime’s Jupyter extension sometimes, which provides a view similar to ReviewNB, but for local changes instead of PRs. If you want to try it yourself, follow the directions on Git-friendly Jupyter to get started.
========================================
[SOURCE: https://www.bbc.com/news/articles/ckg1dl410q9o] | [TOKENS: 2806]
The Chinese AI app sending Hollywood into a panic
Osmond Chia, Business reporter, and Suranjana Tewari, Asia Business Correspondent

A new artificial intelligence (AI) model developed by the Chinese company behind TikTok rocked Hollywood this week - not just because of what it can do, but what it could mean for creative industries. Created by tech giant ByteDance, Seedance 2.0 can generate cinema-quality video, complete with sound effects and dialogue, from just a few written prompts. Many of the clips said to have been made using Seedance, and featuring popular characters like Spider-Man and Deadpool, went viral. Major studios like Disney and Paramount quickly accused ByteDance of copyright infringement, but concerns about the technology run deeper than legal issues.

What is Seedance - and why the stir?

Seedance was launched to little fanfare in June 2025, but it is the second version, which came eight months later, that has caused a major stir. "For the first time, I'm not thinking that this looks good for AI. Instead, I'm thinking that this looks straight out of a real production pipeline," says Jan-Willem Blom from creative studio Videostate. Western AI video models have made progress in processing user instructions to make stunning images, he adds, but Seedance seems to have tied everything together. Like other AI tools - Midjourney and OpenAI's Sora - Seedance can create videos from short text prompts.
In some cases just one prompt seems to be producing high-quality videos. It is particularly impressive because it combines text, visuals and audio in a single system, AI ethics researcher Margaret Mitchell says. Seedance's impact is being measured by an unlikely benchmark: how well it generates a clip of Will Smith eating spaghetti. Not only can Seedance create a remarkably life-like version of the star tucking into a plate of pasta, it has also spawned viral videos of Smith battling a spaghetti monster - and it looks and feels like a big-budget movie. Many industry experts and filmmakers believe Seedance is a new chapter in the development of video-generating technology. The complex action sequences it is producing look more realistic than its competitors', says David Kwok, who runs a Singapore-based animation studio called Tiny Island Productions. "It almost feels like having a cinematographer or director of photography specialising in action films assisting you."

The promise - and the challenge

Seedance has run into trouble over copyright issues, a growing challenge in the age of AI. Experts warn that AI companies are prioritising technology over people as they make more powerful tools and use data without paying for it.

[Image: Disney owns the rights to multiple franchises, including Marvel's superheroes]

Major Hollywood groups have cried foul over Seedance's use of copyrighted characters like Spider-Man and Darth Vader. Disney and Paramount issued cease-and-desist letters demanding that Seedance stop using their content. Japan is also investigating ByteDance for alleged copyright violations, after AI videos of popular anime characters went viral. ByteDance has said it was taking steps to "strengthen current safeguards". This is not unique to the Chinese firm. In 2023, the New York Times sued OpenAI and Microsoft, alleging they used its articles without permission to train their AI models.
Reddit sued Perplexity last year, claiming the AI firm had illegally scraped user posts. Disney raised similar concerns with Google. Clearly labelling content to prevent deception and building public trust in AI is far more important than "cooler-looking" videos, Mitchell says. And that's why developers must build systems that manage licensing and payments, and provide clear mechanisms for people to contest misuse, she adds. Disney, for instance, signed a $1bn (£730m) deal with OpenAI's Sora so it could use characters from Star Wars, Pixar and Marvel. Seedance's developers were likely to have been aware of potential copyright issues around the use of Western IP and took a risk anyway, says Shaanan Cohney, a computing researcher at the University of Melbourne. "There's plenty of leeway to bend the rules strategically, to flout the rules for a while and get marketing clout," he adds. Meanwhile, for small firms, Seedance is too useful to ignore. Kwok, from Singapore's Tiny Island Productions, says AI of this quality will allow companies like his to create films that would cost far more than they can otherwise afford. He gave the example of Asia's booming short-form videos and micro-dramas that typically run on small budgets - roughly $140,000 for as many as 80 episodes under two minutes each. These productions have been sticking to romance or family drama to keep costs down, as they need fewer visual effects. But now AI can "elevate low-budget productions into more ambitious genres such as sci-fi, period drama and, now, action", Kwok says.

[Image: Humanoid robots at a factory in Shanghai]

Is China racing ahead?

Seedance once again puts Chinese tech in the spotlight. "It signals that Chinese models are at the very least matching at the frontier of what is available," Cohney says.
"If ByteDance can produce this seemingly out of nowhere, what other kinds of models do Chinese companies have in store?" Last year DeepSeek, another Chinese AI model, sent shockwaves around the world with its low-cost large language model. It quickly overtook ChatGPT as the most-downloaded free app on Apple's US store. In the year since, Beijing has put AI and robotics at the core of its economic strategy, investing heavily in advanced computer chip production, automation and generative AI as it bids for a technological edge over the US. While Seedance 2.0 was making headlines, other big Chinese firms had lower-profile rollouts of their new generative AI tools ahead of the Lunar New Year holiday. The Spring Festival is increasingly becoming an "AI holiday," with firms timing launches for a period when millions of people are at home and experimenting with new apps, China analyst Bill Bishop wrote in his newsletter. He predicts 2026 could mark a turning point for mass AI adoption in China - not just chatbots, but also AI agents handling transactions, coding tools incorporated into everyday work, and video creators routinely using AI.
========================================