Fig. 7 shows the importance of delayed optimization for decoupling skeletal deformation and non-rigid deformation. When not decoupled well, generalization to new views is much poorer, as shown in Fig. 8.
Figure 4. Qualitative comparison to HyperNeRF [48].
human motions are also more extreme than the examples shown to ...
HumanNeRF- Free-viewpoint Rendering of Moving People from Monocular Video
where Guanaco fails compared to ChatGPT. We release all of our models and code, including CUDA kernels for 4-bit training.2
QLORA
6 ACKNOWLEDGEMENTS We thank Kenneth Li, Sonja Johnson-Yu, Daniel Bashir, Zhou Fan, and Safwan Hossain for their feedback and discussions about this paper. We also thank Microsoft Azure and the Harvard Data Science Initiative for access to compute. The first author is supported by an NSF Graduate Research Fellowship an...
CHAIN-OF-THOUGHT REASONING IS A POLICY IMPROVEMENT OPERATOR
ish the task better. (1) Don't search the same entity two times since the results are always the same. (2) When the search action doesn't find the corresponding page, you should try to search for a similar entity. (3) When the search action returns a page which is not related to the question, you should disambiguate the entity to find other entities that share similar names with the curr...
Tool Learning with Foundation Models
Self-Instruct, Alpaca, Unnatural Instructions The Self-Instruct, Alpaca, and Unnatural Instructions datasets [59, 55, 26] are instruction tuning datasets collected with various approaches of model distillation from GPT-3 Instruct and ChatGPT. They rely on prompting, in-context learning, and paraphrasing to come up wi...
QLORA
A.2 Training Setup for the Text-Music Pairs For the textual description, we use metadata such as the title, author, album, genre, and year of release. Given that a song could span longer than 44s, we append a string indicating which chunk is currently being trained on, together with the total chunks the song is made ...
MOUSAI
Recent advances in natural language processing (NLP) have made significant progress toward the key challenge of natural interaction with humans. In November 2022, OpenAI first introduced ChatGPT [1], a large dialogue language model, which has attracted considerable attention for its high-quality generated text. ChatGPT is mo...
BadGPT- Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Noam Shazeer and Mitchell Stern. 2018. Aman Sinha, Hongseok Namkoong, and John Duchi. Certifiable distributional robustness with principled adversarial training. In International Conference on Learning Representations (ICLR), 2018. Jasper Snoek, Hugo Larochelle, and Ryan P. Adams. Practical Bayesian optimization of ma...
DoReMi- Optimizing Data Mixtures Speeds Up Language Model Pretraining
Product-Led AI | Greylock https://greylock.com/greymatter/seth-rosenberg-product-led-ai/
Product-Led AI _ Greylock
[Khattab et al., 2022] Omar Khattab, Keshav Santhanam, Xiang Lisa Li, David Hall, Percy Liang, Christopher Potts, and Matei Zaharia. Demonstrate-search-predict: Compos- ing retrieval and language models for knowledge-intensive nlp. arXiv preprint arXiv:2212.14024, 2022. [Kwiatkowski et al., 2019] Tom Kwiatkowski, Jenn...
Retrieval-Augmented Generation for Large Language Models - A Survey
Stanford CRFM https://crfm.stanford.edu/2023/03/13/alpaca.html
Stanford alpha CRFM
types and levels of abstraction, 2022. [58] A. Voynov, K. Aberman, and D. Cohen-Or. Sketch-guided text-to-image diffusion models. 2022. [59] T. Wang, T. Zhang, B. Zhang, H. Ouyang, D. Chen, Q. Chen, and F. Wen. Pretraining is all you need for image-to-image translation, 2022. [60] T.-C. Wang, M.-Y. Liu, J.-Y. ...
Adding Conditional Control to Text-to-Image Diffusion Models
[27] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for NLP. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning...
E5
[53] Nikola Marangunić and Andrina Granić. 2015. Technology acceptance model: a literature review from 1986 to 2013. Universal Access in the Information Society 14, 1 (March 2015), 81–95. https://doi.org/10.1007/s10209-014-0348-1 [54] Simone Marcheschi, Fabio Salsedo, Marco Fontana, and Massimo Bergamasco. 2011. Body...
Society's Attitudes Towards Human Augmentation
Thomas Scialom, Tuhin Chakrabarty, and Smaranda Muresan. 2022. Fine-tuned language models are continual learners. arXiv preprint arXiv:2205.12393.
SELF-INSTRUCT- Aligning Language Model with Self Generated Instructions
We also encountered many efficiency and robustness challenges in scaling up aggregation-based methods to dynamic scenes. To efficiently model scene motion across multiple views, we model this motion using motion trajectory fields that span multiple frames, represented with learned basis functions. Furthermore, to achiev...
DynIBaR - Neural Dynamic Image-Based Rendering
BookCorpus2, EuroParl, HackerNews, YoutubeSubtitles, PhilPapers, NIH ExPorter, Enron Emails. [Table: top topic words per Pile component; the word columns are garbled in extraction (e.g. Topic #1: like, time, good, use, want, ...)]
The Pile- An 800GB Dataset of Diverse Text for Language Modeling
ve (which computes the max KL over states instead of the mean) forms a lower bound (i.e., a pessimistic bound) on the performance of the policy π. TRPO uses a hard constraint rather than a penalty because it is hard to choose a single value of β that performs well across different problems, or even within a single problem, where the characteristics change over the course of learning...
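The penalty-based alternative discussed here is what PPO's adaptive-KL variant addresses: rather than fixing a single β, the coefficient is adjusted whenever the measured KL drifts outside a band around a target. A minimal sketch of that update rule, using the 1.5x band and halve/double heuristic from the PPO paper (variable names are my own):

```python
def update_kl_coef(beta, kl, kl_target):
    """Adaptive KL penalty update: shrink beta when the measured KL is
    well below the target, grow it when well above. The 1.5x band and
    the factor of 2 are the heuristic constants from the PPO paper."""
    if kl < kl_target / 1.5:
        beta /= 2.0
    elif kl > kl_target * 1.5:
        beta *= 2.0
    return beta

# Example: the policy barely moved, so the penalty coefficient halves.
print(update_kl_coef(1.0, kl=0.001, kl_target=0.01))  # 0.5
```

The point of the update is exactly the sentence above: no fixed β works across problems, so β itself becomes a tracked quantity.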
PPO
[78] Taufik Akbar Sitompul and Markus Wallmyr. 2019. Using Augmented Reality to Improve Productivity and Safety for Heavy Machinery Operators: State of the Art. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry (Brisbane, QLD, Australia) (Vrcai ’19). Assoc...
Society's Attitudes Towards Human Augmentation
this by reducing the task of explanation generation to highlighting the right rationale. While a good
PhD Fellow in Explainable Natural Language Understanding
JARVIS-1: Open-World Multi-task Agents with Memory-Augmented Multimodal Language Models
action: craft
object_item: planks
object_number: 12
materials: {"log": 3}
tool: null
rank: 2
### Prompt 2: Goal parsing prompt in JARVIS-1
System: Here are some actions that the agent fails to perform in Minecraft. Please giv...
JARVIS-1
SQL: SELECT AVG(weight), MIN(weight), sex FROM people GROUP BY sex The execution of the SQL query above would return a table with 3 columns. The first column, "AVG(weight)" would contain the average weight. The second column, "MIN(weight)" would contain the minimum weight. The third column, "sex" would contain the ge...
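The described query can be executed against a toy in-memory table to confirm the three-column shape of the result (the table contents below are hypothetical, not from the benchmark):

```python
import sqlite3

# Hypothetical rows for the "people" table discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, weight REAL, sex TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?, ?)", [
    ("a", 60.0, "F"), ("b", 70.0, "F"), ("c", 80.0, "M"), ("d", 90.0, "M"),
])
rows = sorted(conn.execute(
    "SELECT AVG(weight), MIN(weight), sex FROM people GROUP BY sex"
).fetchall())
# One row per sex: (average weight, minimum weight, sex)
print(rows)  # [(65.0, 60.0, 'F'), (85.0, 80.0, 'M')]
```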
Teaching Large Language Models to Self-Debug
C.4 Example dialogs with grounded responses
LaMDA: Hi, I'm an automatic search aide. How can I help you, user?
User: what's the ford stock price now
LaMDA: The stock price of ford is currently at $14.24
User: should i buy
LaMDA: It's hard to say. You might want to consult a financial advisor.
User: i bel...
LaMDA- Language Models for Dialog Applications
[1] M. Aghighi, C. Bäckström, Cost-optimal and net-benefit planning–a parameterised complexity view, in: Proceedings of the 24th International Joint [2] M. Ancona, L.D. Floriani, J.S. Deogun, Path problems in structured graphs, Comput. J. 29 (1986) 553–563. [3] F. Bacchus, Q. Yang, Downward refinement and the efficiency o...
A-framework-for-analysing-state-abstraction-metho_2022_Artificial-Intelligen
The baseline dense model for 20 languages has 243M parameters. Expert scaling is investigated by adding 32, 64, 128, and 256 experts separately. The obtained model sizes are 1.6B, 2.9B, 5.6B, and 11.1B parameters. Experimental results show that the more experts there are, the higher the BLEU score and hence the better the transla...
Scaling Speech, Language and Vision Models with Mixture of Experts Technique - Microsoft Community Hub
Linear: Linear(in dim, out dim)
MLP: Linear(in dim, in dim), GELU, Linear(in dim, out dim)
Contrastive loss batch size vs. modalities. While contrastive losses do require larger batch size, this requirement didn't increase with the number of modalities. As noted in Appendix B, our experiments (Table 2) sample a mini-...
IMAGEBIND - One Embedding Space To Bind Them All
4.3.1 Reacting and Updating Plans. Generative agents operate in an action loop where, at each time step, they perceive the world around them and those perceived observations are stored in their memory stream. We prompt the language model with these observations to decide whether the agent should continue with their e...
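The perceive-store-decide loop described above can be sketched as follows; `should_react` stands in for the language-model prompt and is a hypothetical keyword rule used purely for illustration:

```python
# Stub for "prompt the language model with these observations":
# a hypothetical rule, not the actual prompting pipeline.
def should_react(observations):
    return any("on fire" in obs for obs in observations)

memory_stream = []            # append-only record of everything perceived
current_plan = "make breakfast"

for step_observations in [["the kitchen is tidy"],
                          ["the stove is on fire"]]:
    memory_stream.extend(step_observations)   # store perceptions
    if should_react(step_observations):       # ask the (stubbed) model
        current_plan = "put out the fire"     # regenerate the plan

print(current_plan)        # "put out the fire"
print(len(memory_stream))  # 2
```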
Generative Agents- Interactive Simulacra of Human Behavior
highest risk, testing these areas, and adjusting as we go. It is also iterative in the sense that we use multiple rounds of red teaming as we incorporate new layers of mitigation and control, conduct testing and refining, and repeat this process.
gpt-4-system-card
16 Figure 23. The LoT-oriented instruction templates. Instruction Templates of Image to Text. Based on Fig. 23, we can categorize the instruction templates for Image to Text into the following four types: Original Instruction Based on the image, think of a sentence that is unexpected and humorous. Let’s think outsi...
Let's Think Outside the Box
10.6 Edge computing with LLMs Deploying Large Language Models (LLMs) in edge computing environments presents unique challenges due to the inherent limitations of edge devices. These devices often face constraints in terms of battery life, computational power, and memory resources [241, 242]. Additionally, issues such ...
Beyond Efficiency
four. For example, the ‘Research’ phase may issue the following query:
LaMDA- Language Models for Dialog Applications
Our primary personality measure, the IPIP-NEO [97], is a 300-item open source representation of the commercialized Revised NEO Personality Inventory [98]. The IPIP-NEO, hailing from the questionnaire tradition (Simms et al. [96]), involves rating descriptive statements (e.g., “[I] prefer variety to routine”; 60 per ...
PersonalityTraitsinLargeLanguageModels
To fill the mentioned gap, in this work, we conduct privacy analyses of the state-of-the-art LLMs and study their privacy implications. We follow the setting of previous works to evaluate the privacy leakage issues of ChatGPT thoroughly and show that previous prompts are insufficient to extract personally identifiable i...
Multi-step Jailbreaking Privacy Attacks on ChatGPT
Empirical studies have also taught us about the mechanisms that undergird worldview backfire effects. Consistent with a motivated reasoning perspective, worldview backfire effects appear rooted in counterarguing. In one experiment, Schaffner and Roche (2017) examine differences in survey response times following the rele...
Social_Media_and_Democracy
Recent work has pushed these vision-language systems to larger scales [Ding et al., 2021, Yuan et al., 2021, Singh et al., 2022, Wang et al., 2022c, Fang et al., 2022b], based on freely available image-caption pairs collected from the internet, such as in [Schuhmann et al., 2022]. These modern SSL models are capable of...
A Cookbook of Self-Supervised Learning
[18] R. Schank, Explanation Patterns: Understanding Mechanically and Creatively, Psychology Press, 2013. [19] D. Walton, A dialogue system specification for explanation, Synthese 182 (3) (2011) 349–374. [20] C. Antaki, I. Leudar, Explaining in conversation: towards an argument model, Eur. J. Soc. Psychol. 22 (2) (1992) ...
Knowledge graphs as tools for explainable machine learning: A survey
ACM Comput. Surv., Vol. 1, No. 1, Article . Publication date: February 2022. Survey of Hallucination in Natural Language Generation
Survey of Hallucination in Natural Language Generation
Platforms historically have had little incentive to share detailed information about content removal with the public. Compiling records of evolving content takedown processes, which may use different tools and standards or be managed by different is burdensome; and any disclosure, particularly one that admits error, ca...
Social_Media_and_Democracy
[Table: per-dataset results for the small.en and medium.en checkpoints; row labels and numeric columns are garbled in extraction]
DISTIL-WHISPER
4.6 Natural language generation Due to their generative pre-training, natural language generation (NLG) rather than classification or regression has become the primary interface for large language models. Despite this, however, models’ generation quality is rarely evaluated, and NLG evaluations typically focus on Engli...
PaLM 2 Technical Report
Rashkin et al. [152] introduce a set of control codes and concatenate them with dialogue inputs to reduce the hallucination by forcing the model to be more aware of how the response relies on the knowledge evidence in the response generation. Some researchers have also tried to reduce hallucinated responses during ge...
Survey of Hallucination in Natural Language Generation
4.2 Incorporating Multiple Modalities into SSL Training Self-supervised learning need not be based on a single modality; multimodal vision-language models in particular have recently demonstrated this to great effect. Contrastive Language–Image Pre-training (CLIP) [Radford et al., 2021] and ALIGN [Jia et al., 2021] are self-...
A Cookbook of Self-Supervised Learning
to catch all the instances of pejorative content, since purposeful misspellings of words could evade the censor and still have the intended effect. Furthermore, words and their intents are always evolving, therefore any list created would likely always be outdated. Another issue pertains to sorting the words into the...
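The brittleness of a fixed word list is easy to demonstrate with an exact-match filter; the blocked term below is a neutral stand-in:

```python
# A fixed blocklist with exact token matching ("badword" is a stand-in).
blocklist = {"badword"}

def is_flagged(text):
    return any(tok in blocklist for tok in text.lower().split())

print(is_flagged("that is a badword example"))   # True
print(is_flagged("that is a b4dword example"))   # False: the misspelling evades the filter
```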
The Pile- An 800GB Dataset of Diverse Text for Language Modeling
A neural network contains many dense layers which perform matrix multiplication. The weight matrices in these layers typically have full rank. When adapting to a specific task, Aghajanyan et al. (2020) show that the pre-trained language models have a low “intrinsic dimension” and can still learn efficiently despite a ra...
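The low intrinsic-dimension observation motivates constraining the weight update to a low-rank product, as in LoRA. A plain-Python sketch (the alpha/r scaling and the zero-initialization of B follow the LoRA paper; the toy dimensions are arbitrary):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# Frozen pretrained weight W (d_out x d_in) plus a rank-r update BA:
# only A (r x d_in) and B (d_out x r) are trained. B starts at zero,
# so at initialization the adapted layer equals the pretrained one.
d_out, d_in, r, alpha = 4, 4, 1, 1.0
random.seed(0)
W = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(d_out)]
A = [[random.gauss(0, 1) for _ in range(d_in)] for _ in range(r)]
B = [[0.0] * r for _ in range(d_out)]

BA = matmul(B, A)  # d_out x d_in matrix of rank <= r
W_adapted = [[w + (alpha / r) * ba for w, ba in zip(w_row, ba_row)]
             for w_row, ba_row in zip(W, BA)]
assert W_adapted == W  # B = 0  =>  no change at initialization
```

Training then updates only the r*(d_in + d_out) entries of A and B instead of the d_in*d_out entries of W.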
LORA
From the perspective of philosophy, are artificial entities capable of agency? In a general sense, if we define agents as entities with the capacity to act, AI systems do exhibit a form of agency [5]. However, the term agent more usually refers to entities or subjects that possess consciousness, intentionality...
The Rise and Potential of Large Language Model Based Agents
tasks in real-time. We discuss potential future improvements, including the integration of a security/safety agent, expanding functionality, generating interim m...
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications – Yohei Nakajima
Alpaca: A Strong, Replicable Instruction-Following Model Authors: Rohan Taori* and Ishaan Gulrajani* and Tianyi Zhang* and Yann Dubois* and Xuechen Li* and Carlos ...
Stanford alpha CRFM
Current trends indicate that AI technologies will become more relevant in the analysis and production of art. In the last several years many universities have established Digital humanities (DH) master’s and PhD programs to educate new generations of researchers familiar with quantitative and AI-based methods and their...
UNDERSTANDING AND CREATING ART WITH AI - REVIEW AND OUTLOOK
BANMo- Building Animatable 3D Neural Models from Many Casual Videos
$\mathrm{EF}_{n,c}(\mathcal{D}; \theta) := \mathbb{E}_{x \sim \mathcal{D},\, z \sim p_c(\cdot \mid x; \theta)}[F_{n,c}(x, z)]$, where $\theta$ is the set of parameters, and $p_c(\cdot \mid x; \theta)$ is the conditional probability over hidden variables $Z$ given $x$ specified by the PC rooted at unit $c$. Similar to flows, the expected flows can be computed via a forward and backward pass of the PC (Alg. 5 and 6 in the Appendix)...
Tractable Regularization of Probabilistic Circuits
rectly leverage the world knowledge embedded in its parameters. This enables not only embodied reasoning but also question answering, as demonstrated in our experiments. Among works that output actions, perhaps most similar is the approach proposed in Gato (Reed et al., 2022) which, like PaLM-E, is a generalist multi...
PaLM-E- An Embodied Multimodal Language Model
We train the SR transformer with the MAGVIT [74] objective, and use token factorization [75] to account for the large vocabulary size. For training, the LR token sequences are obtained by tokenizing bicubic-downsampled versions of the ground truth videos and applying noise augmentation [32] in the discrete latent s...
VideoPoet
and NQ datasets. For the NLI dataset, contradiction sentences are regarded as hard negatives. The loss function is a linear interpolation between the contrastive loss $\mathcal{L}_{\text{cont}}$ for hard labels and the KL divergence $D_{\text{KL}}$ for distilling soft labels from the teacher model.
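A minimal sketch of such an interpolated objective, with an InfoNCE-style term standing in for the contrastive loss (hard negatives sit in the same score list as the positive) and a hypothetical interpolation weight `lam`:

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def contrastive_loss(scores, positive_idx):
    """InfoNCE-style loss over similarity scores against one positive."""
    return -math.log(softmax(scores)[positive_idx])

def kl_divergence(p_teacher, q_student):
    return sum(p * math.log(p / q)
               for p, q in zip(p_teacher, q_student) if p > 0)

def combined_loss(scores, positive_idx, teacher_probs, lam=0.5):
    """Linear interpolation of the hard-label contrastive term and the
    soft-label distillation term; lam is an illustrative weight."""
    q = softmax(scores)
    return (lam * contrastive_loss(scores, positive_idx)
            + (1 - lam) * kl_divergence(teacher_probs, q))
```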
E5
vision-aware LLM to judge the outputs. In all drawbench evaluations, our model beats DALL-E 2 and Stable Diffusion XL. The gap widens significantly when we use the "upsampled" captions.
Improving Image Generation with Better Captions
Privacy and security. Given that humans can be members of the agent society, the exchange of private information between users and LLM-based agents poses significant privacy and security concerns [573]. Users might inadvertently disclose sensitive personal information during their interactions, which will be reta...
The Rise and Potential of Large Language Model Based Agents
Reformer, Linear Transformer, AFT, and KDEformer, each presenting unique solutions to optimize processing speed and resource usage. Additionally, we touch upon hardware-optimized attention mechanisms and alternative non-transformer architectures, highlighting their contributions to the evolving landscape of effi...
Beyond Efficiency
Enhanced Instruction Tuning Different from conventional knowledge distillation based instruction tuning, Luo et al. (2023c,a) employed Evol-Instruct (Xu et al., 2023a) to construct the task-specific high quality instruction tuning dataset, where the seed instructions have evolved to the ones either extended in knowledg...
ChatGPT's One-year Anniversary - Are Open-Source Large Language Models Catching up
simple motions, to least-to-most prompting of reward-conditioned trajectories that can discover and represent closed-loop policies (e.g., a stabilizing controller for CartPole). While difficult to deploy today for real systems due to latency, context size limitations, and compute costs, the approach of using LLMs to dr...
Large Language Models as General Pattern Machines
FLAN-T5. Therefore, our model TANGO sets itself apart from the three existing models, making it an exciting addition to the current research in this area. It is important to note that the AudioLDM-L-Full-FT checkpoint from Liu et al. [17] was not available for our study. Therefore, we used the AudioLDM-M-Full-FT chec...
Text-to-Audio Generation using Instruction-Tuned LLM and Latent Diffusion Model
3.2. Binding modalities with images IMAGEBIND uses pairs of modalities (I, M), where I represents images and M is another modality, to learn a single joint embedding. We use large-scale web datasets with (image, text) pairings that span a wide range of semantic concepts. Additionally, we use the natural, self-supervi...
IMAGEBIND - One Embedding Space To Bind Them All
A mischievous ferret with a playful grin squeezes itself into a large glass jar, surrounded by colorful candy. The jar sits on a wooden table in a cozy kitchen, and warm sunlight filters through a nearby window. A fierce garden gnome warrior, clad in armor crafted from leaves and bark, brandishes a tiny sword and shie...
Improving Image Generation with Better Captions
In this review, we have discussed two phenomena that may contribute to the durability of misinformation post-correction: the continued influence effect and backfire effects. Though scholars have found evidence that each of these processes undermines the effectiveness of corrections, recent works have cast doubt on their ...
Social_Media_and_Democracy
Multilingual and cultural personality considerations: This work contributes evidence that at least some LLMs exhibit personality traits consistent with human personalities. We only considered English and did not make cultural considerations beyond the applied psychometrics. While the LLMs we used performed well on NL...
PersonalityTraitsinLargeLanguageModels
Improving Image Generation with Better Captions
James Betker∗† (jbetker@openai.com), Gabriel Goh∗† (ggoh@openai.com), Li Jing∗† (lijing@openai.com), Tim Brooks†, Jianfeng Wang‡, Linjie Li‡, Long Ouyang†, Juntang Zhuang†, Joyce Lee†, Yufei Guo†, Wesam Manassra†, Prafulla Dhariwal†, Casey Chu†, Yunxin Jiao†, Aditya Ramesh∗† (ara...
Improving Image Generation with Better Captions
5. Descriptions of objects, where the image generator should draw the most commonly associated object. 6. Rare single words, where the image generator should create an image somewhat associable with the requested specified image. 7. Images with text in them, where the image generator should create an image with the...
Improving Image Generation with Better Captions
the <API> token to 0. ...token, but whenever it is one of the k most likely tokens. For k = 1, this corresponds to regular greedy decoding; we instead use k = 10 to increase the disposition of our model to make use of the APIs that it has...
6We use the original davinci variant that is not finetuned on any instructions.
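The top-k trigger described here can be sketched as a simple check over the model's next-token distribution; the log-probabilities below are made up for illustration:

```python
# Emit the <API> token not only when it is the single most likely token,
# but whenever it ranks among the k most likely ones.
def should_call_api(token_logprobs, k):
    """token_logprobs: dict mapping token -> log-probability."""
    top_k = sorted(token_logprobs, key=token_logprobs.get, reverse=True)[:k]
    return "<API>" in top_k

logprobs = {"the": -0.5, "<API>": -2.0, "a": -1.0, "of": -3.0}
print(should_call_api(logprobs, k=1))   # False: greedy decoding skips the call
print(should_call_api(logprobs, k=10))  # True: a larger k triggers it
```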
Toolformer
Consider the compilation from a PGM to an HCLT (Sec. 4.1). We first note that each PGM node g uniquely corresponds to a variable scope φ of the PC. That is, all PC units corresponding to g have the same variable scope. Please first refer to Appx. B.2 for details on how to generate a HCLT given its PGM representation. In the...
LOSSLESS COMPRESSION WITH PROBABILISTIC CIRCUITS
[5] Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. Ext5: Towards extreme multi-task scaling for transfer learning. In International Conference on Learning Represe...
WizardLM- Empowering Large Language Models to Follow Complex Instructions
Table 2: MATTR (up-scaled by ×100) of the generated dataset.
...that this observation can be attributed to the enhanced generative capabilities of gpt-3.5-turbo. Lexical Diversity: We use Moving-Average Type–Token Ratio (MATTR) (Covington and McFall, 2010) to measure lexical diversity with a window size of 50, ...
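MATTR averages the type-token ratio over every sliding window of fixed size; a minimal implementation (the paper uses window 50, while the toy example below uses a smaller window so the windows are visible; the table reports the value up-scaled by ×100):

```python
def mattr(tokens, window=50):
    """Moving-Average Type-Token Ratio (Covington and McFall, 2010):
    mean of (distinct tokens / window size) over all sliding windows."""
    if len(tokens) < window:
        window = len(tokens)  # fall back to plain TTR on short texts
    ratios = [len(set(tokens[i:i + window])) / window
              for i in range(len(tokens) - window + 1)]
    return sum(ratios) / len(ratios)

tokens = ["a", "a", "b", "a", "c"]
# windows of 2: aa, ab, ba, ac -> (0.5 + 1 + 1 + 1) / 4
print(mattr(tokens, window=2))  # 0.875
```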
LaMini-LM- A Diverse Herd of Distilled Models from Large-Scale Instructions
ing the 2D conditional probabilities $\{l_i(\mathbf{x}), h_i(\mathbf{x})\}_{i=1}^{D}$ w.r.t. any $\mathbf{x}$. Since every conditional probability can be represented as the quotient of two marginals, it is equivalent to compute the two following sets of marginals: $F(\mathbf{x}) := \{p(x_1, \ldots, x_i)\}_{i=1}^{D}$. As a direct application of the marginal algorithm described in Se...
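The quotient-of-marginals identity, p(x_i | x_1..x_{i-1}) = p(x_1..x_i) / p(x_1..x_{i-1}), can be checked on a toy joint distribution over two binary variables (the probability table below is arbitrary):

```python
# Toy joint p(x1, x2) over two binary variables.
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def marginal(assignment):
    """Marginal of a partial assignment over the first len(assignment)
    variables, obtained by summing out the rest."""
    n = len(assignment)
    return sum(p for xs, p in joint.items() if xs[:n] == assignment)

# p(x2=1 | x1=1) as a quotient of two marginals:
p_x2_given_x1 = marginal((1, 1)) / marginal((1,))
print(p_x2_given_x1)  # 0.4 / 0.6
```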
LOSSLESS COMPRESSION WITH PROBABILISTIC CIRCUITS
tions. For each clip, we get a 6×2000 dimensional input and we measure the zero-shot performance for scenario classification using each clip as an independent testing sample. B.2. Few-shot evaluation details For the few-shot results in Figures 3 using the ESC and SUN datasets, we sampled k training samples per class,
IMAGEBIND - One Embedding Space To Bind Them All
4.2.2 Filtering the Search Space While in Section 4.2.1 we assigned a NatOp to each mutation in isolation, there can still be unfilled NatOps. For instance, the unfilled NatOp in the second mutation of Figure 4 leads to six possible NatOp sequences as candidates, one per available NatOp. Recall that these NatOp seq...
ProoFVer- Natural Logic Theorem Proving for Fact Verification
YuXuan Liu, Abhishek Gupta, Pieter Abbeel, and Sergey Levine. Imitation from observation: Learning to imitate behaviors from raw video via context translation. In 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 1118–1125. IEEE, 2018b. Shayne Longpre, Kartik Perisetla, Anthony Chen, Nikhil Ram...
Tool Learning with Foundation Models
(2023); Feng et al. (2023), or multi-agent dialogue (Cohen et al., 2023; Du et al., 2023). There are also domains where GPT-3.5-turbo and GPT-4 remain unbeatable, such as AI safety. Due to the large-scale RLHF (Bai et al., 2022a) involved in GPT models, they are known to demonstrate safer and more ethical behaviors, wh...
ChatGPT's One-year Anniversary - Are Open-Source Large Language Models Catching up
These descriptive findings contextualize and inform the nascent literature on the effects of exposure to online misinformation. Owing to practical and ethical restrictions, such research is necessarily conducted in artificial settings, often with convenience samples, but it provides an opportunity to check intuitions abo...
Social_Media_and_Democracy
Barnouw, E. (1966). A Tower in Babel. New York: Oxford University Press. Barthel, M., & Mitchell, A. (2017). Democrats, Republicans Now Split on Support for Watchdog Role. Pew Research Center report. www.journalism.org/2017/05/10/ democrats-republicans-now-split-on-support-for-watchdog-role Belford, A., Cvetkovska, S....
Social_Media_and_Democracy
Let's Think Outside the Box
4. Method As per our problem formulation in Section 3.2, we propose a multi-view cross-domain diffusion scheme, which operates on two distinct domains to generate multi-view consistent normal maps and color images. The overview of our method is presented in Figure 2. First, our method adopts a multi-view diffusion ...
Wonder3D
to climate change messaging (Nisbet et al. 2015; Ma, Dixon, and Hmielowski 2019). A deeper focus on psychological reactance may therefore help reconcile previously perplexing findings in the misinformation literature. Some accounts of the continued influence effect posit that individuals continue to endorse misinformatio...
Social_Media_and_Democracy
io-awareness. In Advances in Neural Information Processing Systems (NeurIPS), 2022. [87] F. Paischer, T. Adler, V. Patil, A. Bitto-Nemling, M. Holzleitner, S. Lehner, H. Eghbal-Zadeh, and S. Hochreiter. History compression via language models in reinforcement learning. In International Conference on Machine Learning (...
Large Language Models as General Pattern Machines
[29] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. In CVPR, 2023. 2, 3 [30] Minghua Liu, Chao Xu, Haian Jin, Linghao Chen, Zexiang Xu, and Hao Su. One-2-3-45: Any single i...
Wonder3D
[Figure: solve rate (10 attempts) vs. sample budget (10^1 to 10^5 samples), comparing training on the full dataset against 50%, 20%, and 10% of problems, and against 50%, 20%, and 10% of solutions] Competition-Level Code Generation with AlphaCode
alphacode
What happened at Martin Lake has happened at dozens of Vistra’s other power plants, with more than 400 AI models (and counting) deployed across the company’s fleet to help operators make even better decisions. It also reflects a core trait of Vistra’s AI transformation, which is that it isn’t a story of one massi...
an-ai-power-play-fueling-the-next-wave-of-innovation-in-the-energy-sector-may-2022
[224] Anubhav Johri, Ashish Tripathi, et al. 2019. Parkinson disease detection using deep neural networks. In 2019 twelfth international conference on contemporary computing (IC3). IEEE, 1–4. [225] Yooncheol Ju, Ilhwan Kim, Hongsun Yang, Ji-Hoon Kim, Byeongyeol Kim, Soumi Maiti, and Shinji Watanabe. 2022. TriniTTS: ...
AReviewofDeepLearningTechniquesforSpeechProcessing
The way we connect the ControlNet is computationally efficient: since the locked copy parameters are frozen, no gradient computation is required in the originally locked encoder for the finetuning. This approach speeds up training and saves GPU memory. As tested on a single NVIDIA A100 PCIE 40GB, optimizing Stable D...
AddingConditionalControltoText-to-ImageDiffusionModels
With recent advances in deep learning, researchers have turned to deep neural networks to model texture. A number of deep generative models [18, 20–23, 33, 40, 51] have been proposed to parameterize texture into a latent space. For example, GANFIT [22] utilizes GAN-based neural networks to train a generator of fac...
RaBit- Parametric Modeling of 3D Biped Cartoon Characters with a Topological-consistent Dataset
When investigating language comprehension and communication, it is essential to consider both auditory and visual information, as studies have demonstrated that visual information can assist in distinguishing between acoustically similar sounds that differ in articulatory characteristics. A comprehensive understanding ...
AReviewofDeepLearningTechniquesforSpeechProcessing
Despite the success of the LLM alignment process, most text-to-image diffusion training pipelines do not incorporate learning from human preferences. Several models [9, 35, 36] perform two-stage training, where large-scale pretraining is followed by fine-tuning on a high-quality text-image pair dataset to strate...
DiffusionModelAlignmentUsing Direct Preference Optimization
Benchmark (shots)          GPT-3.5  GPT-4  PaLM  PaLM-2-L  Llama 2
MMLU (5-shot)              70.0     86.4   69.3  78.3      68.9
TriviaQA (1-shot)          –        –      81.4  86.1      85.0
Natural Questions (1-shot) –        –      29.3  37.5      33.0
GSM8K (8-shot)             57.1     92.0   56.5  80.7      56.8
HumanEval (0-shot)         48.1     67.0   26.2  –         29.9
BIG-Bench Hard (3-shot)    –        –      52.3  65.7      51.2
Table 4: Comp...
Llama2
Here is a sample generation, including the prompt and the story generated by GPT-3.5. Write a short story (3-5 paragraphs) which only uses very simple words that a 3 year old child would likely understand. The story should use the verb "decorate", the noun "thunder" and the adjective "ancient". The story should have...
TinyStories - How Small Can Language Models Be and Still Speak Coherent English?
Terminology: Expert; Router; Top-n Routing; Load Balancing Loss; Group Size; Capacity Factor (CF); FFN; Encoder-Decoder; allreduce; all2all (↑/↓)
ST-MOE- DESIGNING STABLE AND TRANSFERABLE SPARSE EXPERT MODELS
might even degrade post self-correction. Drawing from these insights, we offer suggestions for future research and practical applications in this field.
LARGE LANGUAGE MODELS CANNOT SELF-CORRECT REASONING YET
58 [221] Liu, X., Sun, T., He, J., Wu, J., Wu, L., Zhang, X., Jiang, H., Cao, Z., Huang, X., Qiu, X.: Towards efficient nlp: A standard evaluation and a strong baseline. arXiv preprint arXiv:2110.07038 (2021) [222] Naveed, H., Khan, A.U., Qiu, S., Saqib, M., Anwar, S., Usman, M., Barnes, N., Mian, A.: A comprehensive ...
Beyond Efficiency
Qualitative evaluation showcases impressive crossmodal reasoning capabilities, enabling the model to understand and reason across an input sequence of audio, images, and text natively (see Figure 5 and Table 13). Consider the educational setting depicted in Figure 1 as an example. A teacher has drawn a physics problem ...
gemini_1_report
SQL: SELECT creation, COUNT(*) FROM department GROUP BY creation ORDER BY COUNT(*) DESC LIMIT 1 The execution of the SQL query above would return a table with 2 columns. The first column, "creation" would contain the year in which a department was created. The second column, "COUNT(*)" would contain the number of depa...
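As with the earlier example, the query can be run against a toy in-memory table to confirm the described result shape (the department rows below are hypothetical):

```python
import sqlite3

# Hypothetical rows for the "department" table discussed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE department (name TEXT, creation INTEGER)")
conn.executemany("INSERT INTO department VALUES (?, ?)", [
    ("physics", 1950), ("chemistry", 1950), ("history", 1920),
])
row = conn.execute(
    "SELECT creation, COUNT(*) FROM department "
    "GROUP BY creation ORDER BY COUNT(*) DESC LIMIT 1"
).fetchone()
# (year with the most departments created, how many were created that year)
print(row)  # (1950, 2)
```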
Teaching Large Language Models to Self-Debug
4.2. Performance on synthetic long context tasks The passkey retrieval task is as defined in (Mohtashami & Jaggi, 2023). It requires a language model to retrieve a simple passkey (a five-digit random number) in a long meaningless text sequence. This task is super simple, and it tests whether an LLM can be aware of the...
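A prompt for this task can be constructed along the following lines; the filler sentence and insertion scheme are stand-ins for illustration, not the exact text of the cited setup:

```python
import random

def make_passkey_prompt(n_filler=200, seed=0):
    """Hide a five-digit passkey inside long filler text and ask for it back."""
    rng = random.Random(seed)
    passkey = f"{rng.randint(0, 99999):05d}"
    unit = "The grass is green. The sky is blue. "
    filler = unit * n_filler
    insert_at = rng.randint(0, n_filler) * len(unit)  # sentence boundary
    prompt = (filler[:insert_at]
              + f"The pass key is {passkey}. Remember it. "
              + filler[insert_at:]
              + "What is the pass key?")
    return prompt, passkey

prompt, passkey = make_passkey_prompt()
assert passkey in prompt and len(passkey) == 5
```

Retrieval is then scored by checking whether the model's answer contains the hidden passkey.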
Self-Extend LLM
Given a piece of text generated by an LLM prompted with a specific combination of per- sonality traits, we can accurately predict the IPIP-NEO scores the model would have with the same prompt setup. This indicates that LLM-simulated IPIP-NEO test responses we generated accurately capture the latent signals of personali...
PersonalityTraitsinLargeLanguageModels
<filename>solutions/solution_1.py
# Here is the correct implementation of the code exercise
We also evaluated CodeGen-16B-Mono with the same temperature and prompt (but had to omit the filename since the CodeGen models do not support them). But we found that this hurts performance, bringing it down to 28.10%. However...
StarCoder_paper (1)
IMavatar is represented by three neural implicit fields, defining the canonical geometry, deformation bases, and texture of the person, as shown in Fig. 2. Details of the network architecture can be found in the Sup. Mat. Geometry. We represent the canonical geometry using an MLP that predicts the occupancy values for ea...
I M Avatar- Implicit Morphable Head Avatars from Videos
Figure 8. Network Architecture for Baselines. We show the modified geometry network for C-Net, which is additionally conditioned on the expression and pose parameters, ψ and θ. The deformation network for the B-Morph baseline is conditioned on the deformed point xd and the expression and pose parameters. For D-Net, the ...
I M Avatar- Implicit Morphable Head Avatars from Videos