Columns: text (string, lengths 1 to 1k); title (string, 230 distinct values)
[253] Mingjie Sun, Zhuang Liu, Anna Bair, and J. Zico Kolter. 2023. A Simple and Effective Pruning Approach for Large Language Models. arXiv preprint arXiv:2306.11695 (2023). [254] Yutao Sun, Li Dong, Shaohan Hu...
The Efficiency Spectrum of Large Language Models- An Algorithmic Survey
Large-scale generative models such as GPT and DALL-E have revolutionized natural language processing and computer vision research. These models not only generate high fidelity text or image outputs, but are also generalists which can solve tasks not explicitly taught. In contrast, speech generative models are still pri...
Voicebox- Text-Guided Multilingual Universal Speech Generation at Scale
Content Warning: This document contains content that some may find disturbing or offensive, including content that is sexual, hateful, or violent in nature. 1 Introduction
gpt-4-system-card
Prompt 7 Figure 3: Qualitative analysis of multi-model cooperation with resource dependency. this symbol with the resource generated by the prerequisite task. This strategy empowers HuggingGPT to efficiently handle resource dependencies during task execution. 3.4 Response Generation
HuggingGPT- Solving AI Tasks with ChatGPT and its Friends in Hugging Face
For utterances containing claims that need to be checked, we ask crowdworkers to record the search queries that they would use to investigate them. Finally, we ask crowdworkers to edit the model’s response to incorporate brief search results from an external knowledge-retrieval system. If the search results include any...
LaMDA- Language Models for Dialog Applications
A.2.1 Conversation generation For the crowdsourcing of the conversation generation task, human participants interacted with LaMDA to generate three types of conversations: natural, sensitive-topic, and adversarial-intent conversations. These are defined below: • When generating natural conversations participants were ...
LaMDA- Language Models for Dialog Applications
[Figure labels: Input, Ours, AvatarMe++, AlbedoMM [Luo et al. 21] [Lee et al. 20], GANfit [Tran et al. 19] [Deng et al. 19] [Genova et al. 18], [Thies et al. 16] with exp / neutral.] ...spect to the input thanks to our carefully designed inpainting approach. We also show an extensive qualitative comparison with related 3D reconst...
Relightify- Relightable 3D Faces from a Single Image via Diffusion Models
#sanfrancisco #newyork #seattle #boston #washdc #sydney #melbourne #perth #paris #london #tokyo #dhaka #vienna #edinburgh #liverpool #manchester #amsterdam #munich #berlin #toronto #vancouver #moscow #dublin #seoul #hcmc #singapore #ENG #IT #4 #LI
Data Scientist_Machine Learning Engineer (Singapore-based, relocation provided) - Careers at Agoda
To this end, we create MozArt, a multilingual dataset of fill-in-the-gap sentences covering four languages (English, French, German and Spanish). The sentences reflect diastratic variation within each language and can be used to compare biases in pretrained language models (PLMs) across languages. We study the influe...
Are Pretrained Multilingual Models Equally Fair Across Languages?
didates for data enrichment. Thus, we consider the best possible case to be finding completely different but accurate media events, keywords, and external datasets across all explanations. Such a scenario would maximize end-users' learning. In particular, for listed external datasets, we computed the accuracy and RDE. At the same ti...
Knowledge-graph-based-rich-and-confidentiality-preserving-Ex_2022_Informatio
A Global Phenomenon While our review has highlighted the US focus of this research area, the perils of misinformation, disinformation, and online propaganda are truly a global issue. In this section, we briefly review what is known about the dissemination of misinformation in the rest of the world – across Europe, a ra...
Social_Media_and_Democracy
nations to protect American economic security and stability, as well as ensuring that the United States remained politically relevant. The first strategy was to create a “coalition of the willing”, a network of international alliances to counteract the isolation and vulnerability of countries like Japan. The second str...
Direct Preference Optimization
To further improve training efficiency, we reduced the amount of activations that are recomputed during the backward pass with checkpointing. More precisely, we save the activations that are expensive to compute, such as the outputs of linear layers. This is achieved by manually implementing the backward functio...
LLaMA- Open and Efficient Foundation Language Models
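The idea of saving expensive activations and recomputing only cheap ones can be sketched with a toy scalar example. This is purely illustrative (a hypothetical y = relu(w * x) "network"), not LLaMA's tensor-level implementation: the expensive linear output z is checkpointed, while the cheap ReLU is recomputed in the backward pass.

```python
# Sketch of selective activation checkpointing (toy scalar example;
# names and the network itself are illustrative, not LLaMA's code).

def forward(w, x):
    z = w * x          # "expensive" linear output: saved (checkpointed)
    y = max(z, 0.0)    # cheap ReLU activation: NOT saved
    return y, {"z": z, "x": x}

def backward(grad_y, saved, w):
    z, x = saved["z"], saved["x"]
    relu_mask = 1.0 if z > 0 else 0.0   # recompute the cheap ReLU derivative
    grad_z = grad_y * relu_mask
    grad_w = grad_z * x
    grad_x = grad_z * w
    return grad_w, grad_x

y, saved = forward(2.0, 3.0)
gw, gx = backward(1.0, saved, 2.0)
print(y, gw, gx)  # 6.0 3.0 2.0
```

The trade-off is the standard one: memory for the saved linear outputs versus FLOPs for recomputing the activations that were not saved.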
Yi Tay, Jason Wei, Hyung Won Chung, David R. So, Siamak Shakeri, Xavier Garcia, Vinh Q. Tran, Huaixiu Steven Zheng, Jinfeng Rao, Denny Zhou, Donald Metzler, Neil Houlsby, Quoc V. Le, and Mostafa Dehghani. Transcending scaling laws with 0.1% extra compute. arXiv preprint, 2022b. Alex Wang, Yada Pruksachatkun, Nikita Nangia, ...
Scaling Instruction-Finetuned Language Models
(from Evidence-1 to 2) in Figure 3. These hyperlinked mentions must always be added as a mutation, as they provide the context for switching the source of the evidence from one sentence to another. In Figure 3, ‘‘Spanish Empire’’ is not selected as an alignment based on the similarity scores with the claim spans. H...
ProoFVer- Natural Logic Theorem Proving for Fact Verification
ted as: Weng, Lilian. (Jun 2023). "LLM-powered Autonomous Agents". Lil'Log. https://lilianweng.github.io/posts/2023-06-23-agent/.
LLM Powered Autonomous Agents _ Lil'Log
Language processing deals with inputs that are made up of discrete symbols. A word embedding layer ε is a mapping from dictionary entries to vectors. Task-specific embedding functions are often learned in an end-to-end fashion, in which case ε adapts the representation of words to the downstream task at hand whil...
MULTI HASH EMBEDDINGS IN SPACY
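A word embedding layer of this kind can be sketched as a plain lookup table with an out-of-vocabulary fallback. This is a hypothetical minimal version of ε, not spaCy's hash-embedding implementation; the vocabulary, dimension, and OOV handling here are assumptions for illustration.

```python
# Minimal sketch of a word embedding layer ε: a table mapping dictionary
# entries to dense vectors, with a zero-vector fallback for unseen words.
import random

random.seed(0)

def make_embedding(vocab, dim):
    # one vector per dictionary entry (randomly initialised here; in
    # practice these would be learned end-to-end)
    table = {w: [random.uniform(-1, 1) for _ in range(dim)] for w in vocab}
    oov = [0.0] * dim

    def epsilon(word):
        return table.get(word, oov)

    return epsilon

eps = make_embedding(["cat", "dog"], 4)
print(len(eps("cat")))  # 4
print(eps("unseen"))    # [0.0, 0.0, 0.0, 0.0]
```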
[25] Kate Magsamen-Conrad and Jeanette Muhleman Dillon. 2020. Mobile technology adoption across the lifespan: A mixed methods investigation to clarify adoption stages, and the influence of diffusion attributes. Computers in Human Behavior 112 (Nov. 2020), 106456. https://doi.org/10.1016/j. chb.2020.106456 [26] Ethan R...
Adoption and Appropriation of LLMs
• Static Alignment Evaluations: We evaluate our PMs using our HHH Evaluations [Askell et al., 2021] from BIG-Bench6 (Figure 5), on Bot Adversarial Dialogues [Xu et al., 2020], and for gender bias [Rae et al., 2021] (Figure 12). We evaluate our RLHF models on TruthfulQA [Lin et al., 2021] (Figure 5), BBQ-Lite [Parrish e...
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
F Pre-training and fine-tuning results. Table 28: Results for Foundation Metrics. Treatments: PT (2B), PT (8B), PT (137B), FT quality-safety (137B), LaMDA (2B), LaMDA (8B), LaMDA (137B); metrics: Sensibleness, Specificity, Interestingness, Safety. Safety: 84.8 87.5 88 94.6 93.8 93.5 95.2; further metric rows (labels lost in extraction): 10.8 11.3 15.8 23.2 23.4 22.2 25.7; 46.5 46.5 49.8 77.1 74....
LaMDA- Language Models for Dialog Applications
statements and additional entry tests will still be assessed when making offers. 2. Successful applicants will receive a dual offer: a standard UCL offer and a lower offer of up to two grades. 3. If an applicant does not access the lower offer through the above criteria, the higher UCL offer will still st...
UCL Academic Manual
Figure 2: Overview of the proposed Instant3D, which applies a conditional decoder network to map a text prompt to a corresponding triplane. Three condition mechanisms, i.e., cross-attention, style injection, and token-to-plane transformation, are seamlessly combined to bridge text and 3D, tackling the issue of weak sup...
Instant3D
Model: davinci, text-davinci-002, text-davinci-003, code-davinci-002. [Per-task score columns flattened by extraction; alignment not recoverable: 90.9 90.9 50.0 65.1 57.9 39.5 24.0 34.0 ...]
Mixture-of-Experts
To address an issue prevalent in both non-retrieval-based and retrieval-augmented generation approaches, the authors of (Kang et al., 2023) introduce the EVER framework. Unlike existing methods that rectify hallucinations post hoc, EVER employs a real-time, stepwise strategy during the generation process to detect and rectify hal...
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
[301] OpenAI. OpenAI: Introducing ChatGPT. Website, 2022. https://openai.com/blog/chatgpt. [302] Lu, J., X. Ren, Y. Ren, et al. Improving contextual language models for response retrieval in multi-turn conversation. In J. X. Huang, Y. Chang, X. Cheng, J. Kamps, V. Murdock, J. Wen, Y. Liu, eds., Proceedings of the 43...
The Rise and Potential of Large Language Model Based Agents
Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with pathways. CoRR, abs/2204.02311, 2022. 1, 2
REVEAL- Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
[21] P. Henzler, N. J. Mitra, and T. Ritschel, “Escaping plato’s cave: 3d shape from adversarial rendering,” in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9984–9993. [22] T. Nguyen-Phuoc, C. Li, L. Theis, C. Richardt, and Y.-L. Yang, “Hologan: Unsupervised learning of 3d represe...
Text2NeRF- Text-Driven 3D Scene Generation with Neural Radiance Fields
subsets, namely the training set, validation set, and test set, in an 8:1:1 ratio. In the rest of this section, we describe our baseline models, followed by implementation details and evaluation metrics. 5.1. Baseline models We implement three models as state-of-the-art baseline archi...
Video2Music
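An 8:1:1 split of this kind can be sketched as follows. The shuffling and seed here are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch of an 8:1:1 train/validation/test split.
import random

def split_811(items, seed=0):
    items = list(items)
    random.Random(seed).shuffle(items)  # deterministic shuffle for reproducibility
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]      # remainder goes to the test set
    return train, val, test

tr, va, te = split_811(range(100))
print(len(tr), len(va), len(te))  # 80 10 10
```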
During the training process, since zero convolutions do not add noise to the network, the model should always be able to predict high-quality images. We observe that the model does not gradually learn the control conditions but abruptly succeeds in following the input conditioning image; usually in less than 10K optimi...
Adding Conditional Control to Text-to-Image Diffusion Models
gains. We show that REVEAL achieves state-of-the-art results on visual question answering and image captioning. The project page is ReVeaL-CVPR.github.io.
REVEAL- Retrieval-Augmented Visual-Language Pre-Training with Multi-Source Multimodal Knowledge Memory
DPT 0.408 0.506 0.520 0.426 0.409 0.377 0.360 0.338 Table 11: Depth estimation with frozen features. We report performance when training a linear classifier on top of one (lin. 1) or four (lin. 4) transformer layers, as well as the DPT decoder (DPT) of Ranftl et al. (2021). We report the RMSE metric on the 3 datasets....
DINOv2- Learning Robust Visual Features without Supervision
We select three unimodal tasks over 14 datasets, as shown in Table 3. For the image classification task, results show the classification accuracy on MedMNIST v2 Yang et al. (2021) (a set of benchmark datasets) covering several biomedical domains. Our BiomedGPTBase model achieves state-of-the-art accuracy on 9 out of 10...
BiomedGPT
fication (Schick and Schütze, 2021a) and retrieval (Izacard and Grave, 2021) to reasoning (Zelikman et al., 2022). In a similar spirit to these approaches, Toolformer is trained on its own predictions after applying a filtering step that keeps only API calls which reduce perplexity on future tokens. Toolformer considerably improves z...
Toolformer
displays related entity names; lookup<keyword>, which looks up the keyword on the current page and returns the next sentence containing the keyword, similar to humans’ using the CTRL+F function on a web page; disambiguate<entity>, which inputs an entity name and displays all entities that share the same name. We focus ...
Tool Learning with Foundation Models
High-intensity interval training (HIIT) is a form of exercise that involves short bursts of intense exercise followed by periods of rest or low-intensity exercise. HIIT has become increasingly popular among athletes and fitness enthusiasts due to its ability to improve athletic performance, increase endurance, and promo...
WizardLM- Empowering Large Language Models to Follow Complex Instructions
[Figure: timeline of speech models by decade. 2000s: HMM + GMM; 2010s: LSTM, GRU; 2020-2023: Wav2Vec 2.0, FastSpeech2, Conformer, ContextNet, HuBERT, SpeechStew, Whisper, VALL-E.] 2 Mehrish et al. Deep Learning Architectures and Their Applications in Speech Processing Tasks Contents Introduction Background Speech Signals Speech Features Traditional mod...
A Review of Deep Learning Techniques for Speech Processing
Particularly, we explore the relation with high-level features related to content, composition and color. For this purpose we use outputs of the attribute layers of the model proposed in [34], as described in Section IV-B.2. In order to observe how the predicted aesthetics, sentiment and memorability scores correlate w...
A_Deep_Learning_Perspective_on_Beauty_Sentiment_and_Remembrance_of_Art
cific dialogue in their memory stream that provided them with the information. We report the percentage of agents holding the information at the end of the simulation.
Generative Agents- Interactive Simulacra of Human Behavior
tasks. arXiv preprint arXiv:2303.15056, 2023. Dawn Song. The false promise of imitating proprietary llms. arXiv preprint arXiv:2305.15717, 2023. Udit Gupta, Mariam Elgamal, Gage Hills, Gu-Yeon Wei, Hsien-Hsin S Lee, David Brooks, and Carole-Jean Wu. Act: designing sustainable computer systems with an architectural carb...
Llama2
GPT-2, GPT-4 is a model trained to follow instructions via RLHF. [↩]
Language models can explain neurons in language models
GPT-4 (early) response There are different ways to approach the task of writing a sentence that answers the question of who are the people getting married, depending on the level of detail, tone, and purpose of the sentence. Here are some possible examples: - A simple and factual sentence: The people getting married are...
gpt-4-system-card
The story starts with a group of people who live on a planet. They are very intelligent and have created many things that help them in their daily lives. One day, they decided to create a machine that can make food for them. The machine was able to produce food but it also produced something else. It produced a liquid ...
SELF-INSTRUCT- Aligning Language Model with Self Generated Instructions
[21] Leo Gao, Stella Rose Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. The pile: An 800gb dataset of diverse text for language modeling. ArXiv, abs/2101.00027, 2021. 10 [22] Tianyu Gao, Xingcheng Yao, and Dan...
E5
needed to meet goals, while reducing agents could be essential for managing computational resources and minimizing waste. In such instances, the system must be manually redesigned and restarted by the designer.
The Rise and Potential of Large Language Model Based Agents
6 Conclusion In this work, we presented Moûsai, a novel text-to-music generation model using latent diffusion. We show that, in contrast to earlier approaches, our model can generate minutes of music in real time on a consumer GPU, with good music quality and text-audio binding. In addition, we provide a collection ...
MOUSAI
2.1 Pretraining Data Our training corpus includes a new mix of data from publicly available sources, which does not include data from Meta’s products or services. We made an effort to remove data from certain sites known to contain a high volume of personal information about private individuals. We trained on 2 trillio...
Llama2
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, B...
Toolformer
Acknowledgments I would like to thank and acknowledge the incredible work of my colleagues on the Bard team, Google Research and Responsible AI. quality,
An overview of Bard- an early experiment with generative AI
better understand the possible consequences and implement safeguards against them. The paperclips AI apocalypse and the Squiggle Maximizer scenarios illustrate th
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications – Yohei Nakajima
Red teaming. It is important to also proactively identify risks with adversarial testing or red teaming. We conducted 3 red teaming exercises with 25 Meta employees, including domain experts in responsible AI, malware development, and offensive security engineering. The red teamers provided a nuanced evaluation specifi...
CodeLlama2
as an upper-bound on pipeline performance, removing the confounding factor that I→R rationales can be poor (§3.1). 8Camburu et al. (2018) give an example: the rationale “A woman is not a person” could predict either a contradiction or entailment label depending on the input. ble cases for a model’s output: {lstable, ...
Measuring Association Between Labels and Free-Text Rationales
[107] Philippe Martin, Lorraine Cousin, Serge Gottot, Aurélie Bourmaud, Elise de La Rochebrochard, and Corinne Alberti. Participatory interventions for sexual health promotion for adolescents and young adults on the internet: Systematic review. Journal of Medical Internet Research, 22:e15378, 07 2020. [108] David Patt...
LaMDA- Language Models for Dialog Applications
Video input consists of a series of continuous image frames. As a result, the methods used by agents to perceive images [287] may be applicable to the realm of videos, allowing the agent to have good perception of video inputs as well. Compared to image information, video information adds a temporal dimension. Therefor...
The Rise and Potential of Large Language Model Based Agents
R = FLLM(O,M), (3) where FLLM is a LLM. To avoid arbitrary reasoning outputs of LLMs which might lead to hallucination and results not relevant to planning, we constrain R to contain two essential parts: notable objects and potential effects. Specifically, we first instruct the LLM to identify those notable objects t...
ALanguageAgentforAutonomousDriving
The use of bots in online social settings dates back to before their integral use over Internet Relay Chat (IRC) – a precursor to contemporary social media (Mutton 2004). Social bots also appeared even earlier, in experiments with what programmers then called “chat bots” on the public web’s precursor, the Advanced Rese...
Social_Media_and_Democracy
We would also suggest that the scholarly community plays an especially important role in leveraging the gains that accrue to the public at large. To be sure, academics and other researchers can engage in malfeasance, conduct unethical research, pursue narrow “academic” questions, or publish erroneous results. However, ...
Social_Media_and_Democracy
where advanced models like GPT-4 and others of its ilk may generate references that are inaccurate or completely unfounded (Rawte et al., 2023). This issue arises due to the training phase's pattern generation techniques and the absence of real-time internet updates, contributing to discrepancies in the information o...
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
[Table (flattened by extraction): Dataset / Split / Task entries, including ImageNet-22k (classification), ImageNet-1k / train (classification), Caltech 101 / train, CUB-200-2011 / train, DTD / train1 (fine-grained classif.), Stanford Cars / train, SUN397 / train1, FGV...]
DINOv2- Learning Robust Visual Features without Supervision
• Our work: we propose BadGPT, the first backdoor attack on RL fine-tuning in language models, with the aim of exploring the vulnerability of this new NLP paradigm and discussing effective attack strategies. The findings of our study have important implications for the security of RL fine-tuning in NLP, and can inform ...
BadGPT- Exploring Security Vulnerabilities of ChatGPT via Backdoor Attacks to InstructGPT
Currency exchange rates change much faster than weather predictions, yet the Jurassic-X concept - a language model connected to a reliable source of information -
Jurassic-X_ Crossing the neuro-symbolic chasm with the MRKL system
6.4. Sensitivity to provided metadata As described in Section 4.4, at sampling time we provide randomised metadata to AlphaCode to increase sample diversity. This includes tags (e.g. whether the problem is of type “binary search” or “brute force”), ratings (how difficult the problem is), programming language, and whether...
alphacode
data discrepancies and improve the sharpness of predictions. Finally, Binaural Grad [299] explores the application of diffusion models in binaural audio synthesis, aiming to generate binaural audio from monaural audio sources. It accomplishes this through a two-stage diffusion-based framework.
A Review of Deep Learning Techniques for Speech Processing
in regular pay. Now we can figure out how much Tina makes in overtime pay. She gets time and a half for overtime, so 1.5 * $18 per hour = $27 per hour. And since she works 10 hours of overtime and makes $27 per hour, she makes 10*27 = $270 in overtime pay. So in total, Tina makes 720 + 270 = $990 per week. ✓
Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
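As a sanity check on the chain-of-thought arithmetic above (the 40 regular hours at $18/hour are implied by the $720 figure, not stated in this excerpt):

```python
# Verify the overtime-pay reasoning step by step.
regular = 40 * 18            # assumed 40 regular hours at $18/hour -> 720
overtime_rate = 1.5 * 18     # time and a half -> 27.0
overtime = 10 * overtime_rate  # 10 overtime hours -> 270.0
total = regular + overtime
print(regular, overtime_rate, overtime, total)  # 720 27.0 270.0 990.0
```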
Previous neural rendering methods [3, 33, 36, 37, 50, 66, 75] typically assume multi-view input, careful lab capture, or do not perform well on humans due to non-rigid body motion. Human-specific methods typically assume a SMPL template [34] as a prior, which helps constrain the motion space but also introduces artifac...
HumanNeRF- Free-viewpoint Rendering of Moving People from Monocular Video
Following [22, 14], we evaluate our method on the task of texture completion using the Multi-PIE [30] subset of the UVDB dataset [14]. This consists of complete UV tex- tures for 337 different identities, and corresponding 2D im- ages of the faces from various camera poses. In accordance with [22, 14], we use the last ...
Relightify- Relightable 3D Faces from a Single Image via Diffusion Models
Q: Joe played catch with Derek and Tammy. He caught the ball 23 times. Derek made four less than double the catches Joe did. Tammy caught the ball sixteen more than a third of the times Derek did. How many times did Tammy catch the ball? A: Reasoning process: Joe caught the ball 23 times Derek caught four less than dou...
Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
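Checking the word problem's arithmetic directly: Joe caught the ball 23 times, Derek made four less than double that, and Tammy caught sixteen more than a third of Derek's count.

```python
# Verify the catch-count reasoning.
joe = 23
derek = 2 * joe - 4        # four less than double Joe's -> 42
tammy = derek // 3 + 16    # sixteen more than a third of Derek's -> 30
print(derek, tammy)  # 42 30
```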
during training for each programming language supported by MultiPL-E. The performance curve for several high-resource programming languages suggests that training longer is likely to improve their performance further. However, some of the low-resource languages see limited improvement during training or even have a pas...
StarCoder_paper (1)
4 Experiments 4.1 Data sources We train our model on a large dataset compiled of speech, music, and environmental sounds. For speech, we use the DAPS dataset [26], the clean speech segments from DNS Challenge 4 [10], the Common Voice dataset [2], and the VCTK dataset [40]. For music, we use the MUSDB dataset 5 [31...
RVQGAN
Regarding retrieval quality, the issues are multifaceted. The primary concern is low precision, where not all blocks within the retrieval set correlate with the query, leading to potential hallucination and mid-air drop issues. A secondary issue is low recall, which arises when not all relevant blocks are retrieved, th...
Retrieval-Augmented Generation for Large Language Models- A Survey
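The two retrieval-quality notions above can be made concrete: precision is the fraction of retrieved blocks that are relevant to the query, and recall is the fraction of relevant blocks that were actually retrieved. The function and block names below are illustrative.

```python
# Sketch of retrieval precision and recall over sets of block IDs.
def precision_recall(retrieved, relevant):
    retrieved, relevant = set(retrieved), set(relevant)
    hits = retrieved & relevant
    precision = len(hits) / len(retrieved) if retrieved else 0.0
    recall = len(hits) / len(relevant) if relevant else 0.0
    return precision, recall

# 4 blocks retrieved, 3 relevant overall, 2 of them retrieved:
p, r = precision_recall(["b1", "b2", "b3", "b4"], ["b1", "b2", "b5"])
print(p, r)  # 0.5 0.6666666666666666
```

Low precision (irrelevant blocks in the set) feeds the hallucination problem described above; low recall (relevant blocks missed) starves the generator of needed context.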
27 See Arbital (undated, and no author listed, but I believe it's Yudkowsky) on “consequentialist cognition” for more. Thanks to Paul Christiano for discussion. “behaving just like” they are doing this); by hypothesis, to the extent there is a difference here, it's not a predictively relevant one. I am not assumin...
Is Power-Seeking AI an Existential Risk?
K-NN classifiers have the great advantage of not relying on many hyperparameters, being fast and light to deploy, without requiring any domain adaptation. In the context of SSL evaluation, training a linear classifier on top of pre-trained feature representations, a.k.a. linear-probing evaluation, was introduced...
A Cookbook of Self-Supervised Learning
optimization of the entire vocabulary and the possibility of generating invalid labels beyond the closed label set. To tackle these issues, we apply a beam search strategy that incorporates a prefix tree (also known as a trie), limiting the number of candidate tokens and resulting in more efficient and accurate decodin...
BiomedGPT
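Trie-constrained decoding of this kind can be sketched as follows. This is a minimal illustration of the idea (a prefix tree over a closed label set restricts which tokens are candidates at each step), not BiomedGPT's beam-search implementation; the labels and word-level tokenization are hypothetical.

```python
# Sketch of prefix-tree (trie) constrained decoding over a closed label set.
def build_trie(labels):
    trie = {}
    for label in labels:
        node = trie
        for tok in label.split():       # hypothetical word-level tokens
            node = node.setdefault(tok, {})
        node["<end>"] = {}              # marks a complete, valid label
    return trie

def allowed_next(trie, prefix):
    # Only tokens that extend some valid label are candidates;
    # everything outside the closed set is pruned.
    node = trie
    for tok in prefix:
        node = node[tok]
    return sorted(node.keys())

trie = build_trie(["lung cancer", "lung fibrosis", "skin lesion"])
print(allowed_next(trie, []))        # ['lung', 'skin']
print(allowed_next(trie, ["lung"]))  # ['cancer', 'fibrosis']
```

In a beam search, each hypothesis would mask its next-token distribution to this allowed set, which rules out invalid labels by construction.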
[246] Feng, G., B. Zhang, Y. Gu, et al. Towards revealing the mystery behind chain of thought: a theoretical perspective. CoRR, abs/2305.15408, 2023. [247] Grafman, J., L. Spector, M. J. Rattermann. Planning and the brain. In The cognitive psychology of planning, pages 191–208. Psychology Press, 2004. [248] Unterra...
The Rise and Potential of Large Language Model Based Agents
References [1] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Chris...
HuggingGPT- Solving AI Tasks with ChatGPT and its Friends in Hugging Face
source. In Table 3, “happily” and “with his friend” are the two examples of the hallucinatory content since they are added without any apparent connection to the input.
Survey of Hallucination in Natural Language Generation
gorithm [1] can effectively remedy this issue for existing text-to-3D methods, it does not perform well in our context. We find that this is mainly due to the different degrees of multi-head effect exhibited by different 3D objects. As previous text-to-3D approaches only train a single object at a time, one can easil...
Instant3D
of 512 × 512 images into smaller 64 × 64 “latent images” for stabilized training. This requires ControlNets to convert image-based conditions to 64 × 64 feature space to match the convolution size. We use a tiny network E(·) of four convolution layers with 4 × 4 kernels and 2 × 2 strides (activated by ReLU, channels ar...
Adding Conditional Control to Text-to-Image Diffusion Models
Codebook interleaving patterns. Figure A.1 provides a visual description of the additional codebook patterns introduced for the ablation in Section 4, namely “partial flattening” and “partial delay” patterns. The intuition behind such patterns is driven by the fact that the first codebook from RVQ is the most important...
Simple and Controllable Music Generation
By decomposing the term in Equation 2 into two parts, we obtain two attribution vectors over the input tokens; one for the predicted label logits L, and one for the predicted rationale logits R in the decoded output. [Figure captions: (a) In §4.1, ϵσ is the noise. (b) In §4.2, Ia are input tokens selected by an attribution method.] Fi...
Measuring Association Between Labels and Free-Text Rationales
PROMPT FOR SPORTS UNDERSTANDING Q: Is the following sentence plausible? “Kyle Palmieri was called for slashing.” A: Kyle Palmieri is a hockey player. Being called for slashing is part of hockey. So the answer is yes. Q: Is the following sentence plausible? “Joao Moutinho caught the screen pass in the NFC championship.”...
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Figure 4: Per-domain log-perplexity of 8B models on The Pile. Despite downweighting some domains, DoReMi improves log-perplexity on all domains. 6.5%, and achieves the baseline accuracy 2.6x faster. On the GLaM dataset where domain weights tuned on downstream datasets are available, DoReMi finds domain weights wi...
DoReMi- Optimizing Data Mixtures Speeds Up Language Model Pretraining
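Log-perplexity, as reported per domain above, is the mean negative log-likelihood of a domain's tokens under the model (lower is better). A minimal sketch, with made-up token probabilities:

```python
# Sketch of log-perplexity: mean negative log-likelihood per token.
import math

def log_perplexity(token_probs):
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

# Uniform probability 1/4 per token gives log-perplexity ln(4):
print(round(log_perplexity([0.25, 0.25, 0.25, 0.25]), 4))  # 1.3863
```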
This report examines what I see as the core argument for concern about existential risk from misaligned artificial intelligence. I proceed in two stages. First, I lay out a backdrop picture that informs such concern. On this picture, intelligent agency is an extremely powerful force, and creating agents much more intell...
Is Power-Seeking AI an Existential Risk?
and D. Amodei, “Scaling Laws for Neural Language Models.” 2020. [22] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al., “Evaluating large language models trained on code,” arXiv preprint arXiv:2107.03374 (2021) . [23] K. Cobbe, V. Kosaraju, M. Bav...
ClaudeModels
8 Conclusion We presented ProoFVer, a natural logic-based proof system for fact verification. Currently, we report the best results in terms of label accuracy, and the second best results in FEVER Score in the FEVER leaderboard. Moreover, ProoFVer is more robust in handling superfluous information from the retriever, a...
ProoFVer- Natural Logic Theorem Proving for Fact Verification
[72] M. T. Hosseini, A. Ghaffari, M. S. Tahaei, M. Rezagholizadeh, M. Asgharian, and V. P. Nia, “Towards fine-tuning pre-trained language models with integer forward and backward propagation,” in Proc. Findings Assoc. Comput. Linguistics, 2023, pp. 1867–1876. [73] S.-i. Amari, “Backpropagation and stochastic gradient ...
Parameter-Efficient Fine-Tuning Methods
Grimmelmann, J. (2015). The virtues of moderation. Yale Journal of Law and Technology. Halfaker, A., Geiger, R.S., Morgan, J.T. (2013). The rise and decline of an open collaboration system: How Wikipedia’s reaction to popularity is causing its decline. American Behavioral Scientist, 57(5), 664–688. Howard, P., & Kollanyi, B. (20...
Social_Media_and_Democracy
4.3 Quality So far, we have shown the quantity and diversity of the generated data, but its quality remains uncertain. To investigate this, we randomly sample 200 instructions and randomly select 1 instance per instruction. We asked an expert annotator (co-author of this work) to label whether each instance is cor-...
SELF-INSTRUCT- Aligning Language Model with Self Generated Instructions
How was the specific wording of the task instructions generated? A research team iteratively generated instructions, making adjustments to create a template that could achieve high inter-annotator agreement. At a high level, what aspects of the task are subjective? Judgments of the toxicity of online comments are highly...
PaLM 2 Technical Report
end for
end while
while not converged do
  for (xli, xvi) in ϕ2 do
    Θ′gPMF ← gϕ2(Xi, Θ).
  end for
end while
while not converged do
  Sample xtrt from DTrain.
  xtpt ← gPMTP(xtrt).
  Sample (ı′t, ψt) from DTrain.
  ev ← gUSM(ψt).
  e′v ← gVBF(ev).
  el ← gPrL(gCat(ı′t, e′v)).
end while
return (el, e′v) 3.1...
A Priority Map for Vision-and-Language Navigation with Trajectory Plans and Feature-Location Cues
The Squiggle Maximizer scenario, originally conceived as the “paperclip maximizer” by Eliezer Yudkowsky, emphasizes the importance of considering both outer alig
Task-driven Autonomous Agent Utilizing GPT-4, Pinecone, and LangChain for Diverse Applications – Yohei Nakajima
2. For the first time, we evaluate the abilities of LLMs independent of in-context learning and instruction tuning, providing a clear and precise measure of the abilities which are truly emergent. 3. We empirically investigate the hypothesis that the added capabilities of instruction-tuned models can be explained as t...
Are Emergent Abilities in Large Language Models just In-Context
and Qwen-VL (Bai et al., 2023) have proposed different integration methods to enable image understanding or generation capabilities for LLMs. For the audio modality, there have been attempts to utilize well-trained audio foundation models as tools, such as AudioGPT (Huang et al., 2023) and HuggingGPT (Shen et al., 2023)...
Qwen-Audio
learning. arXiv preprint arXiv:2008.05659, 2020. 21 T. Xiao, I. Radosavovic, T. Darrell, and J. Malik. Masked visual pre-training for motor control. arXiv:2203.06173 [cs], Mar 2022. URL http://arxiv.org/abs/2203.06173. 40 Z. Xie, Z. Zhang, Y. Cao, Y. Lin, J. Bao, Z. Yao, Q. Dai, and H. Hu. Simmim...
A Cookbook of Self-Supervised Learning
Model: davinci, text-davinci-002, text-davinci-003, code-davinci-002; 80M T5-Small / Flan-T5-Small; 250M T5-Base / Flan-T5-Base; 780M T5-Large / Flan-T5-Large; 3B T5-XL / Flan-T5-XL; 11B T5-XXL / Flan-T5-XX... [Score columns flattened by extraction; alignment not recoverable: 12.0 0.8 21.6 40.4 54.0 51.6 ...]
Mixture-of-Experts
be realized. What is needed is only to designate the potential roles that can implement the idea. For instance, a Python Programmer could collaborate with a Stock Trader to realize the idea of developing a trading bot for the stock market. After the idea and roles are determined, the task specifier agent will brainstorm...
CAMEL- Communicative Agents for “Mind” Exploration of Large Scale Language Model Society
plot them on a plane. Lastly, we apply the k-means algorithm to partition the instructions into 20 clusters based on their distance in the reduced space. We illustrate the resulting clusters in Figure 9.
WizardLM- Empowering Large Language Models to Follow Complex Instructions
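The clustering step can be sketched with a toy standard-library k-means on 2-D points. The points and k=2 here are illustrative; the paper partitions reduced instruction embeddings into 20 clusters.

```python
# Toy Lloyd's k-means on 2-D points (stdlib only, illustrative).
import math
import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)                 # random initial centers
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                            # assign to nearest center
            i = min(range(k), key=lambda j: math.dist(p, centers[j]))
            clusters[i].append(p)
        for i, c in enumerate(clusters):            # recompute centroids
            if c:
                centers[i] = tuple(sum(x) / len(c) for x in zip(*c))
    return centers, clusters

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```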
Similarly, 34% think the widespread use of facial recognition by police would make policing more fair; 40% think that it would not make much difference, and 25% think it would make policing less fair. Another concern for Americans ties to the potential impact of these emerging technologie...
AI and Human Enhancement_ Americans’ Openness Is Tempered by a Range of Concerns _ Pew Research Center