Sentence Similarity

Tags:
- sentence-transformers
- Safetensors
- bert
- feature-extraction
- Generated from Trainer
- dataset_size:50000
- loss:CosineSimilarityLoss
- text-embeddings-inference
Instructions for using dwulff/minilm-brl with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- sentence-transformers
How to use dwulff/minilm-brl with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

# Download the model from the Hugging Face Hub
model = SentenceTransformer("dwulff/minilm-brl")

# Run inference
sentences = [
    "An article on behavioral reinforcement learning:\n\nTitle: Cell-type-specific responses to associative learning in the primary motor cortex.\nAbstract: The primary motor cortex (M1) is known to be a critical site for movement initiation and motor learning. Surprisingly, it has also been shown to possess reward-related activity, presumably to facilitate reward-based learning of new movements. However, whether reward-related signals are represented among different cell types in M1, and whether their response properties change after cue-reward conditioning remains unclear. Here, we performed longitudinal in vivo two-photon Ca2+ imaging to monitor the activity of different neuronal cell types in M1 while mice engaged in a classical conditioning task. Our results demonstrate that most of the major neuronal cell types in M1 showed robust but differential responses to both the conditioned cue stimulus (CS) and reward, and their response properties undergo cell-type-specific modifications after associative learning. PV-INs' responses became more reliable to the CS, while VIP-INs' responses became more reliable to reward. Pyramidal neurons only showed robust responses to novel reward, and they habituated to it after associative learning. Lastly, SOM-INs' responses emerged and became more reliable to both the CS and reward after conditioning. These observations suggest that cue- and reward-related signals are preferentially represented among different neuronal cell types in M1, and the distinct modifications they undergo during associative learning could be essential in triggering different aspects of local circuit reorganization in M1 during reward-based motor skill learning.",
    "An article on behavioral reinforcement learning:\n\nTitle: Learning to construct sentences in Spanish: A replication of the Weird Word Order technique.\nAbstract: In the present study, children's early ability to organise words into sentences was investigated using the Weird Word Order procedure with Spanish-speaking children. Spanish is a language that allows for more flexibility in the positions of subjects and objects, with respect to verbs, than other previously studied languages (English, French, and Japanese). As in prior studies (Abbot-Smith et al., 2001; Chang et al., 2009; Franck et al., 2011; Matthews et al., 2005, 2007), we manipulated the relative frequency of verbs in training sessions with two age groups (three- and four-year-old children). Results supported earlier findings with regard to frequency: Children produced atypical word orders significantly more often with infrequent verbs than with frequent verbs. The findings from the present study support probabilistic learning models which allow higher levels of flexibility and, in turn, oppose hypotheses that defend early access to advanced grammatical knowledge.",
    "An article on behavioral reinforcement learning:\n\nTitle: What are the computations of the cerebellum, the basal ganglia and the cerebral cortex?.\nAbstract: The classical notion that the cerebellum and the basal ganglia are dedicated to motor control is under dispute given increasing evidence of their involvement in non-motor functions. Is it then impossible to characterize the functions of the cerebellum, the basal ganglia and the cerebral cortex in a simplistic manner? This paper presents a novel view that their computational roles can be characterized not by asking what are the 'goals' of their computation, such as motor or sensory, but by asking what are the 'methods' of their computation, specifically, their learning algorithms. There is currently enough anatomical, physiological, and theoretical evidence to support the hypotheses that the cerebellum is a specialized organism for supervised learning, the basal ganglia are for reinforcement learning, and the cerebral cortex is for unsupervised learning. This paper investigates how the learning modules specialized for these three kinds of learning can be assembled into goal-oriented behaving systems. In general, supervised learning modules in the cerebellum can be utilized as 'internal models' of the environment. Reinforcement learning modules in the basal ganglia enable action selection by an 'evaluation' of environmental states. Unsupervised learning modules in the cerebral cortex can provide statistically efficient representation of the states of the environment and the behaving system. Two basic action selection architectures are shown, namely, reactive action selection and predictive action selection. They can be implemented within the anatomical constraint of the network linking these structures. Furthermore, the use of the cerebellar supervised learning modules for state estimation, behavioral simulation, and encapsulation of learned skill is considered. Finally, the usefulness of such theoretical frameworks in interpreting brain imaging data is demonstrated in the paradigm of procedural learning.",
    "An article on behavioral reinforcement learning:\n\nTitle: Repeated decisions and attitudes to risk.\nAbstract: In contrast to the underpinnings of expected utility, the experimental pilot study results reported here suggest that current decisions may be influenced both by past decisions and by the possibility of making decisions in the future.",
]
embeddings = model.encode(sentences)

# Compute pairwise similarity scores between the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [4, 4]
```

- Notebooks
- Google Colab
- Kaggle
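The `model.similarity(embeddings, embeddings)` call above returns pairwise cosine similarity between the embedding rows. A minimal NumPy sketch of that computation, using random placeholder arrays (384-dimensional, matching MiniLM's embedding size) in place of real model output so it runs without downloading the model:

```python
import numpy as np

# Placeholder embeddings standing in for model.encode(sentences) output:
# 4 "sentences", 384 dimensions, random values for illustration only
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 384))

# Cosine similarity: L2-normalize each row, then take all pairwise dot products
normalized = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarities = normalized @ normalized.T

print(similarities.shape)
# (4, 4)
```

The diagonal of the resulting matrix is 1.0 (each sentence compared with itself), and the matrix is symmetric; with real model output, the off-diagonal entries rank how semantically close the abstracts are.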