Tags: Sentence Similarity · sentence-transformers · Safetensors · bert · feature-extraction · Generated from Trainer · dataset_size:500 · loss:MultipleNegativesRankingLoss · text-embeddings-inference
How to use lufercho/ArxvBert-ST_v2 with sentence-transformers:
```python
from sentence_transformers import SentenceTransformer

# Load the model from the Hub
model = SentenceTransformer("lufercho/ArxvBert-ST_v2")

# Example inputs: a paper title followed by three arXiv abstracts
sentences = [
    "Entanglement increase from local interactions with\n not-completely-positive maps",
    " Simple examples are constructed that show the entanglement of two qubits\nbeing both increased and decreased by interactions on just one of them. One of\nthe two qubits interacts with a third qubit, a control, that is never entangled\nor correlated with either of the two entangled qubits and is never entangled,\nbut becomes correlated, with the system of those two qubits. The two entangled\nqubits do not interact, but their state can change from maximally entangled to\nseparable or from separable to maximally entangled. Similar changes for the two\nqubits are made with a swap operation between one of the qubits and a control;\nthen there are compensating changes of entanglement that involve the control.\nWhen the entanglement increases, the map that describes the change of the state\nof the two entangled qubits is not completely positive. Combination of two\nindependent interactions that individually give exponential decay of the\nentanglement can cause the entanglement to not decay exponentially but,\ninstead, go to zero at a finite time.\n",
    " Many extra-solar planets discovered over the past decade are gas giants in\ntight orbits around their host stars. Due to the difficulties of forming these\n`hot Jupiters' in situ, they are generally assumed to have migrated to their\npresent orbits through interactions with their nascent discs. In this paper, we\npresent a systematic study of giant planet migration in power law discs. We\nfind that the planetary migration rate is proportional to the disc surface\ndensity. This is inconsistent with the assumption that the migration rate is\nsimply the viscous drift speed of the disc. However, this result can be\nobtained by balancing the angular momentum of the planet with the viscous\ntorque in the disc. We have verified that this result is not affected by\nadjusting the resolution of the grid, the smoothing length used, or the time at\nwhich the planet is released to migrate.\n",
    " We investigate the evolution of binary fractions in star clusters using\nN-body models of up to 100000 stars. Primordial binary frequencies in these\nmodels range from 5% to 50%. Simulations are performed with the NBODY4 code and\ninclude a full mass spectrum of stars, stellar evolution, binary evolution and\nthe tidal field of the Galaxy. We find that the overall binary fraction of a\ncluster almost always remains close to the primordial value, except at late\ntimes when a cluster is near dissolution. A critical exception occurs in the\ncentral regions where we observe a marked increase in binary fraction with time\n-- a simulation starting with 100000 stars and 5% binaries reached a core\nbinary frequency as high as 40% at the end of the core-collapse phase\n(occurring at 16 Gyr with ~20000 stars remaining). Binaries are destroyed in\nthe core by a variety of processes as a cluster evolves, but the combination of\nmass-segregation and creation of new binaries in exchange interactions produces\nthe observed increase in relative number. We also find that binaries are cycled\ninto and out of cluster cores in a manner that is analogous to convection in\nstars. For models of 100000 stars we show that the evolution of the core-radius\nup to the end of the initial phase of core-collapse is not affected by the\nexact value of the primordial binary frequency (for frequencies of 10% or\nless). We discuss the ramifications of our results for the likely primordial\nbinary content of globular clusters.\n",
]

# Encode the sentences and compute pairwise similarities
embeddings = model.encode(sentences)
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)  # [4, 4]
```
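By default, `model.similarity` computes cosine similarity between the embeddings. A minimal sketch of the same computation in plain NumPy, using random mock vectors in place of real model output (no model download needed):

```python
import numpy as np

# Mock embeddings standing in for model.encode() output: 4 sentences, dim 8.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(4, 8))

# Cosine similarity: L2-normalize each row, then take pairwise dot products.
norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
normalized = embeddings / norms
similarities = normalized @ normalized.T

print(similarities.shape)  # (4, 4)
# Each vector is maximally similar to itself, so the diagonal is 1.0.
print(np.allclose(np.diag(similarities), 1.0))  # True
```

Values range from -1 (opposite) to 1 (identical direction); to rank candidate abstracts against a query title, sort one row of this matrix in descending order.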