| url | tag | text | file_path | dump | file_size_in_byte | line_count |
|---|---|---|---|---|---|---|
https://wordpress.org/support/topic/plugin-woocommerce-accept-terms-and-conditions-by-default
|
code
|
After trying to find a solution in the dashboard, the way I solved this was to dig into the php code of my site.
In my case (using the Mystile theme), the appropriate file is found here:
Find the code that applies to your terms checkbox. Mine looks like this:
<p class="form-row terms">
<label for="terms" class="checkbox"><?php _e( 'I have read and accept the', 'woocommerce' ); ?> <a href="<?php echo esc_url( wc_get_page_permalink( 'terms' ) ); ?>" target="_blank"><?php _e( 'terms & conditions', 'woocommerce' ); ?></a></label>
<input type="checkbox" class="input-checkbox" name="terms" <?php checked( isset( $_POST['terms'] ), true ); ?> id="terms" />
Then, right behind id="terms", add the attribute checked="checked" and save the edited file. The "agree with terms" checkbox should now be selected by default.
Hope that helps!
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983026851.98/warc/CC-MAIN-20160823201026-00152-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 721
| 9
|
http://meyouzeum.weebly.com/projections-whats-my-question.html
|
code
|
Projections: What's My Question?
This workshop station includes several activities around identity that probe and critique the ways people interact and engage with one another, delving into the dilemmas typical questions such as “Where are you from?,” “What do you do?,” and “What are you?” can elicit. Together we will explore and expand our ways of getting to know one another more fully.
This workshop will have 3 parts:
Discussion on the discourse:
Expanding the discourse:
Document the question on white paper, or project it onto a participant wearing a white T-shirt.
Our Stereotype Introductions
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314904.26/warc/CC-MAIN-20190819180710-20190819202710-00360.warc.gz
|
CC-MAIN-2019-35
| 603
| 7
|
http://stackoverflow.com/questions/8391640/number-of-variables-pass-to-function/8391761
|
code
|
Sorry if this is a dumb question. I don't ever have to write anything in VB.NET.
But I am passing variables named "name" to a function, and sometimes it may be 1 name or 2 names, etc. I want to check in the function: if it's only 1 name, don't add a comma; if it's 2 names, add a comma. Is there a way to get how many "names" there are?
EDIT: Added my code to make my question a little more clear. Sorry for not doing it before.
Public Function GenerateCSV(ByVal str As String, ByVal str1 As String, ByVal str2 As String, ByVal GrpName As String)
    If GroupName <> GrpName Then
        GroupName = GrpName
        CSVString = ""
    End If
    If str = "" Then
        CSVString = ""
    Else
        CSVString = CSVString & str & ", " & str1 & ", " & str2 & ", "
    End If
    Return CSVString
End Function
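Language-agnostically, the comma logic the question asks about can be sketched in Python: accept a variable number of names and join only the non-empty ones, so a single name gets no trailing comma (VB.NET offers the same variable-arity trick via a ParamArray parameter; the function name here is ours, not from the question):

```python
def generate_csv(*names):
    """Join however many names are passed, inserting ", " only between
    non-empty entries, so one name yields no comma at all."""
    return ", ".join(n for n in names if n)

print(generate_csv("Alice"))         # Alice
print(generate_csv("Alice", "Bob"))  # Alice, Bob
```

The call site never needs to count the names; the join itself handles one, two, or more.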
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398463315.2/warc/CC-MAIN-20151124205423-00085-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 743
| 4
|
https://scientistsolutions.com/forum/jobs-bioinformatics/director-bioinformatics
|
code
|
RaNA Therapeutics is an energetic, rapidly growing early stage biotechnology company in Cambridge, MA founded by leading scientists from industry and Massachusetts General Hospital. RaNA's mission is to discover and develop drugs to improve health for patients by modulating disease-associated epigenetic patterns, thereby restoring normal expression of individual targeted genes.
We are currently seeking a highly motivated and results-oriented computational biologist to join RaNA. Reporting to the CSO, this position will be responsible for developing the RaNA bioinformatics strategy, using existing tools and developing novel solutions for the analysis of high-throughput sequencing and other datasets to support corporate goals and academic collaboration/publication efforts.
- The candidate must have a PhD in bioinformatics, computer science, or a related area. A solid understanding of, and demonstrated achievements in, applying bioinformatics principles to epigenetics and chromatin/ncRNA biology will be a strong plus.
- Ability to analyze diverse types of data, including gene expression profiling, RNA-Seq, RIP-Seq, and ChIP-Seq, from major platforms such as Illumina and Pacific Biosciences, in an integrated manner.
- Experience with data analysis tools such as Bioconductor, GenePattern, or equivalent.
- Solid understanding of quantitative and biostatistical principles; experienced user of a statistical package, ideally R.
- Experience with Python and/or Perl is preferred. Basic database (SQL) and software development training/experience are a plus.
- Ability to multitask and to work independently and collaboratively as part of an interdisciplinary team in a matrix environment with 15-20 scientists. Excellent organizational and time management skills, and the ability to generate high-quality data analysis results under tight deadlines.
- Supervisory experience is necessary, as the group will expand over time.
- Ability to interact well across the organization and with external collaborators.
RaNA Therapeutics is committed to equal employment opportunity. All applicants must have authorization to work in the U.S. RaNA's employee benefits package includes health/dental insurance, a retirement plan, and more.
Please apply via email: gstrickland@ranarx.com
Deadline: Contact employer
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588257.34/warc/CC-MAIN-20211028034828-20211028064828-00507.warc.gz
|
CC-MAIN-2021-43
| 2,556
| 13
|
https://github.com/txus/hijacker/commit/593a0725bbc6c8c54b29e3de55922a72c964f96f
|
code
|
@@ -36,7 +36,7 @@ And that's it! Oooor not. You have to spawn your server. In the command line:
-Where <handler> must be a registered handler (for now there is only 'logger').
+Where \<handler\> must be a registered handler (for now there is only 'logger').
So you type:
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170864.16/warc/CC-MAIN-20170219104610-00266-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 337
| 6
|
https://www.br.freelancer.com/projects/php-c-programming/string-matrix-help/
|
code
|
Write a C++ program that prompts the user with the following options:
Matrix String Quit
• If the user selects “M” or “m”, then call a user-defined function double average(void) that will do the following:
a. Prompt the user to enter three sets of five numbers (type float) and store the numbers in a 3 x 3 array.
b. Determine the largest of the 9 values and print it in the main function.
c. Determine the transpose of the original matrix, then find the sum of the original and transpose matrices and print the result in this function.
• If the user selects “S” or “s”, then call a user-defined function void rev_str(void) that:
a. Prompts the user to enter a string and stores it in a one-dimensional array.
b. Replaces the content of the string with the string reversed and saves it into another array.
c. Prints both the original string and the reversed one in this function.
• If the user enters “Q” or “q”, then the program should terminate.
If the user enters any other selection, it should prompt the user with "invalid selection" and ask them to try again.
Note: Do not use any global arrays or variables.
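As a hedged illustration of the two core operations the spec describes (sketched in Python rather than the requested C++; the function and variable names are ours, not part of the assignment):

```python
def transpose_sum(m):
    """Return (largest value in m, element-wise sum of m and its transpose)
    for a square matrix given as a list of rows."""
    n = len(m)
    t = [[m[j][i] for j in range(n)] for i in range(n)]          # transpose
    s = [[m[i][j] + t[i][j] for j in range(n)] for i in range(n)] # m + m^T
    return max(v for row in m for v in row), s

def rev_str(s):
    """Return the string reversed, as the 'S' option requires."""
    return s[::-1]

m = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]
largest, summed = transpose_sum(m)
print(largest)           # 9.0
print(summed[0])         # [2.0, 6.0, 10.0]
print(rev_str("hello"))  # olleh
```

The C++ version would wrap these in the M/S/Q menu loop and do its own input prompting, per the spec.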
Rent A Coder requirements notice: As originally posted, this bid request does not have complete details. Should a dispute arise and this project go into arbitration "as is", the contract's vagueness might cause it to be interpreted against you, even though you were acting in good-faith. So for your protection, if you are interested in this project, please work-out and document the requirements onsite.
1) Complete and fully-functional working program(s) in executable form as well as complete source code of all work done.
2) Deliverables must be in ready-to-run condition, as follows (depending on the nature of the deliverables):
a) For web sites or other server-side deliverables intended to only ever exist in one place in the Buyer's environment: deliverables must be installed by the Seller in ready-to-run condition in the Buyer's environment.
b) For all others, including desktop software or software the buyer intends to distribute: a software installation package that will install the software in ready-to-run condition on the platform(s) specified in this bid request.
3) All deliverables will be considered "work made for hire" under U.S. Copyright law. Buyer will receive exclusive and complete copyrights to all work purchased. (No GPL, GNU, 3rd party components, etc. unless all copyright ramifications are explained AND AGREED TO by the buyer on the site per the coder's Seller Legal Agreement).
Microsoft C++ 6.0
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863662.15/warc/CC-MAIN-20180520170536-20180520190536-00374.warc.gz
|
CC-MAIN-2018-22
| 2,557
| 18
|
https://postitviral.com/1161/we-became-giga-rich-vs-broke-for-24-hours
|
code
|
Can Rebecca Zamolo survive 24 hours as a giga rich mom, or will she be broke? It all started when Rebecca Zamolo posted "Last To Leave PROM Wins." Now Rebecca and her best friends Matt, Daniel and Maddie are all giga rich for 24 hours. Each stage they will compete in challenges, and if they lose they must be broke. Do you think this is a good idea or the worst idea? Thank you for watching my funny entertainment comedy adventure vlog videos in 2022!
Our New Game Master Mansion Mystery Book - https://www.harpercollins.com/products/the-game-master-mansion-mystery-rebecca-zamolomatt-slays?variant=39715492724770
More awesome videos! Aphmau | Escape From DETENTION In Minecraft! https://www.youtube.com/watch?v=xfRLR3jRlLo
SSSniperwolf | Crazy Karens That Went Too Far https://www.youtube.com/watch?v=KvB2fQkpZNA
Lazarbeam | Worlds Best LEGO Builds https://www.youtube.com/watch?v=ECqZp53MSes
Airrack | Tinder In Real Life! https://www.youtube.com/watch?v=w3rHYUlj5gQ
▶ Get ZamFam merch! rebeccazamolo.com Rebecca Zamolo Social Media Instagram https://www.instagram.com/rebeccazamolo/ TikTok https://www.tiktok.com/@rebeccazamolo Twitter https://www.twitter.com/rebeccazamolo Facebook https://www.facebook.com/rebecca.zamolo
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034877.9/warc/CC-MAIN-20220625065404-20220625095404-00432.warc.gz
|
CC-MAIN-2022-27
| 1,227
| 7
|
https://www.servuo.com/archive/bulk-order-point-deed.1935/
|
code
|
Deed that adds a predetermined number of points to a predetermined craft's banked bulk order point balance.
Set scroll options in the code:
private const int _Points = 100; // Set your points here.
private const BODType _CraftSkill = BODType.Smith; // Set skill type here. Example: BODType.Smith
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654031.92/warc/CC-MAIN-20230608003500-20230608033500-00348.warc.gz
|
CC-MAIN-2023-23
| 294
| 3
|
https://forums.ageofempires.com/t/toggle-hp-bar-hotkey-also-toggles-idle-pointers/236434
|
code
|
- GAME BUILD #:
- GAME PLATFORM: Steam
- OPERATING SYSTEM: Windows 10
DESCRIBE THE ISSUE IN DETAIL (below). LIMIT TO ONE BUG PER THREAD.
As simple as the title says: the “toggle HP bar” hotkey also toggles the idle pointers.
How often does the issue occur? CHOOSE ONE; DELETE THE REST!
- 100% of the time / matches I play (ALWAYS)
List CLEAR and DETAILED STEPS we can take to reproduce the issue ourselves… Be descriptive!
Here’s the steps to reproduce the issue:
- use the hotkey
What was SUPPOSED to happen if the bug you encountered were not present?
The toggle HP bar hotkey should only do what it's supposed to do, without also toggling the idle pointer.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511361.38/warc/CC-MAIN-20231004052258-20231004082258-00525.warc.gz
|
CC-MAIN-2023-40
| 564
| 12
|
https://deepai.org/publication/are-distributed-ledger-technologies-ready-for-smart-transportation-systems
|
code
|
It is widely recognized that next generation Internet services will massively resort to crowd-sourced and crowd-sensed data, coming from multiple sensors installed on multiple devices. Data aggregation provides the backbone for analyses able to capture findings that would not be possible with single sensors. This is true in smart transportation systems as well, where services are built on data sensed by vehicles [vanderHeijden:2017]. Transportation efficiency, travel safety, vehicle security, and environment monitoring are just a few examples of the types of services that might be offered [mousannif2011cooperation].
While the number of possible services is countless, a number of issues must be considered, basically related to the gathering, storing, and level of trust of the data. In fact, in order to share, aggregate, and trade data coming from vehicles, some features must be provided by the digital services in use, such as access control, authenticity, verifiability, and proof-of-location [ccnc2020]. This is where a new kind of technology can come to aid. Distributed Ledger Technologies (DLTs) are designed to provide a trusted and decentralized ledger of data. “DLT” is a novel term that extends the famous “blockchain” buzzword to include technological solutions that do not organize the data ledger as a linked list of blocks. Blockchains gathered momentum when Bitcoin and other crypto-currencies skyrocketed. Then, interest was mainly devoted to the possibility of building decentralized applications based on smart contracts [D'Angelo:2018, cryblock2019]. Currently, DLTs are widely utilized in scenarios where: i) multiple parties concur in handling some shared data, ii) there is no complete trust among these parties, and often iii) parties compete for the access/ownership of such data. This is a typical scenario for smart transportation services that exploit data sensed from multiple sources (vehicles). Hence, the question now is whether DLTs can be efficiently employed in such scenarios.
As a matter of fact, there are DLTs that have been designed with the intent to support the Internet of Things (IoT) [DiPietro:2018, gda-hpcs-16, gda-simpat-iot, sf-gda]. The main features of these novel technologies attempt to solve some limitations commonly attributed to other blockchains, such as limited scalability, sustainability, and transaction verification rate (i.e. how fast the system adds novel data to the ledger). Examples of these novel DLTs for IoT are IOTA [popov2016tangle] and Radix [radixkb]. However, while their design is very interesting, at the time of writing we are aware of just a few, and usually simplified, experimental studies on these technologies [Bartolomeu2018IOTAFA, BROGAN2018257, Elsts:2018:DLT:3282278.3282280, ccnc2020, 8767356, DBLP:journals/corr/abs-1902-04314]; none demonstrates the viability of these proposed technologies in IoT and smart city scenarios.
The aim of this work is, first, to propose a novel system architecture that exploits DLTs to support smart transportation systems. Second, we present an experimental evaluation of DLTs, based on the use of real data traces to emulate the data generation of a smart city traffic application. We analyze the performance of the IOTA DLT through tests that measure its degree of scalability and responsiveness in real-time scenarios. Through our tests, we demonstrate how the Masked Authenticated Messaging (MAM) extension module of the IOTA protocol can be used to reliably and securely store and share sensed data in smart mobility applications. However, the latencies for transaction validation turn out to be quite high (23 sec, on average). Clearly, such latencies might not be acceptable in certain real-time application scenarios. Thus, there is still room for improvement.
Moreover, we report on some tests over the Radix alphanet test network. However, due to the infancy of the Radix project, we are able to provide only some preliminary outcomes. Still, the obtained results seem encouraging, but further studies are needed on this DLT.
The remainder of this paper is organized as follows. Section II provides some background on the IOTA DLT. Section III describes the application scenario that has been built to perform the study. Section IV presents all the details of the experimental evaluation, how we conducted the experiments and which metrics have been considered. In Section V, we describe results of the extensive experimental evaluation we conducted over IOTA. Section VI provides a discussion on the obtained results and on possible techniques to improve the DLTs performance. Finally, Section VII provides some concluding remarks.
A DLT is a software infrastructure maintained by a peer-to-peer network, where the network participants must reach a consensus on the states of transactions submitted to the distributed ledger to make the transactions valid. Every participant in a DLT maintains a local replica of the ledger, which provides data transparency to network participants and ensures high availability of the system. The information recorded in a DLT is append-only, using cryptographic techniques that guarantee that, once a transaction has been added to the ledger, it cannot be modified.
In this work, we mainly focus on IOTA, a specific DLT that is well suited for the IoT and smart transportation systems. This project aims to solve problems of scalability, control centralization, and post-quantum security that are present in other blockchain technologies [Bartolomeu2018IOTAFA]. IOTA is a lightweight, permissionless DLT that enables participants to transfer immutable data and value among each other. From a distributed systems point of view, IOTA nodes are organized as a peer-to-peer overlay, where nodes exchange messages containing updates on the decentralized ledger. Nodes that run the entire DLT protocol are commonly referred to as full nodes. Since the IOTA architecture is still in its infancy, a “coordinator node” is currently present in the system. Its task is to perform a periodic checkpointing of the ledger, with the aim of withstanding possible large-scale security attacks. It releases milestone transactions that confirm that all the previous transactions are valid. The intention of the IOTA Foundation is that, after this transient phase, the coordinator will be shut off, hence making IOTA a pure peer-to-peer system [coordicide].
The IOTA decentralized ledger is not structured as a blockchain, but as a Directed Acyclic Graph (DAG) called the Tangle [popov2016tangle]. In the Tangle, graph vertices represent transactions and edges represent approvals. When a new transaction is issued, it must approve two previous transactions, and the result is represented by means of directed edges. The process whereby a node selects two random tip transactions from its ledger is termed “tip selection”. In addition to the tip selection, in order to attach a novel transaction to the Tangle, a node must perform a Proof of Work (PoW), i.e. a computation to obtain a piece of data which satisfies certain requirements and which is difficult (costly and time-consuming) to produce but easy for others to verify [popov2016tangle]. The purpose of the PoW is to deter denial-of-service attacks and other service abuses.
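The hard-to-produce/easy-to-verify asymmetry of such a PoW can be sketched as follows (a simplified stand-in: real IOTA PoW searches for a nonce yielding trailing zero trits of the transaction hash, whereas this toy uses trailing zero hex digits of a SHA-256 digest):

```python
import hashlib

def proof_of_work(data: bytes, zeros: int = 2) -> int:
    """Find a nonce such that sha256(data + nonce) ends with `zeros` zero
    hex digits: each extra zero multiplies the expected search cost by 16,
    while verification always remains a single hash."""
    nonce = 0
    while not hashlib.sha256(data + str(nonce).encode()).hexdigest().endswith("0" * zeros):
        nonce += 1
    return nonce

nonce = proof_of_work(b"tx-payload")
digest = hashlib.sha256(b"tx-payload" + str(nonce).encode()).hexdigest()
print(digest.endswith("00"))  # True
```

Raising `zeros` plays the role of IOTA's minimum weight magnitude: it tunes how expensive attaching a transaction is.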
The validation approach is designed to address two major pain points associated with traditional blockchain-based DLTs, i.e. latency and fees. IOTA has been designed to offer fast validation, and no fees are required to add a transaction to the Tangle [BROGAN2018257]. This makes IOTA an interesting choice to support smart services built on crowd-sourced data.
An important feature offered by IOTA is Masked Authenticated Messaging (MAM). MAM is a second-layer data communication protocol that adds the functionality to emit and access an encrypted data stream over the Tangle. Data streams take the form of channels, formed by a linked list of transactions in chronological order. Once a channel is created, only the owner can publish encrypted messages on it. Users that possess the MAM channel encryption key (or set of keys, since each message can be encrypted with a different key) can decode the messages. Messages are pushed on the channel in chronological order, and each message has a link to the next message to be created. Thus, once users gain access to the MAM channel, they can see data from that moment on, but they cannot look back through the history of the channel before their entrance [BROGAN2018257]. In other words, MAM enables users to subscribe to and follow a stream of data generated by some devices. Access to new data may be revoked simply by using a new encryption key.
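The channel structure can be illustrated with a toy hash-linked stream (an illustration only: real MAM encrypts payloads, rotates keys, and links each message forward to the address of the next one, whereas this sketch links backward for simplicity):

```python
import hashlib, json

def make_message(payload: str, prev_hash: str) -> dict:
    """Append a message to a toy channel: each message commits to its
    predecessor via a hash link, giving a chronologically ordered,
    tamper-evident stream."""
    msg = {"payload": payload, "prev": prev_hash}
    msg["hash"] = hashlib.sha256(json.dumps(msg, sort_keys=True).encode()).hexdigest()
    return msg

genesis = make_message("temp=21.5", "0" * 64)
second = make_message("temp=21.7", genesis["hash"])
print(second["prev"] == genesis["hash"])  # True
```

Altering any earlier payload would change its hash and break every later link, which is the append-only property the paper relies on.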
III On the Use of DLTs for Smart Transportation
We consider a set of vehicles, equipped with sensors that can generate data of some interest (see Figure 1). Such sensed data can be transmitted through a network to an edge computing infrastructure. Thus, each vehicle interacts with a gateway, transmitting sensed data on a periodical basis. The gateway collects and handles the data, based on the specific service being realized. The nature of this specific platform is out of the scope of this work, since it truly depends on the kind of service to be hosted. For instance, it might be organized as a classic cloud system, or as a distributed file system to store data, e.g. IPFS [Benet2014IPFSC].
In order to provide a level of traceability, verifiability, and immutability of the generated data, the data itself, or a related digest (when the data is a large file or sensitive information), is added to a DLT [ccnc2020]. We assume the gateway is able to issue messages to a DLT node, thanks to authentication. These messages are converted into transactions added to the ledger. In general, all DLTs provide this kind of functionality. For instance, in IOTA, Radix and Ethereum (e.g. through the INFURA APIs), there are APIs that allow entities external to the DLT to send a novel transaction. The main point here is that these transactions must be registered in the DLT quickly. Second, a good level of scalability must be guaranteed. Third, since a high amount of data is produced, the DLT should impose low fees (or no costs at all). Finally, we need to treat all these transactions as a data stream that is easy to retrieve. By design, IOTA is recognized as a responsive, scalable, feeless DLT, with MAM channels as the tool to treat data as streams. For this reason, the evaluation focuses on IOTA.
IV Experimental Evaluation
In this work, we are interested in evaluating how well IOTA can serve as the immutable registry for smart transportation systems. Thus, we focused on the transmission of sensed data to IOTA, measuring the latencies needed to issue, insert, and validate transactions, as well as the level of reliability of the full nodes.
IV-A The Trace-driven Vehicles Simulation
We conducted a trace-driven experimental evaluation. Traces were generated using the RioBuses dataset, a real dataset of mobility traces of buses in Rio de Janeiro (Brazil) [coppe-ufrj-RioBuses-20180319]. Based on these traces, we simulated a number of buses that, during their path, generate sensed data. (The type and purpose of such data are out of the scope of this evaluation, since we are mainly interested in the behaviour of the DLT; it suffices to assume that they represent typical, small-sized sensed data, such as temperatures, air pollution values, etc.) We assume that the time spent to fetch such data is negligible with respect to the time to publish it to the DLT.
These messages were utilized to generate real transactions transmitted to the DLT. Each message was sent to a given DLT node; how this node was selected is discussed in the next subsection. Figure 2 shows, as an example, the paths of 10 buses that were considered during our tests. We varied the number of buses in the range 60, 120, 240. For each bus, we utilized one hour of trace data. Based on the paths, each bus was set to generate approximately 45 messages/hour. Thus, we ran one-hour-long tests, where each bus generated, on average, a message to be issued to the DLT every 80 sec, which is a reasonable interval for sensing data in an urban scenario. For each test configuration, we replicated the experiment 12 times.
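As a quick sanity check of the arithmetic above (the bus counts and per-bus rate are the values stated in the text):

```python
msgs_per_bus_per_hour = 45
interval = 3600 / msgs_per_bus_per_hour  # seconds between two messages of one bus
print(interval)  # 80.0

# Aggregate message load offered to the DLT for each tested fleet size
for buses in (60, 120, 240):
    print(buses, round(buses * msgs_per_bus_per_hour / 3600, 2), "msg/sec")
```

So doubling the fleet doubles the offered load, which is what the later scalability tests vary.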
For each transaction, we recorded the outcome of the request (successful, or unsuccessful due to some DLT node's internal error), as well as the latency between the transmission of the transaction and the confirmation of its insertion in the ledger.
IV-B IOTA Setup
Each bus was emulated by a single process (issuing messages based on the data trace). Thus, the first task was to find, for each bus, a full node of the IOTA DLT to interact with. In IOTA, network full nodes do not usually allow retrieving their neighbors in the P2P overlay through the API. This hinders the possibility of performing an in-depth graph search on the overlay in order to retrieve an up-to-date list of active nodes to interact with. Thus, in our tests we could rely only on services that maintain a public list of active nodes [iotanodes]. With this in view, the scheme we designed to select the IOTA nodes to contact is as follows. Given the list of public nodes, a filter is applied to keep only nodes that are fully synchronized (i.e. the node has solidified all the milestones up to the latest one released by the coordinator) and that allow remote PoW. A number of such nodes were available during testing.
Then, we designed three heuristics to select, from the public pool, a full node to pair with each bus:
Fixed Random: Each bus is assigned a random IOTA full node from the pool during the setup phase; then, every transaction generated by that bus is handled by this node for the whole duration of the test.
Dynamic Random: A random node from the pool is selected every time a message has to be published by a bus.
Adaptive RTT: For each bus, the associated node changes every time a message has to be published while the previous one is still pending. Based on the results of past interactions, the known IOTA nodes are ranked by the experienced Round Trip Time (RTT) [jacobson1988congestion]. Then, a new node is chosen by selecting the best known node or, if every known node is in the process of publishing a message, a new node is picked randomly from the pool.
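The third heuristic can be sketched as follows (a hypothetical minimal implementation; the node names and the bookkeeping details are illustrative assumptions, not part of the paper's code):

```python
import random

class NodePool:
    """Sketch of the "Adaptive RTT" heuristic: rank known nodes by their
    last observed round-trip time, pick the best idle one, and fall back
    to a random pool node when every known node is busy publishing."""
    def __init__(self, nodes):
        self.rtt = {}      # node -> last observed RTT in seconds
        self.busy = set()  # nodes with a publish still pending
        self.pool = list(nodes)

    def pick(self):
        idle_known = [n for n in self.rtt if n not in self.busy]
        if idle_known:
            return min(idle_known, key=self.rtt.get)  # best known RTT
        return random.choice(self.pool)               # explore a new node

    def record(self, node, rtt):
        """Call when a publish completes, updating the node's ranking."""
        self.rtt[node] = rtt
        self.busy.discard(node)

pool = NodePool(["node-a", "node-b", "node-c"])
pool.record("node-a", 2.5)
pool.record("node-b", 0.7)
print(pool.pick())  # node-b (lowest RTT among idle known nodes)
```

The random fallback is what lets the scheme keep discovering fresh nodes instead of saturating the few it already knows.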
We associated a MAM channel with each single bus. Every message published in the MAM channel requires three transactions to be issued, i.e. one containing the data and two others for the signature. The advantage of this approach is that through each MAM channel it is possible to easily retrieve the bus’s data stream, and that only the channel owner can publish on it. An example of a (private) MAM channel, specifically created for a bus during our tests, can be found by querying the IOTA DLT with the root: JEIJZEVPUGHHKEKKDSFFEYYTVSFRXOUYWFH9LZIKKKQEDO9L9MK9LIVOZUIPCML9RCHNDRQYPNGNOUOGO. The entire dataset and the scripts used are stored in a GitHub repository [githubrepo]. For each transaction, we measured the time required to perform the tip selection as well as the PoW. The tip selection depth parameter, i.e. the number of milestones to go back to start the random walk to select tips, was set to 3, whilst the minimum weight magnitude, i.e. the number of trailing zeros of a transaction hash, was 14 (the minimum standard value for the IOTA mainnet).
Figure 3 shows the results obtained for different test repetitions, for a fixed number of emulated buses. In particular, we show the results for each scheme we employed for the selection of the nodes. In the upper part, the histograms report the average latencies measured during a single test. The orange (lighter) part of the histogram shows the average latency of the tip selection, while the blue (darker) part shows the average latency of the PoW. The red (central and smaller) bars refer to the percentage of errors (the related y-axis is shown on the right of the figure), i.e. the amount of transactions that failed to be added to the Tangle due to full nodes’ errors. In the lower part of the figure, we show the average standard deviations related to the specific tests, both for the tip selection and the PoW. From the figure, it is possible to appreciate that, in general, a random selection of the full node to issue a transaction does not lead to good results. The amount of errors is quite high, as are the measured latencies. Thus, these tests seem to indicate that, at the time of writing, the IOTA DLT is not fully structured to support smart services for transportation systems. On the other hand, the good news is that if we carefully select the full node to issue a transaction, the performance definitely improves. In fact, our third scheme, “Adaptive RTT”, has a low amount of errors, on average around 0.8%. Its measured latencies are lower than those of the other approaches. Still, the average latency amounts to 23 seconds, which is far from a real-time update of the DLT. The acceptability of such latency values truly depends on the application scenario.
These first results suggest that scalability tests might give further insights into the viability of IOTA as the DLT supporting a smart transportation system. For this reason, we ran tests with an increasing number of buses.
Figure 4 shows the average results obtained using our three considered schemes when varying the number of buses. Results are reported as box plots; each box plot corresponds to the average results for a scheme in a given scenario. This allows assessing the scalability of each scheme by looking at the results for an increasing amount of buses. At the same time, it is possible to compare the three schemes by looking at their performance in each scenario. In the box plot, the diamond represents the mean value of the overall latency (i.e. the time from the transmission of the transaction to the node to the acknowledgement that it has been added to the Tangle). The rectangle identifies the Inter-Quartile Range (IQR), i.e. values from the 25th to the 75th percentile; the middle box thus represents the middle 50% of values. Hence, the lower edge of the box (denote it Q1) is the first quartile (25th percentile), and the upper edge (denote it Q3) is the third quartile (75th percentile). The red line inside the box is the median value. The lower and upper values identified by the vertical lines are the whiskers, defined as 1.5 times the IQR: the lower whisker is Q1 - 1.5*IQR, while the upper whisker is Q3 + 1.5*IQR; they represent a common way to describe the dispersion of the data. The symbols outside the whiskers are the outliers. To better show the obtained results, the y-axis is reported in a log scale.
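The box-plot quantities just described can be computed directly (a simple nearest-rank percentile sketch, not the exact interpolation a plotting library might use):

```python
def box_stats(values):
    """Return (Q1, median, Q3, lower whisker, upper whisker) for a sample,
    with whiskers at 1.5*IQR beyond the quartiles, as in a standard box plot."""
    v = sorted(values)
    def pct(p):
        # nearest-rank percentile: index p * n, clamped to the last element
        return v[min(len(v) - 1, int(p * len(v)))]
    q1, med, q3 = pct(0.25), pct(0.5), pct(0.75)
    iqr = q3 - q1
    return q1, med, q3, q1 - 1.5 * iqr, q3 + 1.5 * iqr

q1, med, q3, lo, hi = box_stats(range(1, 101))  # the sample 1..100
print(q1, med, q3)  # 26 51 76
```

Any latency sample falling outside [lo, hi] would be drawn as an outlier symbol beyond the whiskers.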
The results confirm that “Adaptive RTT” performs better. Its average latencies are definitely lower than those of the other schemes. It is worth noticing that, since the y-axis is in log scale, the difference in performance is substantial; in particular, the first two schemes have very large outliers. In all cases, average latencies increase with the number of buses. This suggests that the number of full nodes devoted to transaction management should increase proportionally to the number of buses. Indeed, if we assume that 60 full nodes are used, in the 240-bus tests we have 4 buses per node. This means that every 2 sec an IOTA node receives a request for a novel transaction, which requires 23 sec (on average, using “Adaptive RTT”). The results confirm an important difference between the 240-bus scenario and the 120-bus scenario. This means that further improvements are needed to solve scalability issues.
| # buses | Heuristic | Avg Latency | Conf. Int. (95%) | Errors |
|---------|----------------|-------------|----------------------|--------|
| 60 | Fixed Random | 72.68 sec | [70.43, 74.94] sec | 15.37% |
| 60 | Dynamic Random | 56.0 sec | [54.51, 57.5] sec | 18.26% |
| 60 | Adaptive RTT | 22.99 sec | [22.69, 23.29] sec | 0.81% |
| 120 | Fixed Random | 87.75 sec | [85.38, 90.12] sec | 29.49% |
| 120 | Dynamic Random | 67.6 sec | [66.29, 68.9] sec | 18.99% |
| 120 | Adaptive RTT | 27.35 sec | [27.11, 27.58] sec | 1.1% |
| 240 | Fixed Random | 177.62 sec | [174.25, 181.0] sec | 42.81% |
| 240 | Dynamic Random | 128.2 sec | [126.28, 130.12] sec | 44.85% |
| 240 | Adaptive RTT | 73.26 sec | [72.68, 73.85] sec | 7.55% |
To better emphasize the outcomes, Table I reports the summarized statistics (those shown in the box plots) together with the error rates. The main difference in the performance of the approaches lies in the amount of errors: while the average error for “Adaptive RTT” is %, the other two schemes have errors well above %. Such error rates are clearly unacceptable, making these approaches unusable in practice.
Finally, Figure 5 shows the empirical cumulative distribution function obtained for the compared schemes in the 120 and 240 bus scenarios. In this case, for the sake of better visualization, the x-axis is shown in log scale. These charts further confirm the better performance obtained by the “Adaptive RTT” scheme.
VI-A. On the Performance of IOTA
The obtained results require some discussion. On one side, we have shown that, through a proper selection of full nodes, it is possible to obtain a reliable ledger update (low error rates), thus making the use of IOTA viable to support smart transportation systems. On the other side, however, the measured latencies are non-negligible. In our tests, we employed publicly available IOTA full nodes to add transactions; thus, we relied on such nodes to perform the tip selection and the PoW. The rationale behind this choice was the assumption that sensors placed on the buses do not have the computational capabilities to behave as full nodes [Elsts:2018:DLT:3282278.3282280].
IOTA offers a view of the status of public full nodes [iotanodes]; thus, it is possible to monitor their computational capabilities and workload. During our experimental evaluation, all these nodes usually had a low computational load. Nonetheless, results confirm that the selection of the node is quite relevant. As a further confirmation of this claim, in our preliminary tests we tried to exploit a heuristic alternative to those presented in the previous section. The idea was to select the best N full nodes, in terms of available resources, and use them to issue the transactions. With N=10, we measured very poor performance. This was due to the fact that, while apparently well provisioned, certain full nodes were not able to sustain the workload coming from our application ( message/sec). Trying to increase the scalability of the system and better balance the nodes' workload, we increased the value of N. However, with N=20, we noticed a high variability in the performance of the employed full nodes, with substantial differences between the highly ranked and the lower ranked public full nodes. For this reason, we found it simpler (with similar performance) to employ the “Fixed Random” approach.
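To make the node-selection idea concrete, the following Python sketch illustrates an RTT-driven selector in the spirit of the “Adaptive RTT” heuristic: it keeps an exponentially smoothed round-trip-time estimate per full node and prefers the node with the lowest estimate. The class name, the smoothing factor and the exploration probability are our assumptions for illustration, not the paper's actual implementation or parameters:

```python
# Simplified sketch of an adaptive-RTT node selector (illustrative only).
import random

class AdaptiveRTTSelector:
    ALPHA = 0.2    # EWMA smoothing factor (assumed value)
    EXPLORE = 0.1  # probability of probing a random node (assumed value)

    def __init__(self, nodes):
        # Smoothed RTT estimate per full node; None means "never measured".
        self.rtt = {n: None for n in nodes}

    def select(self):
        """Pick the node to issue the next transaction to."""
        unknown = [n for n, r in self.rtt.items() if r is None]
        # Probe unmeasured nodes first; occasionally probe a random node
        # so that stale estimates get refreshed.
        if unknown or random.random() < self.EXPLORE:
            return random.choice(unknown or list(self.rtt))
        return min(self.rtt, key=self.rtt.get)

    def update(self, node, measured_rtt):
        """Fold a new RTT measurement into the node's smoothed estimate."""
        old = self.rtt[node]
        self.rtt[node] = (measured_rtt if old is None
                          else (1 - self.ALPHA) * old + self.ALPHA * measured_rtt)
```

A client would call `update()` after every issued transaction and `select()` before the next one, so slow or overloaded nodes are progressively avoided.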
An alternative approach might be to employ an edge computing system model, where the PoW is executed locally by the gateway (see Figure 1). (The tip selection must always be accomplished at a full node, which maintains a complete copy of the Tangle.) The rationale would be to relieve the IOTA node from the computational burden of the PoW. However, this would require equipping the gateway with sufficient computational capabilities to perform the PoW for all the transactions generated by the buses it handles.
Finally, it would be possible to let the gateway act as a full node for the DLT. This would actually resemble the testbed we considered in this work (also because the IOTA public nodes we exploited had a low concurrent workload during our tests). In this case, the difference would be that the full node would be under direct control: its hardware characteristics might be properly dimensioned to tolerate a certain predicted workload, and the node might be reserved to handle transactions from the specific smart transportation application only. This scenario represents an interesting direction for future work.
VI-B. On the Use of Alternative DLTs
We conducted preliminary tests with other blockchains, such as the well known Ethereum. However, Ethereum was not designed to register a huge amount of transactions containing (typically small sized) sensed data: the cost of issuing a transaction in a block is usually quite high. Moreover, confirmation times and scalability limitations are two other main factors that discourage the adoption of this technology. In fact, because of a hard-coded limit on computation per block, the Ethereum blockchain currently supports roughly 15 transactions per second. All this makes Ethereum an impractical technology for our scenario.
Clearly enough, it might be interesting to evaluate the performance of DLTs, designed to support smart transportation scenarios, that implement novel techniques to improve scalability. An example is sharding, i.e. breaking the ledger into smaller, more manageable chunks, and distributing those chunks across multiple nodes, in order to spread the load and maintain a high throughput.
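The core of sharding as described above is a deterministic assignment of each transaction to a chunk of the ledger. A generic hash-based assignment can be sketched as follows (this is a textbook illustration, not how Radix or any specific DLT actually assigns shards):

```python
# Generic hash-based shard assignment (illustrative sketch).
import hashlib

def shard_of(tx_id: str, num_shards: int) -> int:
    """Deterministically map a transaction id to one of num_shards shards.

    Every node agrees on the mapping because it depends only on the
    transaction id, so each shard can be validated by a subset of nodes.
    """
    digest = hashlib.sha256(tx_id.encode()).digest()
    # Use the first 8 bytes of the hash as an integer, then reduce modulo
    # the shard count; a cryptographic hash spreads ids evenly.
    return int.from_bytes(digest[:8], "big") % num_shards
```

Because the hash output is close to uniform, transactions spread roughly evenly across shards, which is what allows throughput to grow with the number of shards.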
A novel DLT that implements sharding techniques is Radix [radixkb]. At the time of writing, the Radix technology is still in its infancy and a main net does not yet exist. Nevertheless, we exploited the alphanet test network to issue transactions on the ledger. This gave us some preliminary results, reported in Table II. The table shows the average latency, confidence interval and error percentage for adding transactions on the Radix alphanet. Results are averaged over 12 test repetitions. In this case, we obtained very low latencies (below 1 sec), with a non-negligible (but low) error rate. It is worth pointing out that these results can be indicative of the functioning of Radix. However, we claim that it is difficult to compare these results with those obtained for IOTA. In fact, for IOTA we exploited the main net, while for Radix we had to employ a preliminary testnet, with few nodes involved in the ledger management ( nodes) and basically no additional workload apart from our tests. As a matter of fact, comparable results can be obtained if tests are executed on the IOTA test net, where the PoW is faster (we obtained average latencies around sec).
| # buses | Avg Latency | Conf. Int. (95%) | Errors |
|---------|-------------|-----------------------|--------|
| 120 | 777.17 msec | [774.68, 779.65] msec | 2.73% |
While there are novel interesting proposals to improve scalability, such as sharding or the Ethereum plasma [poon2017plasma], a main problem concerns the high fees associated with every transaction. IOTA is designed to be feeless, in order to let billions of devices and sensors interact with the Tangle without costs. Conversely, Radix and Ethereum use fees. Such costs may be acceptable only when the transaction fees are negligible with respect to the value of the data. However, we claim that, in general, the need for fees hinders the use of a DLT in smart transportation scenarios.
In this paper, we proposed an architectural solution resorting to DLTs to support smart transportation systems. The benefit of using distributed ledgers is that they allow sensed data to be safely and securely stored, offering authenticity, verifiability and immutability. Moreover, DLTs can be employed to provide proof-of-location [ccnc2020].
We analyzed the main characteristics of current DLTs and focused on the one that, among the others, promises to be the best solution for smart transportation scenarios, i.e. IOTA. We thus carried out an extensive experimental evaluation, whose results have been summarized and analyzed. The conclusion is that work probably remains to be done in order to provide effective distributed ledgers for smart transportation systems. In fact, it is important to be able to select proper nodes to interact with, in order to obtain acceptable error rates. Moreover, the measured latencies were higher than 20 sec, which is quite high for real-time applications, though reasonable for less time-demanding services. In any case, this might be a transient problem, which could be solved by improving the IOTA peer-to-peer infrastructure.
Furthermore, in our tests all the work (i.e. tip selection and PoW) was performed by the full nodes. The rationale was to relieve sensors and devices from this task [Elsts:2018:DLT:3282278.3282280]. An alternative solution might be to delegate the PoW to some other entity, such as a gateway in between the vehicle sensors and the full node. Moving the PoW from the full nodes elsewhere might strongly improve the performance of the DLT nodes. The study of this possible improvement is ongoing.
A technique to improve responsiveness may be based on the use of sharding. Indeed, we studied a novel technology, i.e. Radix, that is specifically based on sharding, obtaining interesting results. However, an open security question arises: if we decrease the number of nodes that validate each transaction (as sharding does), does the risk of a successful attack increase?
This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie ITN EJD grant agreement No 814177.
Saving Mr. Moore!
AUG 14, 2023
Mr. Moore's Chemistry students started off the year having to save their teacher from becoming a zombie. In escape room fashion, the students had to solve a variety of puzzles to find the ingredients in the lab to keep Mr. Moore from turning into a zombie! Nobody wanted a zombie as a teacher, so they worked together and solved the puzzles, and found the ingredients. We are happy to report that Mr. Moore is still Mr. Moore (the human)!
This package supports the following driver models: ATI Mobility Radeon X1350. Latest AMD/ATI drivers for mobility / notebook graphics and Microsoft Windows. ATI MOBILITY RADEON X1350 - there are 6 drivers found for the selected device, which you can download from our website for free. Select the driver needed. Jan 10, 2014 ATI Mobility Radeon X1350 - Which version of Catalyst Control Center with Mobility Modder to have the newest working (stable) driver. RADEON X1350 WIN 10 DRIVERS. with an ATI RADEON X1350 video card. For Win 10 which is the best solution about ATI drivers? Can I use another driver. ATI Mobility Radeon X1350 - Driver Download. Updating your drivers with Driver Alert can help your computer in a number of ways. From adding new functionality. Ati Mobility Radeon X1350 Driver for Windows 7 32 bit, Windows 7 64 bit, Windows 10, 8, XP. Uploaded on 4/20/2019, downloaded 6574 times, receiving.
Samsung Galaxy S2 Bluetooth Driver
Realtek Rtl8188s Driver Windows XP
This package supports the following driver models: ATI Radeon Xpress 1250 ATI PCI Express (3GIO) Filter Driver. This package supports the following driver models: ATI Mobility Radeon X2300. Update your nVidia graphics processing unit to the latest drivers. The Catalyst Mobility drivers are Microsoft WHQL certified. They support the Radeon chipsets: HD 3200, HD 3450, HD 3470 and HD 3870 X2. ATI Radeon Mobility HD 4500 - Driver update on Windows 10 and found "ATI Radeon Mobility HD 4500" listed it for some old drivers. Update your computer's drivers using DriverMax, the free driver update tool - Display Adapters - ATI Technologies Inc. - ATI Mobility Radeon X1350 Computer Driver Updates. ATI Mobility Radeon X1350 How to install drivers in Windows 8 (x64), Windows 8.1 (x64) and Windows 10 (x64): Disable Driver Signature Enforcement in Windows. How to solve driver support problems especially when you the original ATI drivers ATI Mobility Radeon X1300, ATI Mobility Radeon X1350.
Hello. Details: Notebook: HP PAVILION ZD8323EA (Intel Pentium 4 3GHz, 2MB cache, hyperthreading and 2GB of 533MHz RAM). Graphics: ATI Radeon Mobility X600 with 128MB dedicated memory. Here you can download the driver for the ATI MOBILITY RADEON X1350. The Mobility Radeon X1350 was a graphics card by ATI, launched in September 2006. Built on the 80 nm process, and based on the M62 graphics processor. Download ATI Mobility Radeon 9500, 9600, 9700, X300, X600, X800, X1300, X1600, X1800 and FireGL 6.5 Drivers. OS support: Windows XP. Category: Graphics Cards. Download drivers for AMD ATI MOBILITY RADEON X1350 video card, or download DriverPack Solution software for automatic driver download and update. AMD/ATI driver for Mobility Radeon X1350 Windows Vista (32bit). Version: 6.11 / 188.8.131.52. Release Date: 2006-08-31. Filename: . Download the latest ATI Mobility Radeon X1350 device drivers (Official and Certified). ATI Mobility Radeon X1350 drivers updated daily. Download.
Mobility Radeon X1350: M52: Mobility Radeon drivers for AMD/ATI Radeon * Drivers for AMD/ATI Radeon devices * Drivers for the AMD/ATI Radeon device. Download latest mobility drivers for AMD/ATI Mobility Radeon X1350 and Microsoft Windows Vista 32bit. Download the latest drivers for your ATI MOBILITY RADEON X1350 to keep your computer up-to-date. Download the latest ATI Mobility Radeon X1350 device drivers (official and certified). ATI Mobility Radeon X1350 drivers are updated daily. Download the latest Windows drivers for ATI Mobility Radeon X1300 (Microsoft Corporation - WDDM) Driver. Drivers Update tool checks your computer for old drivers. Jul 21, 2007 This package supports the following driver models: ATI Mobility Radeon X1350. Install AMD ATI MOBILITY RADEON X1350 driver for Windows 7 x64, or download DriverPack Solution software for automatic driver installation and update.
This package provides the ATI display driver for Microsoft Windows Vista. This package supports the following driver models: ATI Mobility Radeon X1350. Download the latest ATI Mobility Radeon X1350 device drivers (official and certified). ATI Mobility Radeon X1350 drivers updated daily. Download. Download the latest driver for ATI MOBILITY RADEON X1350, fix the missing driver with ATI MOBILITY RADEON X1350. Download latest mobility drivers for AMD/ATI Mobility Radeon X1350 and Microsoft Windows Vista 32bit. Download the latest drivers for your ATI Mobility Radeon X1450 to keep your computer up-to-date. ATI Portable Graphics Driver for Windows Vista 32-bit. ATI Mobility Radeon 9600 Series, \DELL\DRIVERS\R153383 in the Open textbox and then click.
Specifications and benchmarks of the ATI Mobility Radeon X1350 video card for notebooks. AMD/ATI driver for Mobility Radeon X1350 Windows Vista (32bit). Ati Mobility Radeon X1350 Driver for Windows 7 32 bit, Windows 7 64 bit, Windows 10, 8, XP. Uploaded on 4/20/2019, downloaded 6574 times, receiving a 87/100 rating. Ati mobility radeon x1350 drivers download, download and update your Ati mobility radeon x1350 drivers for Windows 7, 8.1, 10. Just download. Official ATI Technologies Inc ATI Mobility Radeon X1350 Drivers download center, download and update ATI Technologies Inc ATI Mobility Radeon X1350 drivers in 3 steps. Catalyst Mobility Legacy drivers for graphics cards equipped with an ATI chipset. These drivers have been modified (modded) with the utility. This release is for all the graphic card family products that updates the media driver to the latest version. This unified driver has been further enhanced to provide.
Driver Vista Home Basic Iso Imagem free
INSTALLING USING WINDOWS UPDATE. Base on my experience installing windows 7 on Acer Aspire 4920G which use ATI Radeon X1350. Windows automatically install the drivers.Use the links on this page to download the latest drivers for your ATI MOBILITY RADEON X1350 from our share.Has anyone successfully installed a driver in Windows 10 for the ATI Mobility Radeon™ X1350 support for 1440x900 widescreen resolution is drivers.ATI mobility drivers for Vista i cant download ATI catalyst mobility drivers from ATI longer. Mobility Radeon™ X1350; Mobility Radeon™ X1300.Download ATI Radeon X2300 Mobility Graphics Drivers for Windows 7, 8.1, 10, Just update ATI Radeon X2300 Mobility Graphics drivers for your device.Sep 30, 2018 please help need driver for ATI Mobility Radeon™ X2300 for Radeon X1350 to the X2300, but there are no Windows 7 drivers.ATI Mobility Radeon X1350 (Microsoft Corporation - WDDM) Drivers may also be available for free directly from manufacturers' websites.
I am blogging for class and to share my experience with computer systems.
Thursday, October 23, 2008
Computers and Education: The Most Useful Gadget for Students on a Budget
Now that you paid for your classes and books, you discover you are short on cash to purchase some of those really cool gadgets to get you through the semester. Now what do you do? Well now you are in triage mode, you make a decision as to what you absolutely need to have and list them by priority. Since I am a part-time student I do understand your pain.
One of the most common computer help requests I receive both on and off campus is "My computer will not start and I need my files".... I cannot count the times this has happened, especially after one of our Indiana thunderstorms and lightning has fried the computer. Students need not ever fall into this by using the gadgets I recommend.
With all the computer labs available on campus, a laptop even though it would be real nice and useful, I cannot put a laptop on the top of the list because you can get through classes without one if you are short on cash. So what should I have?
I believe the most useful gadget is the USB/pen/thumb drive. Why? To back up your homework, that's why! All the lab computers, both PC and Mac, are compatible with USB drives. A good drive with 4 GB of storage is what I recommend, and two or more if you are an arts student with lots of graphic files. If you need more storage, get an external USB hard drive; with 1 TB drives now available for less than $250.00, you will have plenty of room to back up anything you need for class.
With all the external storage options available, a student can use a Mac or PC in the labs and store their precious homework and research on the campus network drive as well as on their own external drive. If a student has a PC or Mac at home, then it is simple to move their data to and from school. Let me reiterate: back up, back up and back up. If you need the document for your grade, back it up, even if it is just a few notes for a speech. Knowing you have your schoolwork safely stored in multiple places will take a bit of stress off your mind.
Do I suggest any particular brand or model? No, for there are far too many out there, but I do suggest Staples or Best Buy stores; online, Newegg and Tiger Direct are other useful gadget sites. If you can afford a laptop, I prefer the new Apple MacBook: it will run Windows as well as OS X, and you will have all the built-in gadgets you need for school. Now wait for the next episode...
In case you’re a TL:DR kind of person, I thought I’d share some highlights from the Azure backup webinar.
Azure Backup Highlights
- You can create an Azure backup (“Data Protection Module”) configuration for backing up servers (duh) but also computers and laptops. In either case you must install a DPM client on the end device.
- Azure backups are stored in three different files by default (simple redundancy). You can also specify that you want geo-redundancy (files stored in different data centers) as well. It does cost a little more.
- Like most backup and recovery tools, Azure backup will create full and incremental (what they call “differential”) backups. For the incremental backups, you specify how many days back you want to go. For instance, you can specify that the incremental backup should cover the prior five days of data.
- You can perform backups daily, but only once or twice. This is fine for data that doesn’t change a lot (documents, for example). But Azure backup isn’t a good solution for data that changes rapidly, such as database information. (There are other solutions for frequent backup use cases).
- Azure works with many backup applications. You can continue to use your application (e.g., Veeam) but specify Azure as the storage location. In these situations, you set up an Azure storage account and configure the backup application to communicate with Azure over a REST API.
Azure Site Recovery
Maybe you need to go beyond Azure backup and plan to back up an entire server—application and data. I wrote about this recently here. You can use a service called Azure Site Recovery to do this.
Here for Nonprofits
Microsoft, Google and others have all stepped forward to help nonprofits, in response to the COVID-19 pandemic. That’s great to see. CGNET recently joined Microsoft’s Technology for Social Impact team, which focuses on connecting nonprofits with partner organizations and assets serving those organizations.
One nice thing in the webinar: there’s a nice slide toward the end (try the 52:31 mark) that summarizes the offers for nonprofits. For instance (now I know) that if your organization expects to spend a lot on Azure service, you can qualify for a 15% discount.
Oh, and if you need help understanding how to register for Microsoft nonprofit offers, check out this short video.
I made a diagram that illustrates most of the algorithms I wish Sage had to support my current mathematical research program. I am focusing most of my Sage development effort on ensuring that these get implemented as soon as possible. I will not depend on anybody else to help me with any of this, though if people would like to help in various ways that would be greatly appreciated. Over half of these are in the closed commercial Magma software already, which makes it much easier for me to implement them in Sage.
Research Versus the Goals of the Sage Project
A concern I have is that there might be disagreement in the Sage community about some of the design decisions that are necessary to support the implementation of my planned algorithms. For example, improving number field arithmetic to be dramatically faster requires making some sacrifices so that the implementation takes one week instead of two months (e.g., fundamentally depending on FLINT for basic arithmetic, which at least one Sage developer will be quite unhappy about). Even though I'm the leader of the Sage project, I won't just add code to Sage that a lot of people don't like. However, implementing everything planned here is a lot of work, and given the time and resources I have available, it will be impossible to fully document and test everything up to the standard necessary for inclusion in Sage. If this gets to be a nontrivial problem, I will maintain my own alternative ``fork'' of Sage, which I'll distribute separately. It will have `bleeding edge'' code that I'm developing, and I'll make full source releases available on a separate website, but I will not have to worry about constantly updating packages (like Maxima and Scipy) that are irrelevant to my research. The architecture of Sage makes creating such a fork easy, and of course I know how to manage this. In a few years, once I've built everything and complete the research I plan to do with it, I can spend the time merging back the good parts into mainline Sage. I know that several other Sage developers (e.g., Nick Alexander, David Roe, etc.) do something similar, since their personal research is so important to them, and getting code into Sage is overly difficult. It is very important that I acknowledge this tension, because currently the demands of "publishing" code in Sage -- and indeed of maintaining Sage as a general purpose system -- are so tough that they make genuine development of Sage as a research tool too difficult. Solution: create something that isn't Sage.
Here is the diagram. It is a tiny picture, but if you click on it you should get a bigger PDF that you can zoom into (if you are using Linux, download this PNG instead).
I will spend the rest of this post discussing the first few algorithms at the top of the diagram, and my motivation for implementing them. Future posts will address the rest of the algorithms.
The top of the diagram lists number fields, and the two projects involve greatly speeding up basic arithmetic in Sage with elements of number fields. This is important to my research, since I intend to do explicit computations with elliptic curves and modular forms over number fields, including making large tables; hence fast arithmetic is essential. Arithmetic in some cases in Sage is ridiculously slow, especially for relative fields. Fixing this involves switching from using NTL for all number field arithmetic to using FLINT (via Sebastian Pancratz's new rational polynomials) for absolute fields and Singular for relative extensions. I spent some time on this project and the results are at Trac 9541, but the code there should not be used as is. Instead, I'll finish this project by extracting out the best ideas from those patches, and adding more. The main idea for relative number fields is to create a C interface (written in Cython!) to Singular for computing with transcendence degree 1 function fields over the rational numbers. The other lesson I learned when working on Trac 9541 is that it is orders of magnitude easier to delete all code related to using NTL for number fields instead of trying to simultaneously support several different models for number fields.
Function Fields

Continuing clockwise, the next cluster of algorithms involves function fields of transcendence degree 1 over the prime subfield, i.e., function fields of algebraic curves. Function fields are important for my research because an application of explicit Riemann-Roch spaces and divisor reduction is a key step in completing a paper on torsion points on elliptic curves over quartic fields that I'm writing with Kamienny and Stoll. Function field arithmetic, rings, ideals, polynomial factorization, norms, traces, ``Popov reduction'', etc., are the subject of Trac 9054, which should be relatively easy to finish. The next step is to implement Florian Hess's algorithm for Riemann-Roch spaces, by following the description in Florian's papers and talks on the topic. Next, I'll have to implement divisor reduction and computation of maximal orders (normalization of curves). I suspect that Chris Hall will help. Parallel to this, it is also a good idea to use the same code as in the number field case to make basic arithmetic in certain function fields much faster than the generic code of Trac 9054.
Elliptic Curves over Function Fields
I am also interested in code for computing with elliptic curves that are defined over function fields (usually over finite fields). This is related to work of Chris Hall, Sal Baig, and others at recent SageDays on function fields. This code will make it easier to computationally explore function field analogues of new ideas related to the Gross-Zagier formula and the BSD conjecture. The main goals are to implement algorithms for computing every quantity appearing in the Birch and Swinnerton-Dyer conjecture, including torsion, Tamagawa numbers, L-functions, Mordell-Weil groups (2-descent), and regulators. Also, I need to implement code for doing computations with Drinfeld modules, since Drinfeld modules provide the analogue of modular curves and Heegner points in the function field setting.
Sep. 5th, 2022 at 10:30 AM
( Check her Out )
The waypoints are nice, but it's going to be nice to be able to get out and drive again.
She's very roomy. Want a tour? I can come pick you up.
She's going to need a little TLC if I'm going to have her ready for the race at the end of the month (she'll definitely be ready one way or another), if any of you want to help out a bit.
Want to be my copilot?
Hey, so I've been working on an open campaign, but I don't want to step on any toes with what you've already got set up. Mind if I pick your brain a bit?
Is there a way of specifying distinct requirements for a package, for using it, for building, and for testing?
For example, I have a package that needs LightXML, but only for running deps/build.jl, which downloads one or more data files, stores them in a data subdirectory, and then produces a data file which it also stores there, which is used by the package. The package itself does not need LightXML, and I don’t want the extra overhead of that dependency just to use the package. The same is true for testing, where we want BaseTestNext (for v0.4.7) and BenchmarkTools, but only when running the unit tests, not for deployment.
If there is no way of handling those issues, will that be something addressed by the Pkg3 redesign?
Also, is it acceptable for build.jl or runtests.jl to check whether the package(s) are already present, and if not, call Pkg.add (and possibly Pkg.build) before using the package(s), in order to keep those out of REQUIRES?
Picture-based Identification Dictionary
A long time ago, I dropped by a session of a creative writing course where the teacher brought a lot of reference books. One of them was a lovely picture/sketch-based identification dictionary. There ...
Dec 15 '11 at 2:44
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1409535920849.16/warc/CC-MAIN-20140909052629-00417-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 2,308
| 52
|
https://mail.python.org/archives/list/distutils-sig@python.org/message/YTCKI5R4FFXAWNOENS5TDC34WLKR3R52/
|
code
|
On Mon, Oct 12, 2009 at 9:10 AM, Reinout van Rees
On 2009-10-09, Tarek Ziadé
On Fri, Oct 9, 2009 at 2:03 PM, kiorky
AND no, virtualenv must continue to provide setuptools, backward compatibility, you know?
The *whole* point of Distribute 0.6.x is to be backward compatible, meaning that if virtualenv switches to it, you will not even notice it.
For something that's supposed to be a drop-in replacement, it sure makes itself noticeable. Note that I'm not holding that against Distribute! The mess is of setuptools' making.
- When using buildout, I get lots of warnings. The 1.4.2 isn't out yet, but I also won't update all old projects' pinned zc.buildout version so I'm stuck with warnings for a time.
Please try with your current zc.buildout; we've also fixed compatibility with older zc.buildout versions yesterday, so it should work.
Any idea what these kinds of errors mean? I can't believe I'm the only one. It happened on my osx laptop and on the linux server and within a buildout-installed buildbot.
Do you happen to have a shared eggs directory? If that's the case, all buildouts that use it need to be re-bootstrapped with the Distribute bootstrap.

Tarek
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712297284704.94/warc/CC-MAIN-20240425032156-20240425062156-00645.warc.gz
|
CC-MAIN-2024-18
| 1,162
| 10
|
https://github.com/ereckers/feather
|
code
|
A simple and easy responsive WordPress bootstrap3 starter theme from Red Bridge Internet.
Feather is a WordPress starter theme based on Bootstrap and influenced by Roots and Underscores. Feather is simple and easy to use and employs the simplest implementations of Twitter's Bootstrap3, WordPress Codex, and libraries from the popular WordPress starter themes Roots and Underscores.
This is very near to a WordPress skeleton theme. There are no wrappers, no rewriting, no advanced configurations, no init scripts, and it's not all that DRY. You'll see familiar template files such as index, single, archive, and page, sidebars, a few widgets/sidebars, and a couple menus.
This is meant for hacking away at. Don't bother creating a child theme for it. Once you grab it, don't feel compelled to update it. It uses enough WordPress functionality that we're essentially outsourcing backwards compatibility to WordPress, which they have taken pretty seriously. This means this thing should last you a good long while.
This starter theme optionally employs a number of popular and well-regarded plugins to enhance functionality. A list of these plugins can be found below. These can be activated to enhance functionality, but are not required.
Frameworks, Libraries, and Resources:
This theme makes use of the following frameworks, libraries, and resources:
- Google Fonts
- Font Awesome (Override)
This theme includes the following template files. We are using the WordPress naming conventions for template files.
This theme registers the following widgets/sidebar areas:
| Widget area | Name | Description |
| --- | --- | --- |
| sidebar-primary | Primary Sidebar | Default primary sidebar, usually containing blog-specific widgets. |
| sidebar-page | Page Sidebar | Used on page templates, usually containing page/site menus and other material. |
Feel free to update these to match any registered widgets/sidebars on already existing sites.
This theme registers the following menu areas:
Feel free to update these to match any registered menus on already existing sites.
We defer to popular and well-regarded plugins to extend functionality within the theme.
| Plugin | Description |
| --- | --- |
| WordPress SEO | SEO, breadcrumbs, sitemap solution. |
| Jetpack | Stats, subscriptions, carousel, related posts, sharing, contact form, widget visibility, custom CSS, shortcode embeds, extra sidebar widgets, monitor, enhanced distribution. |
| Custom Field Suite | Appears as (Field Groups) and allows for creating custom fields. Basic port of ACF with available repeatable-fields functionality. |
| Custom Content Type Manager | (Custom Content Types) post types (use Field Groups for custom fields). |
| WP Minify Fix | A current fork of WP Minify. JS and CSS concatenation and minification. http://wordpress.org/plugins/wp-minify-fix/ |
Documentation of plugins we have made use of to achieve certain requirements and functionality.
| Plugin | Description |
| --- | --- |
| Gravity Forms | Forms creation. Requires a license to use and replaces Contact Form & (which is a great free option). |
| Posts 2 Posts | Create many-to-many relationships between posts of any type. |
| Post Types Order | Order posts and post type objects using a drag-and-drop interface. |
| Google Analytics for WordPress | Plugin for implementing Google Analytics. |
| Configure SMTP | SMTP manager. For times in which your server does not provide a mail server. |
| Members | A user, role, and content management plugin. |
| WP Session Manager | Adds $_SESSION-like functionality to WordPress. |
| RICG Responsive Images | Bringing automatic default responsive images to WordPress. |
- make framework even simpler by removing redundant template files (ie. page-2col)
- if Comments are closed do not display Leave a comment
- roll in some recent modifications from client work
- blog list featured image
- notations for add_image_sizes
- add Theme Options via Option Tree
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710870.69/warc/CC-MAIN-20221201221914-20221202011914-00659.warc.gz
|
CC-MAIN-2022-49
| 3,768
| 37
|
http://theallensbaby.blogspot.com/2010/03/we-do-this-lot.html
|
code
|
We take people down a hallway and find any open door. Then we go in the room and slowly close the door and say "Bye, bye. Bye, bye. Bye, bye." Then when it's ALMOST shut, we... Open it and laugh really hard!
When Life Doesn't Turn Out Like You Expected
5 days ago
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00273.warc.gz
|
CC-MAIN-2018-30
| 263
| 3
|
https://forum.glyphsapp.com/t/make-metrics-keys-lowercase/13075
|
code
|
This is probably something that should rather be posted in a thread concerning the respective script, but maybe this would be useful in Glyphs itself: I am using the script Make Glyph Names Lowercase by mekkablue in order to make my newly generated smallcap names (from capitals) lowercase, as, I assume, most people do when starting with smallcaps. Glyphs works great with the metrics keys, as they are automatically updated to point to the respective smallcap glyph instead of the capital it stems from. When using the script, however, the metrics keys are not made lowercase, so they now point to empty glyphs (to H.smcp instead of what is now called h.smcp). Is there any workflow that circumvents manually making the metrics keys lowercase for all smcp glyphs? Would this be useful in the aforementioned script? Or do I not have any excuse that prevents me from just writing two lines of script myself, come to think of it?
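For what it's worth, the key-rewriting part really is small. Here is a minimal Python sketch; the string handling is plain Python, while the surrounding Glyphs API usage (`Glyphs.font.glyphs`, `leftMetricsKey`, `rightMetricsKey`) is an assumption about the macro environment and is shown only in comments:

```python
def lowercase_key(key):
    """Lowercase a metrics key such as '=H.smcp' -> '=h.smcp'.

    Leading operators like '=' or '=|' are preserved; only the
    glyph-name part is lowercased."""
    if not key:
        return key
    prefix = ""
    rest = key
    while rest and rest[0] in "=|":
        prefix += rest[0]
        rest = rest[1:]
    return prefix + rest.lower()

# In the Glyphs Macro window, one would apply it roughly like this
# (hypothetical usage, not run here):
# for glyph in Glyphs.font.glyphs:
#     if glyph.name.endswith(".smcp"):
#         glyph.leftMetricsKey = lowercase_key(glyph.leftMetricsKey)
#         glyph.rightMetricsKey = lowercase_key(glyph.rightMetricsKey)

print(lowercase_key("=H.smcp"))  # =h.smcp
```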
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337853.66/warc/CC-MAIN-20221006155805-20221006185805-00036.warc.gz
|
CC-MAIN-2022-40
| 934
| 1
|
https://armv8r64-refstack.docs.arm.com/en/v4.0/manual/boot.html
|
code
|
As described in the High Level Architecture section of the Introduction, the system can boot in 3 different ways. The corresponding boot flow is as follows:
The booting process of baremetal Zephyr is quite straightforward: Zephyr boots directly from the reset vector after system reset.
For baremetal Linux, the booting process is shown in the following diagram:
And the booting process of the virtualization solution in this stack is shown in the following diagram:
The Boot Sequence section provides more details.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296950110.72/warc/CC-MAIN-20230401160259-20230401190259-00364.warc.gz
|
CC-MAIN-2023-14
| 499
| 5
|
https://www.sitefinity.com/developer-network/forums/developing-with-sitefinity/how-can-i-check-the-access-right-to-a-returnurl-querystring
|
code
|
Hm... I tried it.
I have three roles: Admin, Author, and Members.
1. The user is not logged in on my Sitefinity website.
2. The user clicks a link in the navigation (sample: http://.../documents.aspx). This page requires the access rights of an Admin or an Author.
3. For this reason, he is redirected to http://.../login.aspx?
4. The user logs in, but he is only a user with the Members role.
5. Via the ReturnURL he comes back to documents.aspx, but he has no access rights for this page.
6. An error message appears.
I want to avoid this error. The user should be redirected to another page.
....i have to lern english :-(
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934807056.67/warc/CC-MAIN-20171124012912-20171124032912-00144.warc.gz
|
CC-MAIN-2017-47
| 611
| 11
|
https://philosophy.stackexchange.com/questions/102125/does-russells-objection-to-meinongianism-apply-whenever-we-take-the-meta-versio
|
code
|
A third problem, one of Russell's objections to Meinongianism (see [Russell 1905a, 1907]), turns on the fact that existence is, on Meinongianism, a property and hence figures into the base of the naive comprehension principle. So, consider the condition of being winged, being a horse, and existing. By the naive comprehension principle, there is an object with exactly these features. But then this object exists, as existing is one of its characterizing features. Intuitively, however, there is no existent winged horse. An existent object cannot so easily be thought into being. Indeed, for every intuitively nonexistent object that motivates Meinongianism—Zeus, Pegasus, Santa Claus, and Ronald McDonald—there is, by the naive abstraction principle, an object just like it but with the additional property of existing. But then there is an existing Zeus, an existing Pegasus, etc. This is overpopulation not of being but of existence as well.
So suppose we use the encoding/exemplifying distinction. Shouldn't we be able to encode an object for the predicate exemplifies something, just like we can encode something per exists? If this is not enough to recapitulate Russell's objection, what about encoding an object as encodes exemplifying something? And then encodes encoding exemplifying something, etc.?
Or, then, can we show that any attempt to adjust our theory of predication to accommodate these pairs of existence-like predicates will fail once taken to its second- or maybe third-order? E.g., can we not stipulate that there is a Meinongian object for, "A cleverly disguised shrimp with a property that is both the bare existence property and a nuclear property"? The nuclear/extranuclear distinction is supposed to block that by making such a property into a contradiction, which is nullified from the system. However, for a (neo-)Meinongian, is it not possible that there is an impossible object with the property of being a violator of the nuclear/extranuclear distinction, or more classically-speaking being such that its nuclear properties are its extranuclear properties (c.f. Russell's paradox, and the motives behind his theory of types)?P
P: or take a naively/toy Platonic distinction between active and passive essential predication. One might also call this a fountain/mirror distinction, although historically, it is emphasized in the form of the participation terminology, wherein (at least some) Forms do self-participate (at one stage in Plato's thought, anyway, perhaps). At any rate, suppose the Form of Nonexistence was the set of things that don't exist. If it has no elements, then everything exists. However, if it is not an element of itself, then this Form does exist. But if the Forms self-participate, i.e. they are (by type) what they are reflected as, then the Form of Nonexistence should be the prototype of all its possible elements, to wit a nonexistent thing. Accordingly, if Plato doesn't have a Form of Nonexistence, here, then there's a Form for everything, including paradoxically (inconsistently) the very Form of Nonexistence, too. For if this Form doesn't exist, then the Form of Nonexistence is an element of itself. But if it does not exist, it has no elements. So if it didn't exist, it would be both an element and not an element of itself. So either way, it exists.
Or consider the Form of Imperfection, i.e. of imperfect participation, such as by physical exemplars. If it perfectly participates in itself, then it imperfectly participates in itself perfectly. But if it doesn't exist, then all participation is perfect and the supposedly most important distinction between Forms and exemplars is extinguished. (In fact, it seems as though Plato did wander through these woods already with his inchoate talk of a demiurge, or a demi-Form (supposing we take there to be a Form-operator F in our toy system, so that we can go to the demi-operator for F) perhaps; and of a world-soul, too, no less (and a world of Forms, which should be the Form of the World, except then it's also the Form of the Good (if the world is the sun and what it illuminates, so to say), and the Form of the Good is the Form of Forms and the Form of demi-Forms, after all; and so on and on).)
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00666.warc.gz
|
CC-MAIN-2024-18
| 4,232
| 5
|
https://sophiabits.com/blog/vercel-and-netlify-are-slick
|
code
|
Vercel and Netlify: slick AF
When I was about 10 years old, I was introduced to HTML and GameMaker (version 7!) while at the George Parkyn Centre. As we were all quite young, they were only able to show me the basics but I was immediately hooked. My closest experience to programming prior to this was typing into DOS prompts while my stepfather showed me how to set up Windows computers.
I made a lot of really crappy HTML pages, and slowly worked my way through all of the online tutorials that were available (is DHTML a term still?). At the end of that school year, I even made a good bye website for my teacher as she was leaving for South Africa—by far my biggest project at that point. She was extremely grateful for it, and I think that was really the moment when I decided to one day work in the field of software: it was immensely satisfying to see the effect my website had on my teacher.
This website was just a collection of HTML files on our classroom’s shared desktop computer. I didn’t know how to actually deploy my pages to the Internet so that others could access them just yet, and in fact one time I actually tried to get an online friend to check out one of my websites by typing in something like
C:\html\page.html into his address bar. Curiously enough, that didn’t work.
A couple short years later and I ran into PHP. Back then, PHP was great! Node.js was literally only just released (it was 2009) and so PHP was just the way you did things. I made a little browser-based RPG that used a MySQL server and “deployed” it to a free hosting account I had with Byethost.
Deploying new changes was a bit of a pain because I was using their clunky web-based interface to perform FTP transfers, but I was mostly just happy that I was actually able to share my web-based things with other people at long last.
The next big discovery was when I started writing scripts which would do all the FTP operations automatically for me, which was a big win for my operational efficiency :) Version control via subversion (TortoiseSVN was great!) similarly improved things because it meant I didn’t need to copy the whole app folder every time I wanted to make a change and still have a backup handy.
And then at some point I learned about CI/CD. The holy grail! At this point I was on Git, so I could push my code and automatically have it deployed. No extra effort necessary at all… aside from needing to set up all those annoying configuration files.
(Note that here I’m talking about hobby projects. For software at scale of course you’ll want to fine tune things like your CI/CD pipelines)
Anyway, all of that history was described to lead up to the real topic of my post today. Providers like Netlify and Vercel fill in that missing gap of automatic CI/CD, and it’s absolutely brilliant. We’re at a point in time where you can run create-react-app, push it to a Git repo, and then click that Git repo from the Vercel UI to automatically wire up everything you need for continuous delivery and feature branch previews. You don’t even need to pick the boilerplate you’re using—both Vercel and Netlify can infer it correctly almost 100% of the time.
And these services have free tiers!
In today’s world, it’s disappointingly common that developer tooling comes with steep learning curves and behaves flakily. Vercel, Netlify, and other products in the same space are a real breath of fresh air and inject so much joy back into software development. They’re great—and I want more opinionated, turnkey solutions because when we make it easy to get into software development, the whole world benefits from all the added people taking up the profession.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00634.warc.gz
|
CC-MAIN-2023-50
| 3,697
| 13
|
https://www.tfaforms.com/4862352
|
code
|
A condition of our funding from the federal government is that we ask the following demographic questions, in order to ensure we are reaching diverse audiences. Please note that these responses will not affect the content or delivery of the workshop, but are strictly for reporting in aggregate to the government.
If you don’t know exact percentages, that’s fine — rough estimates are perfectly acceptable.
If you're strongly opposed to answering any of the demographic questions, please email firstname.lastname@example.org
, so we can submit your registration manually. Before exiting, please make sure to click "Save my progress and resume later" above, and follow through with the questions needed to save the form.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474690.22/warc/CC-MAIN-20240228012542-20240228042542-00435.warc.gz
|
CC-MAIN-2024-10
| 725
| 4
|
http://www.dailynorseman.com/2013/5/31/4382426/king-vis-fantasy-football-league
|
code
|
No, this is not an invitation. Unfortunately, all of the spots are taken as far as I know. This is just a way for me to try to get King's attention. I have posted two questions on the league home page 3 days ago and have not seen a reply. One question was in regards to the draft date. I have only participated in 2 fantasy drafts myself, but it seems to me that the set date, June 20th, is quite early. At that point, there are still many personnel decisions to be made, injuries to be had (though we hope that there aren't any). The second one was regarding a feature of the ESPN fantasy website. The site contains a mock draft function. So far as I have seen, they only have 12-team mock drafts, but I haven't done much investigating. So, I was wondering if any of the league's participants would be interested in doing a practice mock draft.
So, King VI, or anyone participating for that matter, please respond to this if you are interested. Leave a comment below, or you can e-mail me Jessie_Silbaugh@yahoo.com.
Also, I recently saw Star Trek (2009), so I'll leave you with this.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500835872.63/warc/CC-MAIN-20140820021355-00027-ip-10-180-136-8.ec2.internal.warc.gz
|
CC-MAIN-2014-35
| 1,084
| 3
|
https://socratic.org/questions/how-do-you-convert-5-5-x-10-3-into-standard-notation
|
code
|
How do you convert 5.5 x 10^-3 into standard notation?
So you divide 5.5 by 1,000 (because 10^-3 = 1/1000).
But you can also move the decimal point 3 places to the left, and pad zeroes to fill the empty places.
In both cases the answer will be 0.0055.
Easy to remember: you now have 3 zeroes at the left side of your number (including the one in front of the decimal point).
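The same conversion can be checked quickly in code; this short Python snippet is just an independent check of the arithmetic above:

```python
# Convert 5.5 x 10^-3 to standard (decimal) notation.
value = 5.5 * 10**-3        # same as dividing 5.5 by 1,000
print(f"{value:.4f}")       # 0.0055
```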
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511406.34/warc/CC-MAIN-20231004184208-20231004214208-00222.warc.gz
|
CC-MAIN-2023-40
| 325
| 5
|
https://wiki.libsdl.org/SDL2/SDL_IntersectFRect
|
code
|
Calculate the intersection of two rectangles with float precision.
SDL_bool SDL_IntersectFRect(const SDL_FRect * A,
                            const SDL_FRect * B,
                            SDL_FRect * result);
|A||an SDL_FRect structure representing the first rectangle|
|B||an SDL_FRect structure representing the second rectangle|
|result||an SDL_FRect structure filled in with the intersection of rectangles A and B|
Returns SDL_TRUE if there is an intersection, SDL_FALSE otherwise.
If result is NULL then this function will return SDL_FALSE.
This function is available since SDL 2.0.22.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100724.48/warc/CC-MAIN-20231208045320-20231208075320-00206.warc.gz
|
CC-MAIN-2023-50
| 527
| 8
|
https://www.halfbakery.com/idea/web_20app_20protocol
|
code
|
h a l f b a k e r y
"It would work, if you can find alternatives to each of the steps involved in this process."
add, search, annotate, link, view, overview, recent, by name, random
news, help, about, links, report a problem
or get an account
Imagine I've subscribed to some online calendar. Imagine also that I've gone to some site and just bought tickets for some movie tonight. It would be awesome if there were some way for distributed web-app sites to transfer information from one to the other such that I could immediately go back to my calendar online and see that it now shows an entry for that date and time - even though the calendar and ticket site are separate companies.
Component Service Provider
http://www.halfbake...nt Service Provider
Jim Flanagan's coinage of this term is apt. XML-RPC (and SOAP) is a reasonable protocol. Furthermore, it's working: Jim added discussion to his weblog using Take It Offline's XML-RPC interface. With similar published interfaces, sites should be able to easily add services provided by other sites. [syost, Mar 28 2000, last modified Oct 04 2004]
The iCalendar XML DTD could be of use here. Note that it expired as an Internet-Draft last week (http://www.imc.org/draft-many-ical-xmldoc). iCalendar seems to be about general calendar sharing. More is probably needed to define an interface for updating an individual calendar. I haven't read the full draft. Maybe a quick answer could be gotten from Frank Dawson. [syost, Mar 28 2000, last modified Oct 04 2004]
XML Protocol Comparison at W3C
Compares protocols with various levels of generality and gives their current status [syost, Mar 28 2000, last modified Oct 04 2004]
Calendaring and Scheduling IETF Working Group
"This work will include the development of MIME content types to represent common objects needed for calendaring and group scheduling transactions and access protocols between systems and between clients and servers..." [egnor, Mar 28 2000, last modified Oct 04 2004]
||Perhaps, though XML-RPC has issues.
||But Triptych's scenario requires something more. Ideally, the ticket vendor wouldn't just hook up with one particular calendar vendor, but instead send a general "calendaring event" through the user's browser to *any* Web site the user has elected to use for calendaring.
||It's unclear whether this would happen immediately, or whether the event would wait, cookie-like, until the user next visited the calendar.
||(Of course, the protocol should be fully general, not restricted to calendaring per se.)
||It would be a Cool Thing, when combined with standard schemas for things like calendar events. Security and privacy options abound; it would have to be constructed carefully.
||Sending "events" through a browser from one site to another is fraught with security problems and of course requires new browsers. I would like, however for a ticket site to give me a choice of calendaring sites -- that list being the ones that support the defined interface.
||The fully general protocol is at the level of XML-RPC or SOAP. The calendaring instance would be, for example, a specific XML-RPC/SOAP interface that's agreed upon by calendaring vendors.
||RSS is another XML-based description that's enabling lots of interesting sites like my.netscape.com, my.userland.com, http://www.oreillynet.com/meerkat/, and others. It's just another example of inter-site coordination that's enabled by agreeing on a spec.
||What are the issues with XML-RPC? Post here or send me email: syost at takitoffline dot com
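As a concrete illustration of the kind of agreed-upon interface discussed above, here is a small self-contained Python sketch of a hypothetical `calendar.add_event` method exposed over XML-RPC. The method name and payload are invented for illustration; this is not any real vendor's interface.

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

events = []  # the "calendar site's" storage

def add_event(title, start):
    """Record an event; return the number of events stored."""
    events.append({"title": title, "start": start})
    return len(events)

# The calendar site publishes the agreed-upon interface.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(add_event, "calendar.add_event")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The "ticket vendor" (a separate site) calls the user's chosen calendar.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
count = proxy.calendar.add_event("Movie tickets", "2000-04-01T19:30")
print(count)  # 1
server.shutdown()
```

The interesting part is not the code but the agreement: any site implementing the same method signature is interchangeable, which is exactly the inter-site coordination the annotations describe.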
||Thanks to [syost] for drawing my attention to this discussion. I posted a rant at Edge City (http://edgecity.convey.com/index.cvy?a=page&sec_id=7492&page_id=20656) on a similar topic. Here's a quote:
"There is no interoperability (or next to none) between the various services. I cannot maintain more than one calendar(or maybe two, at great personal and manual involvement).
||What would be cool would be to have the equivalent of POP3 access to calendar information, so that one calendar system could suck information from another. This is an obvious opportunity for the synchronization people, like Intellisync and Starfish. (Instead, Starfish has created yet another not very good web PIM, www.truesync.com... I never could figure out how to a/ contact customer support, or b/ how to create shared calendars.) "
||stoweboyd, could use you use "link" to enter a URL, instead of "annotate"? That way it's clickable.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711344.13/warc/CC-MAIN-20221208150643-20221208180643-00419.warc.gz
|
CC-MAIN-2022-49
| 4,443
| 50
|
https://bitcoin.stackexchange.com/questions/37430/how-many-people-own-crypto-currencies
|
code
|
Is there an estimate on how many people in total own some cryptocurrencies?
How is this distributed among bitcoin and altcoins?
As Luca mentioned, it is really difficult to measure, but there is a way to map Bitcoin addresses to IP addresses under certain conditions. Refer to this paper:
Title : An Analysis of Anonymity in Bitcoin using P2P Network Traffic.
Authors : Philip Koshy, Diana Koshy, and Patrick McDaniel.
Note: It analyses the relay patterns of transactions. It cannot analyse transactions that are already confirmed in blocks.
Some companies publicly disclose their data and it might be possible to come up with a ballpark estimate. Blockchain.info shares the number of wallets used in their My Wallet service.
It is next to impossible to estimate.
Consider this scenario: You live in a deserted place with just phones/post offices. Using just pen & paper/calculator you derive a fresh new bitcoin address and its private key. You call any of your friends, who could be anywhere in the world, who have internet, and you give them this address. They make a transaction sending enough satoshis to buy milk and relay it to the network.
You could then give away your private key to a milkman to buy milk. If he doubts you, he can call any of his friends in the world who have internet and confirm that the address holds a balance.
You and the milkman are untraceable unless your friends decide to give you up. No technology can do it. Well, maybe some neuro-tech can scan their brains but ..
I could travel across borders holding my money in my mind.
As of today, Blockchain's My Wallet has about 3.75M users. Adding this to Coinbase's user base of 2M and MultiBit's 1.5M makes about 7.25M Bitcoin users. Note that the actual number of users may be well above this figure; as you can figure out, all that is needed is just a key!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473347.0/warc/CC-MAIN-20240220211055-20240221001055-00074.warc.gz
|
CC-MAIN-2024-10
| 1,855
| 13
|
https://devhub.io/topic/raspberrypi
|
code
|
Developed by devhub.io
Lightweight justice for your single-board computer.
Live GLSL coding render for MacOS and Linux
A Python package and CLI tool to work with w1 temperature sensors like DS1822, DS18S20 & DS18B20 on the Raspberry Pi, Beagle Bone and other devices.
Open Internet of Things Framework - ThinG Wider!
A Simplified Approach to Container Orchestration
Advanced Debian "jessie" and "stretch" bootstrap script for RPi2/3
Wyliodrin STUDIO is a Chrome based IDE for software and hardware development in IoT and Embedded Linux
An All-In-One home intrusion detection system (IDS) solution for the Raspberry PI.
RaspberryPi (minimal) unattended netinstaller
Flashing Tool for SBCs
A smart time-lapse driver for Raspberry PI / raspistill
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00472.warc.gz
|
CC-MAIN-2021-17
| 742
| 12
|
http://meteorology.ou.edu/event/dr-muschinski-march-13-colloquium/
|
code
|
Optical propagation through the turbulent and non-turbulent atmosphere
The propagation of light through the atmosphere is affected by refraction, diffraction, scatter, and absorption. These processes are important for the understanding, design, operation, and optimization of various kinds of optical systems, including optical scintillometers, lidars, optical imaging systems used in astronomy and the geosciences, navigation and surveillance systems, and free-space optical communication systems. This presentation gives an overview of various observational, theoretical, and computational aspects of the line-of-sight propagation of light through the turbulent and non-turbulent, clear atmosphere. We will present and discuss various passive optical remote sensing techniques to probe winds, waves, and turbulence in the atmosphere.
Dr. Muschinski received his Diplom-Physiker degree (M.S. in physics) from the Technical University of Braunschweig, Germany in 1990. His Dr. rer. nat. (Ph.D. in natural sciences) and Habilitation degrees in meteorology are from the University of Hannover, Germany (1992 and 1998, respectively). From 1998 through 2004 he was a CIRES Research Scientist at the University of Colorado in Boulder and was affiliated with the NOAA Environmental Technology Laboratory. From 2004 through 2011, he taught at the ECE Dept. of the University of Massachusetts in Amherst, MA (2007: Jerome M. Paros Endowed Professor in Measurement Sciences; 2008: full professor). Since 2011, he has been a Senior Research Scientist at the Boulder office of NorthWest Research Associates. During the last 28 years, he has conducted research on atmospheric turbulence and atmospheric wave propagation.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335276.85/warc/CC-MAIN-20220928180732-20220928210732-00405.warc.gz
|
CC-MAIN-2022-40
| 1,708
| 3
|
https://www.cornellelectricvehicles.org/platform
|
code
|
Today, our autonomy compute hardware is general-purpose, power-intensive, and heavy. The Platform Subteam will address these shortcomings, and investigate the burgeoning possibilities of heterogeneous computing, leveraging industry FPGA SoCs.
We hope to develop and integrate optimized computing platforms to improve autonomy capabilities and achieve research outcomes under the guidance of faculty. We plan to use a mix of traditional CPU computing with FPGAs to take advantage of their core advantages:
- Parallelization: FPGA’s parallel computing capabilities allow us to accelerate large-scale operations, such as image processing, and assign independent tasks to dedicated hardware sections.
- Configurability: FPGA’s configurability allows us to iteratively improve our self-driving algorithms and easily synthesize new designs. Their customizable I/O blocks allow us to optimally interface compute with exteroceptive sensing for self-driving.
If this challenge is of interest to you, please reach out to Full Team Lead Kunal Gupta (kg379) directly.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100599.20/warc/CC-MAIN-20231206130723-20231206160723-00722.warc.gz
|
CC-MAIN-2023-50
| 1,059
| 5
|
https://slashdot.org/~kermidge/tags/maybe
|
code
|
My project involves research where I run my own code to repeatedly analyze a ~30TB dataset, where data is in flat files on an xfs filesystem and each job reads a subset of these files, always sequentially. The task is inherently sequential, so this isn't parallelizable in ways suited to Map-Reduce, for instance. I get parallelism by running one research job per CPU core, and conducting many experiments in parallel.
The architecture I have so far is to use a direct-attached external JBOD chassis and hardware RAID6 over 16 x 4TB 3.5" SATA drives. I then NFS export this read-only dataset to other compute nodes nearby, which also run compute jobs accessing the same data. I am currently CPU bound, and so far I think I could grow to 2-4 more compute nodes all reading from this NFS filesystem. Once I exceed the bandwidth of the drives, I'll buy another chassis & drives (50TB) and mirror my dataset there.
I'm looking to expand compute capacity, and looking for advice between:
— scale out with many cheap nodes (Dell R620)
— fewer beefy nodes (Dell R820)
— Dell blade solutions (M1000e + many M620 blades)
— Dell VRTX blade with internal storage + compute
I have heard that blades can be finicky (setup, compatibility), and it seems surprising to me that these enterprise technologies would be the most price-efficient. Yet the pricing seems reasonably good, and the power consumption should be better. Are blades a popular choice for HPC?
I've also heard that it's possible to directly attach storage to multiple compute nodes. Presumably if I do this, I need to switch filesystems — is this advisable? It seems like it might perform better than NFS over 1GigE.
Happy to hear what people think.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123172.42/warc/CC-MAIN-20170423031203-00179-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,715
| 10
|
https://bluerosegirls.blogspot.com/2013/05/why-editors-sometimes-dread-talking-to.html?showComment=1368103264664
|
code
|
A few months ago, I got a ms. of mine back from its publisher -- NOT Little, Brown. The publisher had:
- rewritten most of the book
- added two pages of text that contradicted the main message
- completely changed the ending
- changed the title
-- and all without asking the author, me. By the time I saw the edits, the book was in pages, with all the art in place. This wasn't a work-for-hire project; it was a contracted book.
I wrote a couple of emails to the publisher, requesting changes as politely and firmly as I could; the first was ignored. They replied to the second saying they had, in an editor's words, "gone as far as we can" (which wasn't very far: they changed a few words). At that point, I let it go.
EVERY author, I think, has high hopes for every book -- but sometimes, you have to just move on. Thanks partly to the wise advice of friends, I did. I put it out of my mind and went back to the novel I'm now rewriting.
Luckily, my agent will be the one submitting this, not me.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988775.80/warc/CC-MAIN-20210507090724-20210507120724-00324.warc.gz
|
CC-MAIN-2021-21
| 996
| 9
|
https://chdk.setepontos.com/index.php?topic=3974.10
|
code
|
maybe more important - on all proxies between you & the machine that hosts/deliver the page).
So, there was an existing problem ....
you accidentally removed the 2009-07-18 news entry
No, I removed that to see if it would be missing from the main page, which it was not, of course.
So the "solution" is to make an edit (on the page where the template is embedded) to force a refresh
Thanks for your courteous help
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652184.68/warc/CC-MAIN-20230605221713-20230606011713-00347.warc.gz
|
CC-MAIN-2023-23
| 413
| 6
|
https://docs.telerik.com/reporting/report-items/graph/structure
|
code
|
The Graph is a powerful and complex report item that displays a variety of elements to adequately display visual information as required.
The following image displays combined Column and Line charts in a Graph item with a Cartesian coordinate system.
The following image shows a Rose (Bar) chart in a Graph item with a Polar coordinate system.
The Graph is a data report item and, similarly to the Table item, allows you to summarize data by the CategoryGroups and SeriesGroups hierarchical dimensions.
The following images visually compare the data representation offered by the Graph and Table report items:
Depending on its series type, the Graph enables you to display one or more measures. Like the other data-based report items, the Graph connects to a single data source and provides additional options for sorting and filtering the input data, binding, conditional formatting, and so on.
Conceptually, both report items use the same multidimensional data model:
- The ColumnGroups in the Table are identical to the CategoryGroups of the Graph.
- The RowGroups in the Table are identical to the SeriesGroups of the Graph.
- The Cells in the body of the Table definition are identical to the Series definitions of the Graph.
The CategoryGroups hierarchy defines the data points in the Graph series. For example, if you have a group by product categories in the CategoryGroups hierarchy, the number of the different categories will determine how many data points the series will have at runtime. If the product categories consist of Accessories, Bikes, Components, and Clothing categories, the series in the Graph will have four data points.
The SeriesGroups hierarchy defines the series at runtime. For example, if you have a group by the Year field in the SeriesGroups hierarchy, the number of the different years will determine how many series will appear on the Graph. If the Year field contains the years 2001, 2002, 2003, and 2004, the Graph will display four series for every series definition that is bound to this group.
The Graph series display aggregated data to visualize one or more measures. At runtime, the intersection between the SeriesGroups and the CategoryGroups hierarchy members defines the data points in the series. For each data point, one or more aggregate functions are calculated to define the value or coordinates of the data points.
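As a rough illustration of the multidimensional model described above (this is not Telerik's API — just a plain-Python sketch of how category groups become data points and series groups become series, using made-up sales rows):

```python
# Hypothetical input rows: (category, year, value) tuples.
rows = [
    ("Accessories", 2001, 10), ("Bikes", 2001, 20),
    ("Components", 2002, 15), ("Clothing", 2003, 5),
    ("Accessories", 2004, 8),
]

categories = sorted({r[0] for r in rows})   # CategoryGroups -> data points
years = sorted({r[1] for r in rows})        # SeriesGroups   -> series

# One series per year; one aggregated data point per category (Sum here
# stands in for whatever aggregate function the series definition uses).
series = {
    year: {cat: sum(v for c, y, v in rows if c == cat and y == year)
           for cat in categories}
    for year in years
}

print(len(series))        # 4 series, one per distinct year
print(len(series[2001]))  # 4 data points per series, one per category
```

The intersection of a series-group member (a year) and a category-group member (a product category) yields one aggregated data point, which is exactly the runtime behavior the Graph documentation describes.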
Depending on the series type, the Graph can visualize one or more measures:
- The Bar and Area series, including all derived subtypes, such as Pie, Doughnut, Bar, Column, and so on, represent single measures.
- The Range series, such as the Range Bar and Range Area, emphasize the distance between two values or measures.
- The Line series including all derived subtypes, such as Scatter, show the correlation between three different measures.
The Graph item uses a two-dimensional coordinate system that uniquely identifies the position of each data point. Each coordinate system consists of two reference lines called "coordinate axes" (or just "axes") and an "origin".
The Graph provides support for the Cartesian and Polar two-dimensional coordinate systems. Since there is a direct conversion between the two coordinate systems, they are interchangeable in the Graph report item. The coordinates of the data points in the Graph are represented by the (x, y) pair, which for the Polar coordinate system is converted to (ϴ, r), that is, (x, y) ⇔ (ϴ, r).
The coordinate system also defines the default appearance and style of the two axes.
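The direct conversion between the two coordinate systems can be sketched in a few lines of plain Python (the formulas are standard math, not Telerik code):

```python
import math

def to_polar(x, y):
    """(x, y) -> (theta, r): Cartesian to Polar."""
    return math.atan2(y, x), math.hypot(x, y)

def to_cartesian(theta, r):
    """(theta, r) -> (x, y): Polar back to Cartesian."""
    return r * math.cos(theta), r * math.sin(theta)

# Round trip: a data point keeps its position in either system.
theta, r = to_polar(3.0, 4.0)
x, y = to_cartesian(theta, r)
print(round(r, 6))                     # 5.0
print(round(x, 6), round(y, 6))        # 3.0 4.0
```

Because the conversion is exact in both directions, the same data point can be rendered under either coordinate system without loss.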
In a Cartesian coordinate system, each point is defined by an ordered pair of two coordinates which are the distances of the point to the two perpendicular axes.
The Cartesian coordinate system provides the following axes:
- X axis—The horizontal axis.
- Y axis—The vertical axis. The data point in the Cartesian coordinate system is represented by an ordered pair of two coordinates (x, y).
A Polar coordinate system is used where each point on a plane is determined by a distance from the origin (called the radial coordinate or radius) and an angle from a fixed direction (the angular coordinate, polar angle, or azimuth).
The Polar coordinate system provides the following axes:
- Angular axis—The circular axis for the angular coordinate.
- Radial axis—The axis for the radial coordinate. The data point in the Polar coordinate system is represented by an ordered pair of two coordinates (ϴ, r).
The coordinate axis represents a single dimension of the coordinate system.
An axis consists of the following elements:
- Scale—Defines how the data is projected on the axis.
- Tick marks—Major and minor, represent the periodic graduations.
- Labels—The numerical or categorical indications accompanying the tick marks.
- Title—The title of the axis, usually a brief description of the dimension.
- Grid lines—Within the Graph, a grid of lines may appear to aid the visual alignment of data. You can enhance the grid by visually emphasizing the lines at regular or significant graduations. The emphasized lines are then called major grid lines and the remainder are minor grid lines.
For more information on styling the axes, refer to the article on formatting the axes of the Graph report item.
Scales define how the data is projected on the corresponding axis, that is, how the data from the user domain is converted to coordinates.
According to the type of the input data, the Graph supports the following scale types:
- NumericalScale—Represents a scale with a continuous domain of numbers, such as, integer numbers (Int16, Int32, Int64) or floating point numbers (Single, Double), and so on.
- LogarithmicScale—A numerical scale that applies a logarithmic transformation with a given base to the input data.
- DateTimeScale—Represents a scale with a continuous domain of DateTime values.
- CategoryScale—Represents an ordinal scale with a discrete domain such as names and categories.
A Graph Series represents a series of data points that constitute an individual measurement. This section lists the series types supported by the Graph report item.
Graph Series support Tooltips. The Tooltips are related to the data points of the Series. Therefore, in LineSeries, the DataPointStyle.Visible should be True and the Data Point Marker MarkerSize should be non-zero to have the Tooltips appear in the preview.
Bar charts display data points as bars to show comparisons between categories. One axis of the chart shows the specific categories being compared, and the other axis represents a discrete value.
You can arrange the Bar series in different ways to emphasize various aspects of the data:
- Clustered Bar Graphs—Bars are clustered in groups of two or more series.
- Stacked, Stacked 100% Bar Graphs—Show the bars divided into subparts to display a cumulative effect.
In a Cartesian coordinate system the bars have a rectangular shape and can be horizontal (Bar chart) or vertical (sometimes called Column chart).
In a Polar coordinate system, the bars appear in a wedge shape. If the series are arranged on the radial axis, that is, the wedges start from the radial axis and go by the angular axis, the result is a Pie chart. Otherwise, if the bars are arranged by the angular axis, the result is a Rose chart.
Line charts display a series of data points connected by straight or smooth line segments. Data points are represented by markers that can vary by shape (circle, square, diamond, cross, and so on) and can display a third variable or measure with its size (also known as Bubble charts).
When a Line series is projected on a Polar coordinate system, the result is also known as a Radar or Spider Line chart. Line series may be stacked to show a cumulative effect (stacked or stacked 100%).
Area charts are similar to the Line series. Area series display series of data points connected by straight or smooth line segments too but the area below the line is colored to indicate the volume.
When an Area series is projected on a Polar coordinate system, the result is also known as a Radar or Spider Area chart. Area series may be stacked to show a cumulative effect (stacked or stacked 100%).
Range Bar charts are similar to the Bar series. However, the bars do not start from the axis but at a given value. The Range Bar emphasizes the distance between two values or measures.
Range Area charts are similar to the Area series. However, the bottom point does not start from the axis but at a given value. The Range Area emphasizes the distance between two values/measures.
When the data appearing in a chart contains multiple variables, the chart may include a legend. The legend contains a list of the displayed chart variables and an example of their appearance. Legend content allows the user to identify the data from each variable in the Graph.
For more information on styling the legend, refer to the article on formatting the legend of the Graph report item.
The Graph report item can have one or more titles that provide a brief description of what the displayed data refers to.
For more information on styling the title, refer to the article on formatting the title of the Graph report item.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476592.66/warc/CC-MAIN-20240304232829-20240305022829-00244.warc.gz
|
CC-MAIN-2024-10
| 9,144
| 66
|
https://documentation.dnanexus.com/developer/ingesting-data/molecular-expression-assay-loader/example-input
|
code
|
An Apollo license is required to use the Molecular Expression Assay Loader on the DNAnexus Platform. Org approval may also be required. Contact DNAnexus Sales for more information.
For the molecular expression model, the core unit to be measured is the “feature”. To represent a molecular expression assay in a Dataset, there are three terms used to describe a feature: feature type, feature ID type, and feature value type. The feature type refers to the general category of what is being measured. The feature ID type refers to a standardized naming method for how an individual feature is identified. The feature value type refers to the method of measurement. For practical purposes, the following is a list of accepted combinations:
Feature ID Type
Feature Value Type
Either ENSG* or ENST*
Input Data Format
Software programs and data suppliers provide data in different formats. DNAnexus aims to support common formats to reduce any data transformation burden prior to ingestion. The following formats are currently supported for simplified ingestion.
N x M matrix of N features (rows) by M samples (columns), where each feature and sample is unique. A header row must be provided as part of this format, including a column for the feature ID. For example:
(N x M) x 3 table of N features with M samples (rows) and 3 columns with headers, where the first column is the “feature_id,” the second column is the “sample_id,” and the third column is the “value.” Each row should contain a unique combination of feature ID and sample ID. For example:
Manifest Format (Single Sample Files with a Manifest File)
Two sets of files; one manifest file which describes the respective data file ID and associated sample, and the set of individual data files. The manifest file should have two columns with headers, “file_id” and “sample_id.” Individual files should each have two columns with the headers “feature_id” and “value.” For example:
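The relationship between the matrix format and the (N x M) x 3 long format can be sketched with plain Python (the column names follow the descriptions above; the feature IDs and values are made up for illustration):

```python
import csv
import io

# Hypothetical matrix-format input: one row per feature, one column per sample.
matrix_csv = """feature_id,SAMPLE_1,SAMPLE_2
ENSG00000141510,12.5,0.0
ENSG00000155657,3.2,7.8
"""

reader = csv.DictReader(io.StringIO(matrix_csv))
long_rows = [
    (row["feature_id"], sample, row[sample])
    for row in reader
    for sample in reader.fieldnames[1:]   # every column after feature_id is a sample
]

# Each output row is a unique (feature_id, sample_id, value) triple,
# which is exactly what the (N x M) x 3 table format requires.
for feature_id, sample_id, value in long_rows:
    print(feature_id, sample_id, value)
```

A 2-feature x 2-sample matrix thus becomes a 4-row long table; the same reshaping applies in reverse when pivoting long-format data back into a matrix.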
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817442.65/warc/CC-MAIN-20240419172411-20240419202411-00822.warc.gz
|
CC-MAIN-2024-18
| 1,968
| 11
|
https://kotlindev.stream/its-not-just-google-banks-are-chasing-kotlin-developers-too-efinancialcareers/
|
code
|
If you can code proficiently in Kotlin then this week has brought you some very good news. – At its 2019 I/O conference, Google said it’s going to make Kotlin the preferred language for Android app developers. This can only mean that demand for Kotlin developers is about to rise, a lot.
For neophyte-technologists only just getting to grips with Python or Java, Kotlin is a statically-typed programming language from JetBrains, a software vendor with offices in Eastern Europe and Boston. Kotlin’s strength lies in the fact that it’s entirely interoperable with Java, whilst also being more concise and safer. Google’s initial espousal of Kotlin in 2017 came as a surprise; its decision two years later to relegate Java to second place seems even more startling.
Investment banks aren’t known for being at the front of the curve when it comes to software development. If you spend your career creating Java code in a bank, you might therefore be at risk of falling behind in the Kotlin revolution.
Kotlin is however, inveigling its way into some areas of the banking world. For example, Citi has Kotlin developers working on a regulatory reporting system and is also looking for Kotlin developers for its equity derivatives platform. JPMorgan is looking for Kotlin programmers to work on its React front-end development library, and Credit Suisse is using Kotlin to help develop its ‘Giraffe Risk Platform,’ which sits in an equities risk area known as, ‘The Zoo.’ And
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038072180.33/warc/CC-MAIN-20210413092418-20210413122418-00183.warc.gz
|
CC-MAIN-2021-17
| 1,493
| 4
|
https://forums.civfanatics.com/threads/religion.174795/
|
code
|
Seeing as there is only one religion left to get (Islam), I think we should decide whether or not we want a religion. Last time we tried to get one we came so close, but we still missed it. Should we try and get a religion? If we do we would get some benefits, but this is a bit of a risk. So... should we get Islam or remain Pagans?
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655883439.15/warc/CC-MAIN-20200703215640-20200704005640-00296.warc.gz
|
CC-MAIN-2020-29
| 329
| 1
|
https://github.com/dkahle
|
code
|
- Waco, Texas
430 contributions in the last year
Created an issue in dkahle/mpoly that received 2 comments
varorder internally calls reorder.mpoly(), but this is not the expected behavior. mpoly() sets intrinsic variable order, i.e. the ord…
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202496.35/warc/CC-MAIN-20190321051516-20190321073516-00304.warc.gz
|
CC-MAIN-2019-13
| 420
| 8
|
http://www.ign.com/boards/threads/nanosuit-in-crysis-3.452893437/
|
code
|
Can someone give me an accurate description of the new Nanosuit? For example, I've heard and have seen that the invisibility disappears once you fire an unsuppressed weapon. Can anyone elaborate on everything (specifics) the suit does? I have a feeling I don't know everything about it. Thanks.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592846.98/warc/CC-MAIN-20171217015850-20171217041850-00605.warc.gz
|
CC-MAIN-2017-51
| 294
| 1
|
http://www.verycomputer.com/30_8d3908d15b637641_1.htm
|
code
|
I recently configured a new Alpha Station 200 4/233 and all is up and running.
The DEC field rep installed a DEClaser 3500 onto the MMJ communication port
via a converter box hooked to the printer. The problem is I don't know what
the device name is on the system. I imagine it is probably a TT port, since I
see a TTA0:, but I was unable to
$ COPY file TTA0:
I got no reaction from the printer. And I can't find the port designation in
the manuals. I'm thinking I have a hardware problem with either the cable or
adaptor. I'd like to know where DEC cross-references a specific device with its
system device name? Which manual?
I assume that the converter box is going from MMJ (serial) to parallel on the
Donald G. Plugge
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860089.13/warc/CC-MAIN-20180618070542-20180618090542-00337.warc.gz
|
CC-MAIN-2018-26
| 722
| 12
|
https://escholarship.org/uc/item/0k46r57z
|
code
|
Trajectory Planning Optimization for maximizing the probability of locating a target inside a bound domain
This work presents a new trajectory planning formulation that aims to maximize the probability of finding a target inside a bound domain using a robot over a specified time interval. A preliminary algorithm is developed to detect stationary targets, which is further extended to detect moving targets. Values are assigned at every grid point in the domain based on its distance from the robot; each value represents the probability of not finding the target if it is at that location. A cost function is formulated that computes the likelihood of not finding the target for any given path provided the target is inside the domain.
This cost function is minimized with a set of bound constraints on inputs to obtain an optimal path to find the target. The algorithm incorporates an adjoint-based gradient method to link the input parameters to the cost function. The cost function is nonlinear, which makes it hard for most commercial off-the-shelf (COTS) optimization packages to solve.
For faster convergence, the recently developed low storage reduced Hessian box constraint optimization method (LRH-B) was used.
Results show that our algorithm outperforms other optimization algorithms, and demonstrate how this framework is beneficial in practical applications.
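A toy sketch of the kind of cost function described above — the probability of not finding a target along a path, averaged over a grid with a uniform prior — might look like the following. This is an illustrative stand-in, not the authors' implementation; the exponential detection model and the scale parameter are assumptions:

```python
import math

def miss_probability(path, grid, scale=1.0):
    """Average over all grid cells of the probability that a target at
    that cell is never detected from any point on the path.
    Detection probability is assumed to decay exponentially with distance."""
    total = 0.0
    for g in grid:
        p_miss = 1.0
        for p in path:
            d = math.dist(p, g)
            # Closer robot positions make detection likelier (assumption).
            p_miss *= 1.0 - math.exp(-d / scale)
        total += p_miss
    return total / len(grid)

grid = [(i, j) for i in range(3) for j in range(3)]
straight = [(0, 1), (1, 1), (2, 1)]   # sweeps the middle row
corner = [(0, 0), (0, 0), (0, 0)]     # sits in one corner the whole time
print(miss_probability(straight, grid) < miss_probability(corner, grid))  # True
```

Minimizing this quantity over candidate paths (subject to bound constraints on the inputs) is the optimization problem the abstract describes; the adjoint-based gradient method supplies the derivatives that a gradient optimizer such as LRH-B needs.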
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494986.94/warc/CC-MAIN-20230127132641-20230127162641-00218.warc.gz
|
CC-MAIN-2023-06
| 1,367
| 5
|
https://ondataengineering.net/technologies/streaming-analytics-manager/
|
code
|
A suite of open source web based tools to develop and operate stream analytics solutions and analyse the results, with pluggable support for the underlying streaming engine. Consists of Stream Builder (a web based GUI for building streaming data flows), Stream Operations (a web based management and operations tool for streaming applications) and Stream Insight (a bundling of Druid and Apache Superset to serve and analyse the results of streaming applications).
Stream Builder supports creation of streaming flows using a drag and drop GUI, with support for a range of sources (including Kafka and HDFS), processors (including joins, window/aggregate functions, normalisation/projection and PMML model execution), and sinks (including email, HDFS, HBase, Hive, JDBC, Druid, Cassandra, Kafka, OpenTSDB and Solr), as well as support for custom sources, processors, sinks and functions (including window functions), and the ability to automatically deploy and execute applications.
Stream Operations supports the management of multiple execution environments; the deployment, execution and management of applications within an environment; the capture of stream metrics via pluggable metrics storage (with support for Ambari and OpenTSDB); and web based dashboards to monitor applications and visualise key metrics.
Started by Hortonworks in May 2015, with an initial release as part of HDF 3.0 in June 2017.
Other Names SAM, Streamline Vendors Hortonworks Type Commercial Open Source Last Updated June 2017 - v0.5
Packages Apache Druid (Incubating), Apache Superset (incubating) Is packaged by Hortonworks DataFlow
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663006341.98/warc/CC-MAIN-20220527205437-20220527235437-00499.warc.gz
|
CC-MAIN-2022-21
| 1,616
| 3
|
https://web.codedfilm.com.ng/download/love-and-addiction-or-addicted-to-love-the-maury-show/LS1jV1F6NW54cVlIOA
|
code
|
Love and Addiction or Addicted to Love? | The Maury Show
Samantha is 11 months clean and the man who helped her turn her life around is her boyfriend Terry! They met while in recovery. However, since they’ve been together, Samantha has come into some money and now wonders if Terry is with her out of love or for the love of money!
Subscribe NOW to The Maury Show: http://bit.ly/MauryTV
Watch The Maury Show weekdays! Check your local listings for show times: http://bit.ly/WatchMaury
#Maury22 #MaurysOn
Watch full episodes of Maury on demand for free at: https://www.Nosey.com
Get more of The Maury Show:
Follow The Maury Show: https://twitter.com/TheMAURYShow
Like The Maury Show: https://www.facebook.com/mauryshow
The Maury Show on Instagram: http://instagram.com/OfficialMauryShow
SnapChat: @OfficialMaury
Visit The Maury Show website: http://www.mauryshow.com/
The Maury Show explores compelling relationships and family issues including: sexual infidelity, out-of-control teens, domestic violence, paternity testing and much more!
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141195069.35/warc/CC-MAIN-20201128040731-20201128070731-00318.warc.gz
|
CC-MAIN-2020-50
| 1,026
| 2
|
https://forum.shotcut.org/t/two-bugs-keyframe-timeline-contrast-filter/13580
|
code
|
(I do not know yet, if there is a work on it…)
If you have a keyframe in a part (e.g. Mask) and that part is part of a transition and you delete the transition, the length of the timeline of that keyframe is too short (length of the video part - length of the deleted transition).
If you have that filter on a video part (i.m.c. and e.g. from 4,5 to 25) and you delete the last point or move it, the video part faded to green, or to red.
It has happened to me often now…
And that “effect” is permanent, meaning that even after a restart of SC the fade to red or green is still there. Deleting that filter also has no effect (the normal coloring does not come back).
I also noticed flickering into red or green while sliding the value…
----------------- EDIT --------------
Further info on Contrast Filter with keyframe:
If you have that keyframe (i.m.c. 4,5) and you move a point in the keyframe timeline the video part fades into green.
----------- EDIT 2 -----------------
It seems, that this Contrast bug appears only with keyframes, also if the points aren’t moved (or such). I just had a “Fade into Green” by only setting two points (start and end of a video part). In the middle it got that “Fade Into Green”… After deleting the keyframe, the picture again looked normal.
BTW: That “Fading” into green/red appears also, but very short if you adjust a keyframe point…
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506480.7/warc/CC-MAIN-20230923094750-20230923124750-00382.warc.gz
|
CC-MAIN-2023-40
| 1,394
| 12
|
https://fluent.docs.pyansys.com/release/0.12/api/solver/tui/results/graphics_window/picture/index.html
|
code
|
Enter the hardcopy/save-picture options menu.
Set the DPI for EPS and Postscript files, specifies the resolution in dots per inch (DPI) instead of setting the width and height.
Use a white background when the picture is saved.
To set jpeg hardcopy quality.
Plot hardcopies in landscape or portrait orientation.
Display a preview image of a hardcopy.
Select from pre-defined resolution list.
Use the currently active window’s resolution for hardcopy (ignores the x-resolution and y-resolution in this case).
Set the width of raster-formatted images in pixels (0 implies current window size).
Set the height of raster-formatted images in pixels (0 implies current window size).
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494936.89/warc/CC-MAIN-20230127033656-20230127063656-00264.warc.gz
|
CC-MAIN-2023-06
| 677
| 10
|
https://download.parallels.com/ras/v19/docs/en_US/Parallels-RAS-19-Administrators-Guide/46777.htm
|
code
|
This page is used to configure template distribution to multiple Microsoft Hyper-V hosts. Note that this page will only appear if the source VM is a Microsoft Hyper-V machine. For the description of this feature and requirements, please see Multi-provider template distribution.
To configure template distribution:
- Select the Enable multi-provider template distribution option.
- In the Available list, select one or more providers and click Add (or Add all to add all available providers). Note that only providers of the same type and subtype as the source VM are displayed in this list.
- In the Number of providers for concurrent distribution field, specify the number of concurrent distribution operations. The template is distributed to target hosts using Hyper-V Live Migration, which first exports the virtual machine to a file and then moves it to the destination host. For each host in the Target list, a Live Migration operation must be performed. The number specified here dictates how many network copy operations should be started at the same time. The larger the number, the more network resources will be required. Note that virtual machine exports (the first step of Live Migration) are always done one VM at a time, so the number you specify here affects only the copy operations.
When done, click Next to proceed to the next wizard page.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511386.54/warc/CC-MAIN-20231004152134-20231004182134-00296.warc.gz
|
CC-MAIN-2023-40
| 1,357
| 6
|
https://talk.macpowerusers.com/t/mpeg-2-to-hevc-or-mp4/16883
|
code
|
HEVC is supposed to give the same quality as MP4 but with a much reduced file size.
I have some MPEG-2 streams which I have converted to both HEVC and MP4, but the quality degrades considerably with HEVC even though the file size is almost half. On the other hand, MP4 is much better quality with only a small reduction in file size.
I am using permute from setapp to convert.
Am I missing anything ?
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710684.84/warc/CC-MAIN-20221128235805-20221129025805-00179.warc.gz
|
CC-MAIN-2022-49
| 386
| 4
|
http://www.winasm.net/forum/index.php?showtopic=684&st=25
|
code
|
Thanks for clarifying the window message thing - doh!
I'm still left with the above problem, maybe I need to explain it better. I'm already checking which build is selected and then setting it to debug before building. Everything is working fine except when an /out parameter is specified as below:
- Project name is "Hello", no built files exist yet
- Release is selected and release /out string contains "test.exe"
- Debug /out string is empty
- My addin sets build type to debug, and builds "Hello.exe" as expected
- My addin sets build type back to release and returns 0 for IDM_MAKE_DEBUG
- MiniDBG reports "The system cannot find the file specified." as it is looking for "test.exe", the file specified for a release build.
Obviously I need MiniDBG to be looking at the debug build settings.
If a release build exists then MiniDBG will start ok but will use the release build that has no debug info.
Doesn't it make sense that MiniDBG should always retrieve the debug build settings??
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368711515185/warc/CC-MAIN-20130516133835-00058-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 990
| 12
|
https://forums.macrumors.com/threads/i-know-nothing-about-raid.111128/
|
code
|
I know virtually nothing about setting up a RAID array and need help on deciding whether to use RAID 0 or 1. I know that RAID 0 offers much better performance than RAID 1; however, does RAID 1 still offer any speed improvement over using just one drive? How reliable and safe is a RAID 0 setup? I currently have one drive, which is my boot drive — would I have to do a reformat and reinstall of that drive if I were to add a second and create a RAID 0 array? How important is using the same model of drives in a RAID array? Thanks in advance.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370494064.21/warc/CC-MAIN-20200329074745-20200329104745-00410.warc.gz
|
CC-MAIN-2020-16
| 532
| 1
|
https://debsdespatches.com/author/debscarey/page/5/
|
code
|
A Swedish word, although a concept that is shared across Scandinavia, it means not too much, not too little – just right. The wags amongst
Oh how hard they can be – especially when you’ve had a particularly enjoyable weekend – whether that be of the wild party or the
This week’s Friday feeling comes courtesy of Monty Python & James Bond for, on this day in 1969 & 1962, they each made their debut.
The Insecure Writers Support Group is a marvellous group set up by Alex Cavanagh. On the first Wednesday of every month, members post thoughts, fears
This writing prompt caught my eye as I thought it might be fun to play with. Until the real world rushed that is. One of
Some weekends are mapped out well in advance, others happen in a more reactive sense. One Saturday morning, while sitting comfortably in front of the
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999273.79/warc/CC-MAIN-20190620210153-20190620232153-00163.warc.gz
|
CC-MAIN-2019-26
| 830
| 6
|
http://stoneygrove.com/search.htm
|
code
|
Use the form below to search Stoney Grove for specific words or combinations of words. A weighted list of matching documents will be shown, with better matches first. Each list item is a link to a matching page.
You can use AND, OR and NOT to extend your search.
Created by Stoneygrove.com.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473738.92/warc/CC-MAIN-20240222093910-20240222123910-00289.warc.gz
|
CC-MAIN-2024-10
| 297
| 5
|
http://stackoverflow.com/questions/3721187/best-practice-for-scrum-done-concept-in-jira
|
code
|
I work at a small service based company where we are starting to implement Scrum practices, and we are also starting to use JIRA with greenhopper for issue tracking. Our team has defined "done" as:
- unit tested
- integration tested
- peer reviewed
- qa tested
- documentation updated
I'm trying to figure out whether this should be done using a separate issue for each item in the above list for each "task", or if some of these items should be implemented in the ticket workflow, or if simply lumping them together in one issue is the best approach.
I'm disinclined to make these subtasks of a task, as there is only one-level nesting of issues and I fear there is a better use for that capability.
I also am not too excited about modifying the workflow, as this approach has proved to be a burden for us in other systems.
If all of these items are part of the same ticket then that seems weird to me because the work is likely spread between multiple team members, and it'll be hard to make tasks that are under 16 hours that include all of those things.
I feel like I understand all of the issues, but as of yet I don't know what the best solution is.
Is there a best practice? Or some strong opinions?
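For anyone weighing the sub-task option, here is a hedged sketch of what generating one sub-task per checklist item could look like via Jira's REST API. The project key, parent key, and the issue-type name "Sub-task" are assumptions, and the sketch only builds the request payloads; it does not send them:

```python
import json

# Build JIRA REST API payloads (POST /rest/api/2/issue) that would create
# one sub-task per "done" item under a parent task. All keys are made up.
DONE_CHECKLIST = ["unit tested", "integration tested", "peer reviewed",
                  "qa tested", "documentation updated"]

def subtask_payloads(parent_key, project_key="PROJ"):
    for item in DONE_CHECKLIST:
        yield {
            "fields": {
                "project": {"key": project_key},
                "parent": {"key": parent_key},
                "summary": f"{parent_key}: {item}",
                "issuetype": {"name": "Sub-task"},
            }
        }

payloads = list(subtask_payloads("PROJ-42"))
print(json.dumps(payloads[0], indent=2))
print(f"{len(payloads)} sub-tasks would be created")
```

The obvious trade-off, as the question notes, is that this consumes the single level of issue nesting.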
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678699570/warc/CC-MAIN-20140313024459-00017-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 1,206
| 12
|
https://vi.stackexchange.com/questions/19176/how-to-make-vim-text-bold
|
code
|
In order to have my Vim text always bold, I found this config on Stack Overflow:
:hi MyGroup cterm=bold
:match MyGroup /./
But somehow it has messed up search highlighting: search matches are no longer highlighted. It shouldn't have any effect on that, since it only changes the cterm property, not ctermfg/ctermbg.
I thought adding the following would fix it, but it didn't:
:hi Search ctermbg=yellow ctermfg=black term=bold cterm=bold
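One plausible cause, based on Vim's documented match priorities (not stated in the original question): highlights created with `:match` are drawn above the 'hlsearch' highlighting, which has priority 0, so a match covering every character hides Search no matter what colors Search uses. A sketch of a workaround using `matchadd()`, which accepts an explicit priority (default 10); a negative priority lets Search win:

```vim
" Same bold-everything effect, but at a priority below 'hlsearch' (0),
" so search highlighting still shows through.
hi MyGroup cterm=bold
call matchadd('MyGroup', '.', -1)
```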
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475727.3/warc/CC-MAIN-20240302020802-20240302050802-00007.warc.gz
|
CC-MAIN-2024-10
| 441
| 6
|
https://github.com/Kruptein/planarally
|
code
|
A companion tool for when you travel into the planes.
PlanarAlly is a web tool that adds virtual battlemaps with various extras to your TTRPG/D&D toolbox.
Some key features are:
Self hosting: You can run this software wherever you like without having to rely on an external service
Offline support: This tool can be used in a completely offline set-up for when you play D&D in a dark dungeon.
Simple layers: Organize your scenes in layers for easier management.
Infinite canvas: When a limited workspace is still not enough!
Dynamic lighting: Increase your immersion by working with light and shadows.
Player vision: Limit vision to what your token(s) can see. Is your companion in a different room, no light for you!
Initiative tracker: Simple initiative tracker
Floors!: Look down upon lower floors when standing on a balcony!
This tool is provided free to use and is open source.
Typically only one person in your group needs to download and install PA; alternatively, you can use a publicly hosted version.
Releases of PlanarAlly can be found on the release page.
For more information on how to use/install PA, see the documentation.
User documentation can be found here.
If you wish to contribute to the docs, they are hosted in a different repository.
If you want to contribute to this project, you can do so in a couple of ways.
If you simply have feedback, or found a bug, go to the issues tab above. First see if your feedback/bug/issue already exists and if not create a new issue!
If you want to contribute to the actual codebase, you can read more about how to setup a development environment in the CONTRIBUTING document.
If you want to contribute some gold pieces, feel free to checkout my Patreon
Credits to Gogots for the background map used source
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662584398.89/warc/CC-MAIN-20220525085552-20220525115552-00256.warc.gz
|
CC-MAIN-2022-21
| 1,769
| 22
|
http://rivetweb.org/
|
code
|
Apache Tcl is an umbrella for Tcl-Apache integration efforts. These projects combine the power of the Apache web server with the capabilities of the mature, robust and flexible Tcl scripting language.
New to Tcl? Curious about its advantages? Have a look at Why Tcl?
Apache Rivet replaces most of the functionality once found in mod_dtcl and neowebscript. We have taken the things we like most about both, fixed things we didn't like, and combined efforts in order to come up with something we're very pleased with. Rivet runs with Apache 2.2 and 2.4. Currently, the prefork MPM is required to run Rivet 2.2.3 (the current official release). The development version also supports the worker MPM.
Websh is a rapid development environment for building powerful, fast, and reliable web applications. Websh is versatile and handles everything from HTML generation to data-base driven one-to-one page customization. It has been used for years commercially, and is now part of the Apache Tcl project. For Apache 1.3, 2.0 and 2.2.
Support and development for mod_tcl have been discontinued. We recommend that new real-world projects meant to run on Apache 2.x servers be developed with the currently active and supported sub-projects, Rivet and Websh.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471982298977.21/warc/CC-MAIN-20160823195818-00062-ip-10-153-172-175.ec2.internal.warc.gz
|
CC-MAIN-2016-36
| 1,243
| 5
|
http://programmers.stackexchange.com/questions/tagged/dependency-management?sort=unanswered&pagesize=15
|
code
|
In Java we have Ivy, Maven and others for handling library dependencies. For example, I tell Ivy that my program uses a framework JAR version 1.0, and Ivy makes sure that my program gets this JAR when ...
I have code reviewed a piece of Python code, but to me it looks really ugly, hacky and complex for something that can be achieved very easily. The code looks something similar to the following: ...
I have been using Entity Framework for a few years. I have flip-flopped between calling out to repositories in my business logic or using lazy loading to retrieve data as I work my way through the ...
What is the problem that git submodules solve well? When should I use them? Or rather what is their use case? The only use of submodules that I have seen 'in the wild' has been when used to share ...
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1413507444312.8/warc/CC-MAIN-20141017005724-00271-ip-10-16-133-185.ec2.internal.warc.gz
|
CC-MAIN-2014-42
| 802
| 4
|
https://forum.solidworks.com/thread/39884
|
code
|
To be Honest I am pretty upset finding this today. I had been working with my VAR and SW waiting on a solution. Its been a month and I am still waiting to hear from SW as to why and how I can get around this problem.
Problem: When I put VBA code, or add buttons to my worksheet then close out the Design Table and reopen it... all the code and code for the buttons are gone and nothing works from a VBA standapoint.
This was the question I had been waiting on an answer for, for weeks. I gave up and sat down with it yesterday, and found the problem yesterday afternoon. As it turns out, in Office 2007 MS released two separate worksheet styles: macro-enabled and standard worksheets. SolidWorks only allows users to use standard worksheets, which do not allow the use of VBA code within them. In past versions of Office this was not a problem, and I made some killer Design Table spreadsheets that really made things easy for the users, but now I can't, because we use Office 2007 and SW has limited everyone to these cheap, crappy standard Excel worksheets.
I got in today thinking I would insert my own macro-enabled worksheet... nope, the only option is *.xlsx, which is the standard worksheet. *.xlsm is the macro-enabled worksheet.
As I was writing this post I got a call from my VAR and SolidWorks. It turns out this is a known limitation, SPR# 548310. I hope they fix this soon, and within SW 2011, so when we upgrade to 2011 in June/July I will be able to use this functionality, since SW should have been supporting it since the release of Microsoft Office 2007.
If you use Design Tables or any type of VBA coding in your worksheets, please submit this to your VAR or SW and get this issue moved up the list. This missing functionality is very limiting, not only for myself but for anyone who tries to use it. It's a brick wall that not only I but others have not been able to get around.
Purchasing Driveworks or the SPN software is not an option, not only because of the cost for each user (100+ users here), but also because SW is Microsoft Compliant and this functionality should already exist.
Thanks for reading and I hope you submit this bug!
Scott Baugh, CSWP
CAD Administrator / Design Engineer
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141745780.85/warc/CC-MAIN-20201204223450-20201205013450-00177.warc.gz
|
CC-MAIN-2020-50
| 2,203
| 10
|
https://sysprogs.com/w/forums/topic/formatting-issue-during-typing/
|
code
|
Tagged: clang format
June 10, 2022 at 00:41 #32738
I have a CMake project with a clang-format setup (the .clang-format file is in the attachment).
There is an annoying issue. I am in the state shown in the first image (on imgur), with the cursor between curly braces; when I press Enter, the curly braces do not align properly (as shown in the second image). I think the formatting engine does not respect the clang-format file.
I also had a problem where I had to change the indent size in Options->Text Editor->C/C++ (VisualGDB)->Tabs to match the one in the clang-format file. When the settings in the clang-format file and in the Visual Studio settings differ, it does not work. I had to run Edit->Advanced->Format Document to apply the correct formatting settings. I think there is some issue with the formatting engine (during typing), so it does not respect the clang-format settings at all.
Thanks for help
I have version 5.6r6.
June 10, 2022 at 04:05 #32739
June 10, 2022 at 09:20 #32742
Unfortunately, it is hard to suggest anything specific based on the description you provided.
In order for us to provide any help with this, we need to be able to reproduce the problem on our side.
Please provide complete and detailed steps to reproduce the issue as described below:
- The steps should begin with launching Visual Studio. They should include every step necessary to create the project from scratch and reproduce the issue.
- Please make sure the steps do not involve any 3rd-party code as we will not be able to review it. If the problem only happens with a specific project, please make sure you can reproduce it on a clean project created from scratch.
- The steps should include uncropped screenshots of all wizard pages, VisualGDB Project Properties pages and any other GUI involved in reproducing the problem. This is critical for us to be able to reproduce the problem on our side.
You can read more about the best way to report VisualGDB issues in our problem reporting guidelines. If you do not wish to document the repro steps and save the screenshots, please consider recording a screen video instead and sending us a link to it.
June 13, 2022 at 02:50 #32747
Here are videos of all the steps. I used the Xbox Game Bar on Windows; it cannot record across multiple windows, so the recording is split into multiple videos. The clang-format file used is in the attachment.
There are some inconsistencies: the IntelliSense formatting does not use the clang-format rules, but Format Document works properly.
Thanks for the help.
June 13, 2022 at 07:19 #32749
I noticed that there was an error in the log:
[+0:18:05.330] Starting operation: Determining code formatting settings for C:\Users\otodu\Downloads\VisualGdbTest\VisualGdbTest.cpp
[+0:18:05.331] Error HRESULT E_FAIL has been returned from a call to a COM component. while trying load C:\Users\otodu\Downloads\VisualGdbTest\.clang-format
[+0:18:05.331] Stack trace:
[+0:18:05.331] Server stack trace:
[+0:18:05.331] at hs2.c(Byte b, Int32 a)
[+0:18:05.331] at sj3.d(String a)
[+0:18:05.331] at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object args, Object server, Object& outArgs)
[+0:18:05.331] at System.Runtime.Remoting.Messaging.StackBuilderSink.SyncProcessMessage(IMessage msg)
[+0:18:05.331] Exception rethrown at :
[+0:18:05.331] at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
[+0:18:05.331] at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
[+0:18:05.331] at sj3.d(String a)
[+0:18:05.331] at wn1.t4.a()
[+0:18:05.331] Operation completed: Determining code formatting settings for C:\Users\otodu\Downloads\VisualGdbTest\VisualGdbTest.cpp [0 msec]
Maybe there should be some warning that it cannot load the .clang-format file.
But even if I provide a correct clang-format file, IntelliSense does not apply the formatting settings from it:
[+0:18:05.329] Indentation size: 3, will insert spaces
For example, it uses indentation size 3 instead of 2.
June 13, 2022 at 14:13 #32753
Thanks for the detailed repro steps. It looks like your .clang-format file is corrupt or contains statements that are not supported by the Clang 6.0 used by VisualGDB:
Error HRESULT E_FAIL has been returned from a call to a COM component. while trying load C:\Users\otodu\Downloads\VisualGdbTest\.clang-format
Please try narrowing this down to a specific statement/line that prevents the file from being loaded. E.g. you can remove half of the statements and check if the file still loads. If yes, restore half of the previously removed ones, and try again.
If you can pinpoint a specific statement that is not loaded properly, please let us know and we will provide more details.
June 13, 2022 at 23:46 #32755
Yeah, in a previous message I wrote that I figured it out. But even with a correct clang-format file it does not work. The correct clang-format file is in the attachment.
June 14, 2022 at 09:13 #32759
Sorry about the confusion.
The indentation settings shown in the diagnostic log are taken from Tools->Options->Text Editor->C/C++ (VisualGDB)->Tabs. The clang-format file should normally override them, but if it’s not happening, you can always change them manually via the VS Options window.
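When a file fails to load as in the HRESULT error above, a quick stdlib-only check can show what a flat `.clang-format` actually specifies, e.g. whether its IndentWidth matches the editor's tab setting. This is a hedged sketch, not a YAML parser; nested sections (like `BraceWrapping:`) are skipped, and real validation should use clang-format itself:

```python
# Minimal sanity check for a flat .clang-format file (key: value per line).
# Not a full YAML parser -- nested config sections are ignored.
def read_flat_clang_format(text):
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()       # drop comments
        if ":" not in line or line.endswith(":"):  # skip nested section headers
            continue
        key, _, value = line.partition(":")
        settings[key.strip()] = value.strip()
    return settings

sample = """
BasedOnStyle: LLVM
IndentWidth: 2
UseTab: Never
"""
cfg = read_flat_clang_format(sample)
print(cfg["IndentWidth"])  # should match the editor's indent setting
```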
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00515.warc.gz
|
CC-MAIN-2022-33
| 5,208
| 43
|
https://community.cisco.com/t5/network-security/asdm-on-mac-os-x-10-7-x-lion/m-p/1867743
|
code
|
Ahh, okay folks: if you're on OS X, don't use ASDM-645-206.bin... 645 works fine so far though.
I'm getting tired of correcting myself. So I set the ASA to use the asdm-645.bin image instead of 645-206 (the latest as of posting), and my already-installed ASDM app on the MacBook works fine and doesn't ask me to update. But if you need to actually INSTALL ASDM (if you don't have it already installed), you can't seem to do that. I still get a 'page cannot be displayed' type message from Safari, Firefox and Chrome on my Mac when I try to open HTTPS://ASA_IP_ADDRESS, where I would usually install from.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945381.91/warc/CC-MAIN-20230326013652-20230326043652-00779.warc.gz
|
CC-MAIN-2023-14
| 599
| 2
|
https://therollup.co/%F0%9F%92%AC-revolutionizing-ticketing-this-decade-w-nft-access/
|
code
|
NFT Access is a project being built with the goal to bring the value of ticketing back to the community!
👩💻 Join The Rollup Discord Server!
DeFi Slate Fam,
The coolest part about Web3 is the potential for total industry reformation in certain ‘rent-seeking’ parts of our society.
There are so many rent-seekers. For example, I went to a Flume concert recently and the ticket was $60, but after Ticketmaster's fees it was $90!
A 50% increase! Talk about rent-seeking. Wtf man.
Today we introduce NFT Access, a project being built with the goal to bring the value of ticketing back to the community and create a true decentralized ticketing community where you can be a TRUE fan!
Amidst this crazy market down turn, we must keep building and growing the ecosystem.
Let's do it!
🙌 Together with:
It’s officially time to start testing the first version of Synonym. Break things, provide feedback, and of course, get rewarded.👉🏽 Here
Off-chain governance. On-chain execution. Optimistically execute governance transactions on-chain where your community can approve them. Check it out Here!
The Rollup Report 📰📊📈
⚠️ DISCLAIMER: Investing in cryptocurrency and DeFi platforms comes with inherent risks including technical risk, human error, platform failure and more. At certain points throughout this post, we might get a commission for promoting certain projects, if this is the case we will always make sure it is clear. We are strictly an educational content platform, nothing we offer is financial advice. We are not professionals or licensed advisors.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100308.37/warc/CC-MAIN-20231201215122-20231202005122-00311.warc.gz
|
CC-MAIN-2023-50
| 1,583
| 14
|
https://gitlab.nic.cz/knot/knot-resolver/-/issues/252
|
code
|
Test DNS64 module with weird answers
The presentation DNS64 at scale – Turning off IPv4 contains, on slide 14, queries that return intentionally weird answers. We should test that our DNS64 module reacts reasonably to them.
If there is something which is not RFC-compliant, let's fix it in the DNS64 module. If there is something worth fixing for non-compliant cases, it should probably be in workarounds module.
Please talk to me before introducing workarounds for non-compliant cases.
Also, this might require some new Deckard tests.
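For context, the core transformation the module performs can be illustrated with the RFC 6052 well-known prefix (a minimal sketch; the real resolver module handles configured prefixes, TTLs, and much more):

```python
import ipaddress

# DNS64 synthesis: embed an A record's IPv4 address into an AAAA address
# under the RFC 6052 well-known prefix 64:ff9b::/96.
def synthesize_aaaa(ipv4, prefix="64:ff9b::"):
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address(prefix)) + int(v4))

print(synthesize_aaaa("192.0.2.1"))  # 64:ff9b::c000:201
```

Testing "weird answers" then amounts to checking what gets synthesized (or refused) when the upstream A response is empty, bogus, or malformed.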
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038069267.22/warc/CC-MAIN-20210412210312-20210413000312-00248.warc.gz
|
CC-MAIN-2021-17
| 527
| 5
|
http://www.tuaw.com/2008/08/15/macbundlebox-15-apps-for-50-bucks/
|
code
|
MacBundleBox: 15 apps for 50 bucks
MacBundleBox is offering 15 Mac applications for $49.95: an 85 percent discount (compared to buying each app individually). The apps included are:
- Headline - A full-featured RSS/ATOM feed reader with an ultra-minimal UI.
- Mac Pilot 3 - A system optimization and customization utility.
- iConquer - A game not unlike Risk.
- Mahjong Forests - A traditional mahjong game.
- Shoebox Express - A solution for organizing all your photos by content.
- Caboodle - A way to collect random snippets of text or images on your machine.
- Narrator - A program that will read out stories in multiple voices.
- WriteRoom - A distraction-free word-processor, and possibly the most popular app in the bundle.
- Scribbles - A simple drawing utility.
- Money - An accounting app.
- Operation - A simple project management application.
- Aurora - An iTunes-integrated alarm clock.
- Compositor - A CoreGraphics-based image editor.
- Sofa Control - Allows you to control your applications remotely, using an Apple Remote.
MacBundleBox is available directly from their website.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936471203.9/warc/CC-MAIN-20150226074111-00060-ip-10-28-5-156.ec2.internal.warc.gz
|
CC-MAIN-2015-11
| 1,513
| 24
|
http://creativeilog.blogspot.com/2011/07/networking-on-os-x-lion.html?showComment=1312662820759
|
code
|
Fixing Mac OS X Lion (10.7) Network & Internet Sharing... Oh yeah!
When I first got Mac OS X Lion running on my MacBook, I went through every single "new feature", and everything worked smoothly. But when I was asked if I could share my ethernet connection through an ad-hoc network, that's where I found a problem.

Internet sharing for me is crucial because I have an ethernet connection and I'm too cheap and stingy to buy a router. Therefore, finding a way to have a wireless access point in my house with my laptop was magical. Previously, I had shared my internet connection through an ad-hoc network using Windows Vista (believe it or not), Windows 7, and Mac OS X Snow Leopard. They all worked flawlessly (though changing adapter settings in Windows, because it magically changed permissions by itself, was a pain), and everyone at my house was happy.

So a couple of days back, when I upgraded to Lion, I noticed that creating networks and internet sharing no longer worked. I was really sad. I honestly felt ripped off. I then began searching forums and exhaustively checked every single thread on the Apple Support Discussions board, but found no answer. I was desperate, and when I had lost all hope (and thought of waiting for a Lion update), I found an interesting quote on a forum somewhere on the web. And thanks to whoever wrote it, I revealed the key to make it work! And here's how:
How To: Fix Mac OS X Lion (10.7) Network & Internet Sharing
Step 1: Open your System Preferences, which should be located on your dock or on your Applications folder.
Step 2: Click your "Sharing" folder under "Internet & Wireless".
Step 3: Make sure "Share your connection from:" -> "Ethernet" is selected (it is the fourth choice in the drop-down box), and "To computers using:" -> "Wi-Fi".
Step 4: Click on the "Wi-Fi Options..." button.
Step 5: Notice how the Network Name uses your computer's name. This is because the network driver is pointing at the old Airport utility (on OS X Snow Leopard and older), and to be able to configure it to the new "Wi-Fi" scheme, you need to rename the network to something short like: "SM MBP".
Step 6: After you have changed the name of the network, click "OK", and you will be prompted with the System Preferences Change window:
Step 7: Type in your password, and click "OK". This will now point the network driver to the new Wi-Fi utility.
How to know it really worked: After you've done this, the arrow-pointing-up icon on your Wi-Fi status icon will show up. This also works with creating a network directly by clicking your Wi-Fi status icon -> "Create network". The prompt is similar to the "Wi-Fi Options" above. Rename the network to a shorter name, and voila! Your Wi-Fi statusbar icon should be replaced with the computer icon.
Update #1: Mac OS X Lion 10.7.1 does not fix the networking issues. You may use the steps on this guide to get it working again after updating.
Update #2: Networks created following the steps on this guide will be discoverable and usable by any Wi-Fi enabled device. I tested it with my Nintendo Wii console and it worked flawlessly. (It's weird since I had previously done this with Snow Leopard with no success at all, but it works now :')).
Update #3: Mac OS X Lion 10.7.2 is out and still no fix for networking issues. This guide is still useful.
Update #4: This was supposed to be my first Newsstand app publication, but for the time being, it'll be a public release through a public web server. Scroll to page #7 for the OS X Lion networking fix extended guide. Hope it helps! http://dl.dropbox.com/u/40383609/Creative_September_NetworkFixLion.pdf
Update #5: This will work on OS X Mountain Lion as well. :)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863972.16/warc/CC-MAIN-20180521082806-20180521102806-00330.warc.gz
|
CC-MAIN-2018-22
| 3,734
| 17
|
http://stackoverflow.com/questions/12423867/big-data-in-sharepoint-2010/12487932
|
code
|
I have a big amount of data (7 million rows) in CSV format that I have to import into a SharePoint project automatically, once a month. The total amount of data is not that big (100 kB). (A query on that data usually retrieves only one or a few rows.)
Because SharePoint does not really "like" big lists (list view thresholds etc.), I wonder what would be the best way to solve this bottleneck.
Just put the data into the list (I would not prefer this, because even deleting the old data before each import would surely take hours)
Save the data into an SQL database and write a "wrapper" to connect to SQL directly
These are my first thoughts on how to solve this. Are there any other (better) approaches?
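A minimal sketch of the second option, with sqlite3 standing in for the real SQL Server and made-up column names. The point is that the monthly reload is a cheap delete-plus-bulk-insert, and the typical lookup touches only a few rows:

```python
import csv, io, sqlite3

# Load the monthly CSV into a SQL table and query it directly,
# instead of pushing millions of rows into a SharePoint list.
csv_data = io.StringIO("id,name,value\n1,alpha,10\n2,beta,20\n3,gamma,30\n")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE monthly_import (id INTEGER, name TEXT, value INTEGER)")

reader = csv.reader(csv_data)
next(reader)  # skip the header row
conn.executemany("INSERT INTO monthly_import VALUES (?, ?, ?)", reader)

# The typical lookup only touches one or a few rows:
row = conn.execute("SELECT value FROM monthly_import WHERE name = ?", ("beta",)).fetchone()
print(row[0])  # 20
```

In SharePoint terms, the "wrapper" would then be something like a web part or a BCS external content type querying this table.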
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701159376.39/warc/CC-MAIN-20160205193919-00223-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 683
| 5
|
https://community.spiceworks.com/topic/298606-remote-port-monitor-on-2800-procurve
|
code
|
Hi, I have multiple 2800-series switches in my environment and am looking to do some traffic analysis (with Snort). Ideally, I will have one Snort box connected to one switch, and I want to be able to monitor traffic from hosts on a different switch. Is this possible? If it is, can someone point me in the right direction?
I believe the 2800 supports sFlow - that might not work with Snort, but it would be useful for monitoring.
Don't think it supports remote port mirroring - believe it's on the 3500 and above. Couldn't find anything useful online.
Thanks. I played with some test switches, trying to use the remote port mirroring concepts I've been reading about for Juniper and Cisco equipment, but had no success...
Thanks for the info.
Some of the higher end HP switches have remote monitoring port, just doesn't look like that one does. Check if you have any other switches you can use instead.
If you're using Snort, you probably want it to monitor the traffic through your firewall - can you connect your snort machine directly to the same switch that connects to your firewall and use a local mirror port?
All the 3Com-heritage enterprise switches in HP's portfolio can do this, but not the ProCurves to my knowledge. Use sFlow instead, as suggested by Nitroz.
The 3500, 5400 and higher can do it. Thanks for the info on the 3Com ones, Mechanical; wasn't sure about those.
Thanks everyone, I'll look into sFlow. Priscilla, I will take you up on your offer as well. Thanks!
Can this be marked as answered or do you have further questions?
Thanks to everyone. I'll do some testing on Monday, and will keep you posted.
I have confirmed through HP support that remote port mirroring is not supported on the 2800-series ProCurve.
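For reference, the local mirror-port setup suggested earlier in the thread looks roughly like this on the ProCurve CLI (syntax recalled from ProCurve manuals, not from this thread; verify against your model's firmware documentation):

```
mirror-port 24
interface 1-10 monitor
```

Here port 24 is a hypothetical port with the Snort box attached, and traffic on ports 1-10 is copied to it; this only works when the monitored hosts and the Snort box share the same switch.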
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743913.6/warc/CC-MAIN-20181117230600-20181118012600-00178.warc.gz
|
CC-MAIN-2018-47
| 1,713
| 13
|
http://blogs.msdn.com/b/opal/archive/2011/06/30/the-mystery-behind-sharepoint-2010-patching.aspx
|
code
|
[Update: People asked why the official guidance on the SharePoint Team Blog and the CAPES Blog differs from my post. The official guidance is to keep consistency with the patching methods of the past. In the meantime, we are also working hard internally to provide crisp, clear official guidance in the future, so stay tuned.]
There are always lots of questions around SharePoint patching. How do I patch SharePoint? Which patch should I use to get a certain build number? In which order should I apply those patches? What is a CU? What is an SP? What is the difference between them? All these questions are hard to answer, because to understand the answers you have to understand the patching process of SharePoint.
Let’s go through different scenarios of SharePoint patching. Hopefully this can explain the mystery. I will start with the easy one and then the complex one.
Apply a security/feature fix on SharePoint Server Farm
This is the simplest scenario in SharePoint patching: download the patch, apply it to every SharePoint server in the farm, and run PSConfig on all of them. That's it.
Wait, is this really that simple?
The answer is no, because there will be quite a few questions here:
Wdsrv-x-none.msp Not Applicable 44,946,432 17-Mar-2011 22:59
Use my table above to look up the component, and you will find that WDSRV does not belong to Office Web Apps but to SharePoint Server 2010; it is the Word Server (Word Automation) component. Someone mistakenly labeled it an Office Web Apps hotfix. So if you try to apply this update to a SharePoint Foundation + Office Web Apps installation, it will never work, because the WDSRV component does not exist there. If you have applied any SharePoint Server update with a higher version number than this patch, it cannot be applied either, since the fix is already included.
Is this helpful? Let’s go on to the second scenario: Cumulative updates.
Apply Cumulative Updates/Service Packs on SharePoint Server Farm
CU packages are released on a bi-monthly schedule. We want to make patching more predictable so customers can plan their patching window, and save them time by packaging everything together. In theory, CUs do not have the same testing quality as Service Packs, and the release schedule is always a little bit funny. Most CUs are scheduled for release at the end of the month (subject to change); for example, the April CU was released on April 26th. If a last-minute regression comes in at that time, the release will be delayed; you may have already noticed that the February CU was released on March 3rd, to ensure quality.
I used to post CU release information on the SharePoint Team Blog. Now Stefan Goßner's blog is my favorite place to catch up on all the information. For example, his April CU post is quite clear on the packages: http://blogs.technet.com/b/stefan_gossner/archive/2011/04/27/april-2011-cu-for-sharepoint-2007-and-2010-has-been-released-today.aspx
You can see he listed all the full server packages. Please remember: unless necessary, use only these server packages to apply a CU update. Internally we call them "Uber" updates; they include all language packs and all components. If you are not using these packages, you may be missing some component updates. How can you tell? Use the table above to compare with the file-information tables in the KB article, and you will find that all of the components are covered, with all MUI packages. Individual fixes do not include all of them.
Service Packs are different. Traditionally, the Service Pack downloads do not give you an all-in-one package like the Uber updates do; language packs are not in the main package, so you need to download and apply them separately.
All the above is just to identify which update should be installed. The best "how to patch" article is still the one on TechNet: http://technet.microsoft.com/en-us/library/ff806338.aspx – it covers how to monitor patching, how to reduce downtime, etc.
Really good article, this should be on MS site somewhere...it is a shame they don't publish this to make it easier for us to figure out.
why did you change your mind about Foundation?
This is to make sure I'm not giving different guidance than the official ones. My view in this post is purely from a technical perspective, while the official guidance needs to consider more of the consistency with the past. Two years ago, with the help of the SharePoint Customer Advisory Team (CAT), we created the guidance for applying cumulative updates for 2007. At that time, the WSS CU was not a part of the MOSS CU. So I published guidance instructing people to install the WSS CU first, then the MOSS CU. In fact the order does not really matter, but the purpose of the guidance was to reduce issues and simplify repro steps. Now for 2010, SharePoint Foundation files are included in the SharePoint Server installation files, service packs and Uber CU server packages (you can double-check this with my table 1). Technically, what I temporarily crossed out from the post is correct, but I agree to some point: the old guidance still serves its purpose – reducing issues and simplifying repro steps.
I just found that the update (MS11-074: Description of the security update for SharePoint Server 2010 (wosrv): September 13, 2011) cannot be installed. The link is below: support.microsoft.com/.../2566960
My environment is :
windows server 2008 sp2 x64
sharepoint server 2010 x64
office web app x64
Jie Li this article is a wonderful contribution to the working community. I'm always hungry for deep dives and the tools that will help me to be more decisive in my SharePoint environment. This article does the job for patching SP2010. THANK YOU!
Really good article, thank you.
Do you know exactly what the svrproof packages are? I know they must be to do with Languages but nothing more.
Can't Microsoft make this much, much easier by developing a "patching program" which reads all the obscure (and sometimes incorrect!) files and "just works"?
oh wait, then consultants would not have jobs.
If someone is thinking of making a career change or choosing a career as a data scientist, then this is the right time to take the decision and get enrolled in a data science course. The world is now moving towards artificial intelligence, and as all the corporates move towards it, it becomes necessary for them to hire data scientists to turn complex data into understandable language, so that artificial intelligence can work properly.
Many people have already enrolled themselves in this course. Most of the data science institutes in Hyderabad offer courses with a fixed duration; which course a person opts for depends on them. Some of the courses last up to 5 months and some last up to a year. And every data science institute offers a variety of courses for a person to choose from, so they can pick the one best suited to them that can enhance their career.
Where to go for a data science course
Well, in India, Bangalore is known as the IT hub, and it is well suited for a Data Science Course in Bangalore. A Data Science Course in Bangalore not only offers the coursework and projects; the institute will also be responsible for placement. And as it is an IT hub, there will be many more opportunities than in any other place. So, just do a bit of research about the courses in data science and get enrolled in the best data science institute, which can make someone's career.
Is it the right decision to make a career switch?
Of course, it is the right decision. There are many people who come from a mechanical engineering background, and right now they are working with Wipro, TCS, and many other companies. If the candidate has the potential and knowledge, then the company will hire the candidate. And data science is not an easy course, so only the best get the certificate. And if someone is good with mathematics and logic, then they can become a data scientist and give their career a new start. Visit the data science institutes in Bangalore to get admission now.
Coinected is the easiest platform for users around the world to buy and sell cryptocurrencies directly from each other, with multiple payment options — using local fiat currency and any payment method as the fiat payment settlement channel (bank transfer, PayPal and others).
The biggest limitation within the current landscape of OTC exchanges and P2P fiat-crypto gateways is the lack of liquidity as platforms are finding it extremely difficult to match buyers with sellers willing to trade the same token, using the same fiat currency and payment method.
Typical user experience on most OTC exchanges:
- Alice wants to buy ENG token using USD, with PayPal as a payment method.
- She browses through available sell offers and finds Bob’s offer.
- Unfortunately, Bob is only interested in selling ZRX token, as he does not hold ENG.
- It is currently not possible to convert between ZRX and ENG tokens, therefore the exchange does not match Alice and Bob, and the trade cannot be executed.
- Alice is not able to buy crypto and leaves disappointed. Bob, as a seller, has just lost a potential customer and some profit.
Enabling greater liquidity
To solve the aforementioned problem in user experience, Coinected.io has integrated Kyber’s liquidity protocol into our trading flow. Kyber’s token swap technology, which supports over 70 ERC20 tokens, allows an exponentially higher number of matches between buyers and sellers, hence enabling much better liquidity.
Seamless experience with Coinected.io:
- Alice is looking to buy ENG tokens.
- Bob is selling ZRX tokens only, but thanks to Kyber’s protocol we are able to easily match Alice and Bob.
- Bob transfers ZRX tokens to our secure escrow smart contract through Kyber (ZRX->ENG swap).
- Escrow smart contract receives ENG tokens, which are released to Alice’s account once the fiat payment is completed. Alice and Bob leave satisfied.
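The matching idea in the two flows above can be sketched as a toy check. This is purely illustrative (not Coinected's or Kyber's actual code), and the token pairs listed are hypothetical:

```python
# Toy illustration of why a swap step multiplies possible matches.
# Without swapping, a buyer and seller must hold the same token; with a
# swap route, any convertible pair can be matched. SWAP_PAIRS below is a
# hypothetical table of supported conversions.
SWAP_PAIRS = {("ZRX", "ENG"), ("ENG", "ZRX")}

def can_match(buyer_token: str, seller_token: str) -> bool:
    if buyer_token == seller_token:
        return True  # direct match, no swap needed
    # Match is still possible if the seller's token can be swapped
    # into the token the buyer wants.
    return (seller_token, buyer_token) in SWAP_PAIRS

print(can_match("ENG", "ENG"))  # True  (direct)
print(can_match("ENG", "ZRX"))  # True  (Alice and Bob, via swap)
print(can_match("ENG", "BTC"))  # False (no swap route)
```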
Learn more about how Coinected.io works HERE
We are very proud to announce this exciting integration as it is a very important step towards our vision of Coinected.io as a global, decentralized fiat-crypto gateway.
If you want to add a blog to your Symfony application, this guide is for you.
Hyvor Blogs is a simple and powerful blogging platform. In this tutorial, we will see how to create a blog with Hyvor Blogs and host it at /blog in your Symfony application. All contents of the blog will live inside a cache pool in your application. We will use webhooks for cache invalidation.
First, we will create a blog in Hyvor Blogs and configure the basic settings needed for self-hosting.
First, create a blog at the Hyvor Blogs Console. You will get a subdomain, which you need in the next step.
Go to Settings → Hosting
Update Hosting on/at to Self-hosting
Set the Self-hosting URL to the absolute URL of your Symfony application's blog route. For this tutorial, set it to https://mywebsite.com/blog. You can customize the /blog route later if needed.
Note: To test webhooks, your Symfony application URL should be publicly accessible. Therefore, if you are developing locally, we recommend using a tool like ngrok to expose your Symfony site temporarily on the internet.
Go to Settings → API Keys
Set a name (ex: “For Symfony Blog”)
Select Delivery API as the API
Create API Key
This API key will be needed in the next step.
Go to Settings → Webhooks and create a Webhook with the following values.
URL: Set this to “your website URL + /hyvorblogs/webhook”. Ex:
Select the following events
You will need the Webhook Secret in the next step.
Next, we will configure your Symfony application to “communicate” with Hyvor Blogs APIs/Webhooks to render your blog correctly.
First, install the hyvor/hyvor-blogs-symfony package (bundle) in your project using Composer.
1composer require hyvor/hyvor-blogs-symfony
Then, add the bundle to your bundle configuration file:

<?php

return [
    // other bundles...
    Hyvor\BlogsBundle\HyvorBlogsBundle::class => ['all' => true],
];
Then, add the following configuration to your configuration file:

hyvor_blogs:
    webhook_path: /hyvorblogs/webhook
    blogs:
        -
            subdomain: your-subdomain
            base_path: /blog
            delivery_api_key: '**********'
            webhook_secret: '**********'
            cache_pool: cache.app
webhook_path is the route in your application that will handle the Webhooks from Hyvor Blogs.
blogs key contains an array of blog configurations. It allows you to set up multiple blogs in the same Symfony application. The following configurations are supported:
subdomain is the subdomain of the blog, which you created in the first step. You can find it at Console → Settings → Hosting.
base_path is where your blog will be rendered in your Symfony application.
delivery_api_key is the Delivery API key that you created earlier at Console → Settings → API Keys.
webhook_secret is the secret key you got at Console → Settings → Webhooks.
cache_pool is the Symfony cache pool that will be used to save the blog content. By default, it uses the app cache pool.
Finally, clear the Symfony cache to refresh the service container.
php bin/console cache:clear
# or
php bin/console cache:clear --env=prod
That is everything. Now, try visiting the /blog path of your site. If everything is configured correctly, you should see your blog. Try updating the content of a post to make sure the content is updated on your blog, ensuring webhooks are working properly.
If you have any troubles, comment below or contact support.
If you would like to know how this works internally, see our self-hosting with web frameworks documentation.
Once you have set up everything, you can customize your blog theme and start writing. Our documentation has all the information you need.
If you would like to set up multiple blogs within the same application, you can add more configuration arrays to the blogs array in the config file.
Any problems with the Symfony package? Raise an issue or contribute on Github.
The 22nd European Agent Systems Summer School (EASSS 2021) will be organized by the Artificial Intelligence and Computer Science Lab of the University of Porto (LIACC), and held online, on July 19-23, 2021. EASSS 2021 is organized under the auspices of the European Association for Multi-Agent Systems (EURAMAS) and the Portuguese Association for Artificial Intelligence (APPIA), as an Advanced School on Artificial Intelligence.
The main goal of EASSS is to provide an exchange of knowledge among individuals and groups interested in various aspects of autonomous, agent-based and multi-agent systems research and practice. This dissemination is provided by formal state-of-the-art courses conducted by leading experts in the field and by informal meetings during the event. The school attracts both beginner and experienced researchers, encouraging cooperation between representatives of many branches of the Multi-Agent Systems research community.
The topics tackled within the field are becoming increasingly important due to the ubiquity of autonomous and distributed systems, such as the Internet of Things, intelligent buildings and smart cities, autonomous robots, agent-driven financial markets, etc. These kinds of applications demand innovative approaches in a number of research fields, including multi-agent systems engineering, coordination and cooperation, human-agent interaction, reasoning and planning, machine learning, game theory, modelling and simulation, and robotics. EASSS 2021 will be a privileged forum to learn the current trends in most of these fields, from both theoretical and practical perspectives. The courses will be taught by leading researchers in the field, and are aimed at Ph.D. students, advanced M.Sc. students, and other young researchers.
Since the initial edition of 1999 (Utrecht, Netherlands) the school has been held annually in different European locations. The previous edition took place in Maastricht, The Netherlands. The school usually attracts circa 50 students every year. A typical course is 4 hours long and provides a general introduction to the selected topic followed by an in-depth exposition of recent and relevant contributions.
To find your SketchUp Pro license serial number, please do the following based on your operating system.
Windows
This is a known issue that we're currently investigating. For now, we recommend closing the Components browser and leaving it closed as much as possible.
Situation: You have multiple partitions or hard drives set up on your Mac. After booting into a different partition or drive you're missing your extensions and plugins. When installing SketchUp on a Mac with multiple drives or partitions, we always recommend installing on your root volume. However this can cause problems when booting into another partition. To resolve this problem we suggest one of these two options:
You may see this error when interacting with 3D Warehouse or Extension Warehouse from inside SketchUp.To resolve this issue, please follow these steps:
This is a Mac-specific issue regarding shortcuts. To work around this issue, press Ctrl+F7. If you're using a Mac laptop, press Ctrl+Fn+F7. Pressing Ctrl+F7 will disable 'Full Keyboard Access'. If the shortcut doesn't work, go to System Preferences > Keyboard > Keyboard Shortcuts, and select 'Text boxes and lists only' under 'Full Keyboard Access'.
Sketchup For Mac Free Download
Get good, fast: Whoever asked for complicated CAD software?SketchUp is hands-down the most intuitive and easy-to-learn 3D drawing tool around. Think by drawing in 3D: We designed SketchUp to behave like an extension of your hand, so you can draw whatever you want, however you want. Create accurate, highly-detailed models: SketchUp Pro is accurate to a thousandth of an inch, so you can design. Moreover, not every old OSX version can run Sketchup 2020 Mac Full Version. For example, this latest version can only work on macOS Catalina, Mojave, and High Sierra systems. However, the features and tools provided will be comparable to its capabilities.
It's written rather poorly. It might be easier to understand if you refactor it a bit and make it tail recursive:
even_numbers( X , R ) :-
even_numbers( X , [] , T ) ,
reverse( T , R ).
even_numbers( [] , T , T ).
even_numbers( [X|Xs] , T , R ) :-
Z is X mod 2 ,
Z == 0 ,
even_numbers( Xs , [X|T] , R ).
even_numbers( [_|Xs] , T , R ) :-
even_numbers( Xs , T , R ).
The first predicate, even_numbers/2, is the public wrapper. It invokes the private helper, even_numbers/3, which hands back the list of even numbers in reverse order. It reverses that and tries to unify it with the result passed in the wrapper.
The helper method does this:
- if the source list is empty: unify the accumulator (T) with the result.
- if the source list is not empty, and the head of the list (X) is even,
- recurse down on the tail of the list (Xs), pushing X onto the accumulator.
- if the source list is not empty, and the head of the list (X) is odd,
- recurse down on the tail of the list (Xs) without modifying the accumulator.
But as noted by @André Paramés, work through it in the debugger and it will become clear what's going on.
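Not part of the original answer, but the same accumulate-then-reverse idea rendered in Python may make the control flow easier to follow (the function names here are mine):

```python
def even_numbers(xs):
    """Collect the even numbers, mirroring the Prolog accumulator pattern."""
    def helper(xs, acc):
        if not xs:                       # even_numbers([], T, T).
            return acc                   # base case: hand back the accumulator
        head, tail = xs[0], xs[1:]
        if head % 2 == 0:
            return helper(tail, [head] + acc)  # push even head onto accumulator
        return helper(tail, acc)         # skip odd heads
    # The accumulator is built in reverse order, so reverse it at the end.
    return list(reversed(helper(xs, [])))

print(even_numbers([1, 2, 3, 4, 5, 6]))  # [2, 4, 6]
```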
Ugh, useless app... don't download... waste of time
Now works on Android 2.3.3 and higher,
added button to share info about app in right corner of screen,
support for 12 more languages: italian, spanish, japanese, korean, finnish, swedish, hungarian, french, russian, dutch, german and finnish.
Helping you answer the question "Do I need to see the doctor?"
It’s time for a walk! Ferplast’s octagonal enclosure allows rabbits, guinea pigs and other small animals to get some fresh air in complete safety! Made of solid steel, it can be taken apart if you need space.
This enclosure for outdoor use is made up of eight separate elements, connected by clips. If you wish, you can therefore use fewer elements to make a square enclosure, for example. This enclosure also has a lockable door. Once folded, it takes up very little space and is easy to transport thanks to its plastic handles.
Features of the Ferplast octagonal enclosure:
small animal enclosure
ideal for outdoor use: perfect as an additional enclosure for rabbits and guinea pigs
easy assembly: 8 separate metal elements that you can combine using clips
several assembly possibilities: depending on the needs and the available space, you can use the 8 elements or less
integrated door: plastic lock
removable plastic handles: to be able to move it easily
foldable: for practical and space-saving storage
clips, lock, handles: plastic
I am making a website and am having a very hard time with the ActionScript and external loading involved. I have managed to auto-load the background externally, but the problem is that once the bgload movie (which is loading externally in main.swf) is loaded, it goes back to the start.
Could anybody please let me know the code to make the "bgload" movie stop right there after it's played once…
this is very urgent
Any help would be highly appreciated
The concurrent data structure (sometimes also called a shared data structure). libcds - a C++ library of lock-free containers and safe memory reclamation schemas; Synchrobench - C/C++ and... "Lock-free data structures", Andrei Alexandrescu, December 17, 2007, after Generic<Programming> has skipped one instance (it's quite naïve, I know, to think that grad... Lock-free programming is a challenge, not just because of the complexity of the task itself, but because of how difficult it can be to penetrate the subject in the... In general, there are basically three approaches to cope with concurrency issues of synchronization: (a) do not acquire locks, and use a data structure that does not require locking the... What is the best book to learn data structures using Java? Should I learn data structures in C++ or Java? Is there a very fast and highly concurrent non-blocking key-value embedded Java...
Immutability, part 4: building lock-free data structures posted on monday, july 29, 2013 as i mentioned last time, the best way to create simple thread-safe lock-free data structures is. Readmemd lock-free a library for lock-free data structures currently the library contains a doubly-linked list and a single-reader single-writer queue. Thread summaries for lock-free data structures sebastian wol fraunhofer itwm, kaiserslautern, germany technische universit at kaiserslautern, germany. L31_lockfree 4 outline problems with locking definition of lock-free programming examples of lock-free programming linux os uses of lock-free data structures.
How can i write a lock free structure multithreading cliff click has dome some major research on lock free data structures by utilizing finite state machines and. Lock-free data structures are not unknown in the computer music world for example the venerable single-reader single-writer atomic ring buffer fifo is found in many systems including. Lock-free data structures are based on two things – atomic operations and methods of memory access regulation in this article i will refer to atomicity.
The concurrent data structures (cds) library is a collection of concurrent containers that don't require external (manual) synchronization for shared access, and safe memory reclamation. A non-blocking algorithm is lock-free if there is guaranteed as the preempted thread may be the one holding the lock a lock-free data structure can be used to.
We introduce numerous tools that support the development of efficient lock-free data structures, and especially trees comments: phd thesis, univ toronto (2017. Transactional memory: architectural support for lock-free data structures maurice herlihy digital equipment corporation cambridge research laboratory. Practical lock-free data structures introduction through careful design and implementation it's possible to build data structures that are safe for concurrent use. Rpg iv free-format data area data structures here is the equivalent code in free-format notice that the lock parameter must be used on the in op-code in order.
However, lock-free data structures still don’t scale indefinitely, because any use of an atomic memory operation still involves every core in the system a sync-free data structure some.
Introduction to lock-free algorithms concurrent data structures a lock provides some form of exclusion guarantees. Some data structures can only be implemented in a lock-free manner, if they are used under certain restrictions the relevant aspects for the implementation. Go have useful concurrent model with goroutines it is easy to create concurrent algorithm, but there are no cocurrent lock-free data structures. A method for implementing lock-free shared data structures (extended abstract) greg barnes max-planck-institut fiir informatik im stadtwald w 6600 saarbriicken, germany. Lock-free data structures use hardware locks instead of os locks to implement atomic read-modify-write (rmw) operations therefore it is useful to interpret lock-free. Lock-free data-structure iterators⋆ erez petrank and shahar timnat ferez,[email protected] dept of computer science, technion - israel institute of. Because atomic instructions can be used to change shared data without the need to acquire and release a lock, they can allow greater levels of parallelism however, because they are low.
It's been a while now (I hoped it would 'go away' on its own) but for some reason the Google Analytics stats on all dashboards (including the Network Admin dashboard) show the circling loading image indefinitely. No stats appear any more.
This has started happening on multiple networks on different VPS's. What they have in common is:
1. running on Nginx
2. WordPress and plugins are up to date
3. using same Gmail account to manage the "Network-wide Tracking Codes" and several site tracking codes.
Is there a known conflict with Nginx? I do not see any related errors in the log files.
Could there be some setting in my Gmail account or in the individual tracking codes that is blocking something? I tried 'Log out from Google account' and authorizing with an access code again. I also tried to set up a Google API project but has the same result...
What am I missing? How to debug? Thanks for any suggestions :)
We've correctly set the TIME_AGENT settings on a device and actually it seems that during the synchronization device gets the middleware time(date and time).. But, I am just wondering about time zones...
1) Initially I thought the middleware is sending the GMT/UTC +0 time to the client and the device then perform the necessary calculations to set the time in its time zone... but after that I saw the received time is not GMT/UTC +0, but -1.. strange enough..
2) One can set the time zone from the mobile OS settings; however, MI has a Time Zone setting on its Settings page. Do you know which one takes precedence? Also, the mobile MI edition differs from the Win32 MI edition in that the time zones for mobile are only three-letter abbreviations, and there is no CET (Central European Time). Do you know if MET (Middle European Time?!) should be used, or something different?
Any clarifications or remarks regarding the TimeAgent are more than welcome, points guaranteed 😉
Thanks in advance,
If the Brother DCP L2540DW printer shows errors like 'printer not connected', 'printer offline', or 'printer not connected to Wi-Fi', proceed with the solutions below to quickly resolve the Brother DCP L2540DW troubleshooting problem.
If the Brother DCP L2540DW printer is not printing, there are solutions available for the issue whether the printer is connected over a wired or a wireless network.
The following are the solutions for the Brother 'printer not printing' issue when it is connected to a wired network.
The steps outlined for solving the 'printer not printing' issue for a Brother printer connected over a wireless network to a Windows computer can be applied for a Mac computer as well.
The Brother DCP L2540DW printer displays the 'Out of memory' message when the document being processed by the printer for a print function has consumed all of the printer's memory. In such a case, the print function cannot be executed. However, this is a common issue and can be easily resolved by following the guidelines mentioned below. Follow these steps to clear the error message when the copy operation is in progress.
Follow these guidelines to clear paper jam on a Brother DCP L2540 printer.
If the printing operation works on the Brother DCP L2540DW printer, but scanning doesn't, you can follow these guidelines to solve the scanning issue.
brotherprinters.co: We focus on offering technical assistance to printer users and are not in any way associated with the manufacturer. All the information provided on our site is solely intended for service purposes and not to promote any particular brand or company. The logos, images, and trademarks on the site are for reference purposes only. Any action that you take by referring to the guidelines on this site is at your own risk. Our services are payment-based and are rendered only upon a request placed by the users.
.htaccess – do not cache under maintenance
Here is a cool piece of code that lets you control whether your browser should cache or not, using your hosted .htaccess file.
The code tells your web server not to cache content, which can be very helpful with tangible PHP systems, for example, or during maintenance. After maintenance, I advise you to comment out/remove the code again.
The code should be applied to the .htaccess file – wherever you want.
<IfModule mod_rewrite.c>
    RewriteEngine On
    RewriteRule (.*) - [L,E=Cache-Control:no-cache]
</IfModule>
Hope you can use it.
Hi! I saw a couple of videos edited with DashWare where someone put in telemetry from a GoPro and some heart rate data. How did they get that? Please can you explain the process to me? Thanks
You can use Heart Rate data from certain Polar devices. You can learn more here!
OK! But do you know if I can extract the data from the Polar and add it to my video?
This course covers the concept of Object-Oriented Programming in Kotlin, which is a method of designing and implementing software. It simplifies software development and maintenance by providing concepts such as object, class, inheritance, polymorphism, abstraction, and encapsulation. This course will explore those.
This course is ideal for anyone who wants to learn how to use Kotlin for developing applications on Android.
This content will take you from a beginner to a proficient user of Kotlin and so no prior experience with the programming language is required. It would, however, be beneficial to have some development experience in general.
Hello, my friends. I'm so glad to see you back. So, now we're going to learn about Function Overriding. If you define a function in a subclass that's already defined in superclass with the same name, same parameter, and same return type. Well, yes, you can do it. It's actually known as function overriding. So, in this case, the function in the superclass is called overridden function, and the function in the subclass is called the overriding function. In fact, I just want to do a simple example here to help understand it just a little bit better.
So, we've got two classes here. A superclass, Vehicle and a subclass, Car. The Car class inherits from the Vehicle class. Both of them have a common function named stop(). The Car class gives its characteristic to the stop function. So, it overrides his stop function. It prints "Car has stopped." instead of "Vehicle has stopped.". You see how that works. We're going to do a few more examples of, so that we can really get this function of overriding. So, first of all, we're going to create a new package and we're going to do the override operations within this package. That's okay if we don't create this package but again, I don't want it to interfere with the classes here. Since, I will be creating similar classes called Vehicle and Car. So, just to make that point, I just want to create a new package Car, alright?
So, here I'll just right click on the project package folder with the mouse and choose the 'New Package' option. So, let's give this new package a name. I'm going to just write override as the name of the package, and now we can create a class. Alright, click on the 'override' package, choose 'New Kotlin Class/File' as my option, and I'll set the name of this class. Make it Vehicle. So, now Vehicle is our superclass. We're going to declare three public Unit functions in the Vehicle class. The first function will be start(). It prints the "Vehicle has started." message by using the println function. Second, there's going to be an accelerate function. So, this function will take one parameter. The name of the parameter will be speed, and it will be of the Int type. We print that the vehicle accelerates according to the given speed. We use this speed variable in the println function, and then we'll make the last function, stop(). Now, in the stop function, we will print "Vehicle has stopped." as a message, just by using the println function. So, let's create a new subclass. So, right click on the 'override' package, select 'New Kotlin Class/File', and specify the class name Car. Now, in order to implement function overriding, we have to add the colon sign after subclass Car. Also, we've got to take this Vehicle superclass and make it open. So, our subclass Car inherits from superclass Vehicle, and this means that subclass Car inherits all functions from superclass Vehicle. The Car class is empty now. So, the question is, well, why do we even need function overriding? Well, let's create a Kotlin file in the same package. So, alright, OverrideTest as the name. And let's create the main function. In the main function, we will create a Vehicle object. I write var vehicle = Vehicle().
First, we call the start function from the Vehicle class, then we call the accelerate function by passing a value of 80 as the speed parameter, and last we call the stop function. So, what happens when we run the code? Well, you can see the results in the console. The vehicle has started, the vehicle accelerates to a speed of 80, and then the vehicle has stopped. Alright, you with me? So, let's create a Car object so that we can test this out. So, we call the start function, then call the accelerate function by passing the value 100 as the parameter, and call the stop function. Now, before running any code, let's make a partition here. This way we'll be able to see the two sections separately in the console. So, what happens when you run the code? You see the same results: vehicle has started, vehicle accelerates to a speed of 100, and vehicle has stopped. But do you see the problem? We want the properties of Car, not Vehicle, in the second part. So, we need to override the functions of the superclass Vehicle in the subclass Car. You follow? Let me show you. Let's override these functions. Open the Car class, and we'll copy the functions of the superclass Vehicle. Now, let's paste the copied methods into the Car class. So, as you can see, Android Studio says: hey, no, no, wait, wait, stop, stop. Because the Car class inherits from the Vehicle class, it can access these methods directly. Therefore, it considers it a mistake to use these methods here with the same name and the same parameters.
And it's going to tell us that if we still want to use these methods here, we must override them. We will do the override process as follows: we actually need to write override at the beginning of each method, before the fun statement here, my friends. Alright, so now let's do this for all three methods. But wait, there are still warnings, because methods are also final by default. Now, to resolve this error, all we need to do is make each method open in the superclass, that is, the Vehicle class. So, why don't we go ahead and do that right now. And as you can see, when we make the methods open in the Vehicle class, the warnings disappear. Alright, my friends. So, as a result, we override the methods of the Vehicle class in the Car class. So, let's make the changes for implementing the properties of the Car class. We'll print "Car has started." in the start function instead of Vehicle. We print that the car accelerates according to the given speed, instead of the vehicle. And we print "Car has stopped." in the stop function, again instead of Vehicle. So, these three functions have the same name, same parameters, and same return type as the functions in the Vehicle class. So, now let's open up the OverrideTest file, and we can run the code. And what do you see? Two different messages coming from the functions of the classes. In the first part, we display the messages from the superclass Vehicle, and in the second part, since we override the functions of the superclass Vehicle in the class Car, "Car has started.", the car accelerating at speed 100, and "Car has stopped." are the messages that are now displayed. So, that's function overriding, and I think it's pretty clear. Go back and review anything if you need to.
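Putting the whole walkthrough together, here is a sketch of the Vehicle superclass, the overriding Car subclass, and what the OverrideTest main function does (the file layout is collapsed into one snippet, and the exact accelerate message wording is an approximation of what the transcript describes):

```kotlin
open class Vehicle {
    open fun start() = println("Vehicle has started.")
    open fun accelerate(speed: Int) = println("Vehicle accelerates to speed $speed.")
    open fun stop() = println("Vehicle has stopped.")
}

class Car : Vehicle() {
    // Same names, same parameters, same return types as in Vehicle.
    override fun start() = println("Car has started.")
    override fun accelerate(speed: Int) = println("Car accelerates to speed $speed.")
    override fun stop() = println("Car has stopped.")
}

// What the OverrideTest file's main function does.
fun main() {
    val vehicle = Vehicle()
    vehicle.start()
    vehicle.accelerate(80)
    vehicle.stop()

    println("--------")  // the "partition" between the two sections

    val car = Car()
    car.start()
    car.accelerate(100)
    car.stop()
}
```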
But before we close out this section here, I do want to mention the keyword super. So, follow my thinking: what do you think we would do if we wanted to create a function in the Car class that does the same thing as the functions in the Vehicle class, but with a different name? I might have given you too big of a hint or a clue, but yes, we can use the keyword super. I just want to show you a quick example. So, within the Car class, I'll create three functions. Let the first function be called superStart(), the second function be called superAccelerate(), and the third function be called superStop(). Alright. So, now let these three functions do the same operations as the three functions in the Vehicle class. For this, inside the first function, after typing the keyword super and putting a dot after it, I can display the functions in the Vehicle class. The keyword super here represents the superclass that we inherited from, the Vehicle class. So, here I choose the start function. Similarly, I'll type super.accelerate(). Finally, I'll write super.stop(). And now, inside the main method, let's call these functions. So, here I'm typing car.superStart() first. Now, I'm typing car.superAccelerate(). Then finally, I'll write car.superStop(). Now, let's also copy and paste this line of code here. Now, let's run and test our code. And there you see it: although the function names that we created in the Car class are different, we have used the super keyword to perform the same operations as the functions of the Vehicle class. Wow, that was packed. So, my friends, thanks for listening. That's how you do an override operation and how you get to use that super keyword. It's a super keyword. Alright, my friends, we're going to take a break here, but I want to see you in the next video.
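The super-keyword trick described above can be sketched like this (classes repeated here so the snippet is self-contained; superAccelerate takes a speed parameter so it can forward it to the superclass version):

```kotlin
open class Vehicle {
    open fun start() = println("Vehicle has started.")
    open fun accelerate(speed: Int) = println("Vehicle accelerates to speed $speed.")
    open fun stop() = println("Vehicle has stopped.")
}

class Car : Vehicle() {
    override fun start() = println("Car has started.")
    override fun accelerate(speed: Int) = println("Car accelerates to speed $speed.")
    override fun stop() = println("Car has stopped.")

    // Differently named functions that delegate to the superclass
    // implementations via the 'super' keyword.
    fun superStart() = super.start()
    fun superAccelerate(speed: Int) = super.accelerate(speed)
    fun superStop() = super.stop()
}

fun main() {
    val car = Car()
    car.superStart()          // Vehicle has started.
    car.superAccelerate(100)  // Vehicle accelerates to speed 100.
    car.superStop()           // Vehicle has stopped.
}
```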
Mehmet graduated from the Electrical & Electronics Engineering Department of the Turkish Military Academy in 2014 and then worked in the Turkish Armed Forces for four years. Later, he decided to become an instructor to share what he knew about programming with his students. He’s currently an Android instructor, is married, and has a daughter.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510208.72/warc/CC-MAIN-20230926111439-20230926141439-00026.warc.gz
|
CC-MAIN-2023-40
| 9,186
| 10
|
https://forum.beeminder.com/t/can-an-app-be-allowed-after-a-number-of-time-spent-on-another-app/4347
|
code
|
So I have an idea: allow YouTube only after 35 hours of reading (35 = 5x7). Is it possible to track the time spent on one app so that, during that time, opening even a single instance of YouTube gets me bitten, but after that I'm free to watch as much as I want?
Are you using ios or Android?
Android and Windows
You can setup RescueTime http://bit.ly/1ERrSK8 to block websites (“FocusTime session” feature) based on goals/alerts. So if you spent X minutes on category Y, then an alert is triggered and distracting websites (like YouTube) are blocked.
This also works in the opposite direction: minutes spent on a (good) category B will stop the block of distracting sites.
The point is, I want to classify it as neutral, not distracting. In fact, I set all categories as neutral, and only leave some of them as productive or very productive. I know that visiting Reddit or YouTube can give me a lot of ideas, as long as it's not too frequent.
There is a strict mode for Focus Time that also blocks neutral sites.
That's still not optimal, because I actually classify many other sites as neutral, and it can only block websites, while YouTube can be an app on my phone. Ideally, it would be better to punish me when there is data on a whole category. There is also VLC in this category.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663012542.85/warc/CC-MAIN-20220528031224-20220528061224-00235.warc.gz
|
CC-MAIN-2022-21
| 1,270
| 8
|
https://nuphy.com/collections/keycaps/products/nsa-shine-through-abs-keycaps
|
code
|
Best Mechanical Keyboard till date!
I love the Shine-through keycaps I bought additionally; I can see the keys better, and the various colourful animated lights can shine through. However, I think some of the keys could be more "shine through", e.g. the number keys on the top row: the symbols below them are not as bright as I would like, e.g. the media play icons.
Nice caps - the ones to get rather than 3rd party
These should have come standard with the keyboard; the others could have been optional. There's little point in having a keyboard with north-facing LEDs that don't shine through. It's part of the identity crisis of the keyboard: it tries to be 'gamer' and 'work'. It's a great work keyboard with silly lights, and I really like it now. You can see the keys in low light!
beautiful backlight effect
The shine-through effect and the color scheme of the keycaps are really beautiful.
The rgb backlight of this keyboard comes so beautiful into effect.
It's so nice and cozy to use these keycaps on the NuPhy Air keyboard on a rainy autumn or a dark, cold winter day, sitting at your computer doing your stuff.
The only downside of the keycaps is that I tend to get greasy fingerprints on the keyboard very fast, and even if you clean them, they sometimes still feel a little bit sticky.
My personal hope would be shine-through keycaps with the material quality of PBT keycaps.
Excellent Product A+
I wish the shine-through caps came standard, but I understand it makes sense to make them a separate purchase from a business perspective. Very happy with the product. The only feedback I have is that they need to change the space bar to have a shine-through part on it. I found one elsewhere, but it's something to think about. Utilize the backlighting as much as possible.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233509023.57/warc/CC-MAIN-20230925151539-20230925181539-00478.warc.gz
|
CC-MAIN-2023-40
| 1,766
| 12
|
https://community.atlassian.com/t5/Jira-Software-questions/How-can-I-control-Jira-quot-convert-to-issue-quot-operation/qaq-p/696430
|
code
|
Today a user can convert a sub-task issue to any type (for example from sub-task to Bug), which is OK.
My problem starts when the issue moves to an "illegal" state on the new issue type.
An example of what happened to me:
The same can also happen when I use Jira's "Move" option and change the issue type.
How can I force the issue to be created in the New status when the issue type is changed (and not keep its current status)? Is it configurable via Jira, or do I need an add-on for it?
I'll be happy to get suggestions.
If "Block" is in the target workflow, then it's not an "illegal" status, it's valid.
You have two options
If you have ScriptRunner or a bit of coding time, you could protect the transition with a condition like "only if issue type used to be a sub-task type"
Thanks for your fast answer; I want to clarify my problem.
Block is a valid status in the Story workflow, but not from all states.
In a sub-task, a user can move the issue from New to Block, while in a Story there is no transition from New to Block. But Jira's "convert to issue" option causes the sub-task to become a Story issue whose status is Block (its previous status was New), and this brings me to the "broken workflow" you mention.
How can I control / protect the transition? Doesn't it pass through my workflow? (Or maybe it does, and I can add some validation in my workflow?)
I understood the situation fine, but I think I missed a point in the beginning of my answer. You can not do anything about the conversion process. There is nothing wrong with it as far as the data is concerned - an issue is in "block", the status is there in the new workflow, so changing the type and keeping the status is perfectly valid and the right thing to do.
The "change issue type" thing is not a transition, it is a structural change to an issue's settings, and has nothing to do with the workflow.
My suggestion was that you allow people to move blocked issues to a new "correct" status, and protect the transition so that it's only used at the right time.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249414450.79/warc/CC-MAIN-20190223001001-20190223023001-00240.warc.gz
|
CC-MAIN-2019-09
| 2,529
| 21
|
http://ttt-rostov.ru/ash-eliza-still-dating-1712.html
|
code
|
Ash eliza still dating dating scenez
Internationally regarded for his mix work for One Direction, Dido, The Corrs, Ellie Goulding, Kylie Minogue and many of The X Factor winners (including last year’s winners Rak Su) , Howes’ discography reads like a who’s who of the British pop music.
Ash eliza still dating
Which is of course not helpful for a young woman like Eliza, caught in a unique situation and worried about the well-being of both herself and the children she's carrying.
Fortunately she has her wife Katylin to help keep her grounded, reminding her that she really hasn't changed.
(December might be a bit slow for work, we'll have to see.) Either way, yuri. I'm cheating though, since this drawing has been done since before work got crazy.
Mixer/ Writer/ Musician Clients include: One Direction, Dido, Ellie Goulding, Kylie, Little Mix Blessed with an ability to capture mixes which help to turn great songs into radio hits, Ash Howes is a rare talent and a proven hitmaker.
Who would know her better after all, especially considering how intimately they've come to know each other's bodies. The important part is that Kat is the pillar supporting Eliza (regardless of her shorter stature), just as Eliza in return helps support Kat during her tough times.
It can feel a bit one-sided at times, due to the constant burden that Eliza bears, but they've managed to work it out.
Now, they are given an offer from the leader of a multinational counter terrorist team.
Together with their sibling, they join and begin to walk the path to redemption. I realised that I have been writing some short drabbles of pulsemite ever since https://starshooter-apollo.tumblr.com/ has inspired me to do so.
When something goes on for long enough, it can become a challenge to keep track of what's changed and what hasn't.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247481624.10/warc/CC-MAIN-20190217051250-20190217073250-00093.warc.gz
|
CC-MAIN-2019-09
| 1,867
| 12
|
https://community.st.com/thread/45475-sub-ghz-transceiver-daughter-board-with-spirit1-schematic-clarification
|
code
|
I'm planning to use either the SPIRIT1 or the S2-LP for my project on WMBus. Referring to the schematic (click here to open), there are these four resistors (R6-R7 in the lower left corner) which are soldered for the selected operation frequency. These are shown to be connected between GND and GND!! Can anyone explain how these resistors work and how they affect the frequency of operation? It may be that they are placed on the PCB exactly at the position needed so that the board will operate at the selected frequency. I am not sure, and it is very confusing without any information. Would anyone please explain?
Really appreciate your help!!
Thanks in advance!
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794864725.4/warc/CC-MAIN-20180522112148-20180522132148-00183.warc.gz
|
CC-MAIN-2018-22
| 649
| 3
|
https://stackoverflow.com/questions/8466626/findbin-for-perl-modules-that-reside-in-my-scripts-directory
|
code
|
I have a script that uses modules that are external to the standard Perl library, and I would like some way to use them. I don't have permission to install them into the Perl lib directory and was wondering if I could just have these external modules reside in my script's directory.
I have read about using FindBin but it seems to not work. Am I using it correctly?
Right now there are 3 modules I want to use (2 of them inside directories). So let's say my script is in Dir1; then my modules will be in a subdirectory of Dir1 called Dir2.
So assuming FindBin finds Dir1, then all I have to do is this?
use FindBin '$Bin';
use Dir2 "$Bin/Dir2";
use Dir2::SubDir_ofDir2_1::Module1;
use Dir2::Module2;
use Dir2::Module3;
My program seems to run but it doesn't do anything. So I am pretty sure it is not importing the modules correctly.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487598213.5/warc/CC-MAIN-20210613012009-20210613042009-00550.warc.gz
|
CC-MAIN-2021-25
| 829
| 6
|
http://studyabroad.tigtag.com/schoolfile/69021.shtml
|
code
|
Marketing High Technology Products: opportunities and risks. Hult International Business School professor to lead seminar in Shanghai on strategic marketing of high-tech products.
(13th May, 2010) What drives consumer behavior with products like the iPad and iPhone? What determines success and failure in high-tech industries? How can marketers select creative strategies in an environment of continuous technological progress and a global marketplace?
Dr. Jehiel Zif, a professor at the Hult International Business School in Boston, will lead a seminar in Shanghai on May 15 explaining these issues.
By examining cases of new high-tech products and how they were marketed, Dr. Zif will clarify the process of marketing.
"We have some understanding of why some high-tech products succeed and others fail," said Dr. Zif. "Strategic marketing can make a difference."
The Shanghai seminar follows on the heels of a four-city Asian lecture tour last February by Hult professors Dr. Hitendra Patel and Joel Litman, which drew over 500 attendees. Hult has seen a huge increase in applications this year; in the last three months alone, Hult has met with over 2,500 candidates interested in studying for MBA and Master's degrees at one of Hult's five global campuses in Boston, San Francisco, London, Dubai, and Shanghai.
"The global nature of the academic program we offer is both unique and practical," said Hult President Dr. Stephen Hodges. "Business leaders of the future can benefit from the international grounding that a Hult education provides."
Dr. Zif is on the faculty of Samat Business School in Israel and Hult International Business School in Boston. For many years he was a member of the faculty of Tel-Aviv University. He holds an engineering degree from Technion - Israel Institute of Technology, and an MBA and PhD from New York University.
Professor Zif has twenty years experience in marketing and management consulting. Previously he was president of a start-up firm in Boston in the field of educational simulations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202125.41/warc/CC-MAIN-20190319183735-20190319205735-00092.warc.gz
|
CC-MAIN-2019-13
| 2,721
| 20
|
https://www.overwatchsecurityadvisors.com/the-team/johnathan-poarch/
|
code
|
Sr. Cyber Security/IT Infrastructure Consultant
Johnathan Poarch is a seasoned Cyber Security Specialist with over 20 years of technical and operational experience in both civilian industry and the US military. Johnathan has experience leading multiple teams of cyber security professionals both within the military and enterprise healthcare, and has experience working in environments that require clearance. He truly enjoys difficult problems and the satisfaction that comes with resolution. In his free time, Johnathan can be found debating others on whether or not Star Trek has better written characters than Star Wars.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816535.76/warc/CC-MAIN-20240413021024-20240413051024-00439.warc.gz
|
CC-MAIN-2024-18
| 624
| 2
|
https://club.myce.com/t/video-ts-convert-to-avi/133531
|
code
|
Trying to convert my video_ts, video_01_0, video_02_0 files, etc., from a DVD movie to an AVI file.
I've tried several tools, but they keep giving me an error message when I try to convert the video_ts file and the other files on the DVD to AVI.
Is there a way to do it, given that there are several files in the movie that need to be converted into one AVI file, or do I need certain software???
Any help will be much appreciated.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823183.3/warc/CC-MAIN-20181209210843-20181209232843-00202.warc.gz
|
CC-MAIN-2018-51
| 408
| 4
|
http://stackoverflow.com/questions/23168014/how-to-extend-a-textmate-language-definition-and-alter-some-of-its-behaviour
|
code
|
I have a .tmLanguage file that's including and thus extending a third-party language definition. Among other things, the third-party file handles single-line comments, which are defined by a leading ; or #:
<dict>
	<key>captures</key>
	<dict>
		<key>1</key>
		<dict>
			<key>name</key>
			<string>punctuation.definition.comment.nsis</string>
		</dict>
	</dict>
	<key>match</key>
	<string>(;|#).*$\n?</string>
	<key>name</key>
	<string>comment.line.nsis</string>
</dict>
Since my own language definition extends the syntax with expressions that need to be ended with semicolons (something that does not apply to the original language), all trailing semicolons will be rendered as if they were part of a comment.
How can I change this behaviour while maintaining the comment-style of the original language file?
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657119965.46/warc/CC-MAIN-20140914011159-00212-ip-10-196-40-205.us-west-1.compute.internal.warc.gz
|
CC-MAIN-2014-41
| 782
| 5
|
https://github.com/ninadpachpute?tab=repositories
|
code
|
Firmware - GitHub mirror of the official SVN multiwii project
One of My Academic Projects
A vim plugin to display the indention levels with thin vertical lines
SystemVerilog vim scripts
SystemVerilog Development Environment
pathogen.vim: manage your runtimepath
Tracks is a GTD(TM) web application, built with Ruby on Rails
Simple Ruby version management
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-30/segments/1500549424575.75/warc/CC-MAIN-20170723162614-20170723182614-00518.warc.gz
|
CC-MAIN-2017-30
| 445
| 10
|
https://discuss.axoniq.io/t/tuning-parameter-for-tep-message-probing-interval/2064
|
code
|
The Tracking Event Processor, at the point where it tries to peek for events on the Event Stream, is hard coded to wait for new events for at least 1 second.
If this doesn't happen, it tries to extend its claim on the token.
The peek and extend operations result in the shared queries you've pointed out.
As pointed out, the 1 second peek interval is not configurable at this moment.
However, your statement that adding multiple TEPs automatically includes multiple occurrences of these queries is also incorrect.
The EmbeddedEventStore, which I am certain you're using given the JpaEventStorageEngine, uses the notion of an EventProducer (to append events) and EventConsumers (to provide the Event Stream).
The EmbeddedEventStore is also an implementation of the StreamableMessageSource which is required by a Tracking Event Processor as the source of the Event Stream.
Lastly, it will ensure that, as soon as a Tracking Event Processor reaches the head of the stream, that it will initialize an optimized event consumption process.
In essence, this means that several Tracking Event Processors will retrieve events from the same EventConsumer.
This ensures that the one query which might be heavyweight, the select on the
domain_event_entry, is optimized for you.
Given the above description I hope the concern is alleviated.
If you're searching for an even faster way of handling events, I highly suggest you take a look at Axon Server.
Its characteristics when dealing with event appends and event stream publication are definitely more ideal in the long run than a regular RDBMS solution.
Concluding, if your situation proves to be more technical, a face-to-face call might also be in order.
To that end, I’d suggest to fill in a form on the AxonIQ website for example.
Hope this helps you out Prashanth!
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178356232.19/warc/CC-MAIN-20210226060147-20210226090147-00612.warc.gz
|
CC-MAIN-2021-10
| 1,824
| 17
|
https://demo.massa.net/
|
code
|
My simulated stakers
Create a simulated staker
Decentralized crypto-currencies based on the Blockchain architecture can only process a few transactions per second in the whole network. This results in a permanent congestion and high transaction fees, which limits their attractiveness and adoption. New generations of crypto-currencies tend to tackle the scaling issue either by giving up on full decentralization or by lowering their security levels.
This sounds like there’s some kind of scalability trilemma at play. The trilemma claims that blockchain systems can only at most have two of the following three properties: Decentralization (…), Scalability (…), Security (…). - Vitalik Buterin
Massa solves the scalability trilemma by allowing the production of parallel blocks in a multithreaded block graph,
while transaction sharding ensures that transactions of parallel blocks are compatible by construction.
Our open-source simulations show that our architecture scales beyond 10,000 transactions per second,
while staying decentralized and secure (see the technical paper or the blog post introduction).
The Massa technology is therefore a promising basis for future crypto-currencies, and can bring decentralization back into the spotlight.
You can explore a live growing blockclique, create a wallet, control a new staker (receiving free coins). Within a few minutes, your staker will produce some blocks and receive block rewards. Try sending coins to your friends!
We simulate, on a single server (4 GHz, 8 cores), a network of nodes that produce blocks, send blocks to peers and compute the best clique of compatible blocks. We compute the time for each message and block to be transmitted from one node to another depending on the simulated latency and bandwidth between each pair of nodes in the network (average bandwidth of 64 Mbps and latency of 100 ms, similar to the Bitcoin network).
The parameters of the architecture are the number of threads T, the average time between two blocks in each thread t0, the maximum block size BS, and the finality parameter F. We set T=32 threads, t0=16s, BS=675 kB and F=64 blocks. Due to cpu constraints (hundreds of nodes on the same server), we do not simulate endorsements, so we assume E=0, and blocks are by default filled with dummy transactions.
We'll be happy to hear your feedback and answer your questions. Please come to the Reddit community r/massanet where we can discuss the demo, the technical paper, the blog post introduction or the open-source simulations. You can also contact us by email at email@example.com.
Damir Vodenicarevic, Sébastien Forestier, Adrien Laversanne-Finot
Disclaimer and license
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304954.18/warc/CC-MAIN-20220126131707-20220126161707-00366.warc.gz
|
CC-MAIN-2022-05
| 2,669
| 15
|