| url (string, 13–4.35k chars) | tag (1 class) | text (string, 109–628k chars) | file_path (string, 109–155 chars) | dump (96 classes) | file_size_in_byte (int64, 112–630k) | line_count (int64, 1–3.76k) |
|---|---|---|---|---|---|---|
https://www.massapequafamilydentistry.com/contact/
|
code
|
If you have any questions, concerns, or would like to schedule an appointment, please contact us using the information provided below.
Schedule An Appointment
Office Hours
Monday 9:00AM-8:30PM
Tuesday 9:30AM-5:30PM
Wednesday 9:30AM-1:30PM
Thursday 10:30AM-7:00PM
Friday 9:30AM-5:30PM
Saturday 9:30AM-1:30PM
What does your smile say about you? Let us help you radiate confidence with a healthy smile.
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817181.55/warc/CC-MAIN-20240417204934-20240417234934-00191.warc.gz
|
CC-MAIN-2024-18
| 392
| 4
|
http://saag.org/taxonomy/term/753
|
code
|
Submitted by asiaadmin2 on Fri, 06/27/2014 - 11:22
Paper No 5734 Dated 27-Jun-2014
Guest Column By Moorthy S. Muthuswamy Ph.D.(The views expressed are author's own)
If the above premise holds true, the coming years promise a new and potentially fruitful approach to mitigating the threat of ever-growing violent Muslim radicalism.
First, some background.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125841.92/warc/CC-MAIN-20170423031205-00270-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 354
| 5
|
http://ciitresearch.org/dl/index.php/set/article/view/SE042014005/0
|
code
|
Improved Software Fault Prediction using Bayesian Network Classifier
N. Fenton and M. Neil, "Software Metrics: Successes, Failures and New Directions," J. Systems and Software, vol. 47, nos. 2/3, pp. 149-157, 1999.
Y. Jiang, B. Cukic, and T. Menzies, "Fault Prediction Using Early Lifecycle Data," Proc. 18th IEEE Int'l Symp. Software Reliability, pp. 237-246, 2007.
D. Rodríguez, R. Ruiz, J. Cuadrado-Gallego, J. Aguilar-Ruiz, and M. Garre, "Attribute Selection in Software Engineering Datasets for Detecting Fault Modules."
C. Aliferis, I. Tsamardinos, and A. Statnikov, "HITON: A Novel Markov Blanket Algorithm for Optimal Variable Selection," Proc. AMIA Ann. Symp., 2003.
E. Arisholm and L. Briand, "Predicting Fault-Prone Components in a Java Legacy System," Proc. ACM/IEEE Int'l Symp. Empirical Software Eng., 2006.
I. Askira-Gelman, "Knowledge Discovery: Comprehensibility of the Results," Proc. 31st Ann. Hawaii Int'l Conf. System Sciences, vol. 5, pp. 247-256, 1998.
B. Baesens, T. Van Gestel, S. Viaene, M. Stepanova, J. Suykens, and J. Vanthienen, "Benchmarking State-of-the-Art Classification Algorithms for Credit Scoring," J. Operational Research Soc., vol. 54, no. 6, pp. 627-635, 2003.
E. Baisch and T. Liedtke, "Comparison of Conventional Approaches and Soft-Computing Approaches for Software Quality Prediction," Proc. IEEE Int'l Conf. Systems, Man, and Cybernetics, vol. 2, pp. 1045-1049, 1997.
T. Hall, S. Beecham, D. Bowes, D. Gray, and S. Counsell, "A Systematic Literature Review on Fault Prediction Performance in Software Engineering."
G. Cooper and E. Herskovits, "A Bayesian Network Method for Induction of Probabilistic Networks from Data," Machine Learning, vol. 9.
This work is licensed under a Creative Commons Attribution 3.0 License.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945376.29/warc/CC-MAIN-20230325222822-20230326012822-00409.warc.gz
|
CC-MAIN-2023-14
| 1,893
| 13
|
http://lists.mplayerhq.hu/pipermail/mplayer-dev-eng/2015-January/072822.html
|
code
|
[MPlayer-dev-eng] [patch] stricmp
george at nsup.org
Wed Jan 14 10:30:19 CET 2015
On quintidi, 25 Nivôse, year CCXXIII [14 January 2015], Ingo Brückl wrote:
> I suppose there is a reason for stricmp() (AFAIR, Windows doesn't have
I am convinced stricmp() exists because Microsoft thought strcasecmp() was
Not Invented Here, and it was probably used in MPlayer because the patch
that introduced it was made by a developer more familiar with Microsoft's environment.
> strcasecmp()). If Cygwin had strcasecmp() before - fine, but I don't know
> and have to check.
The OP said that strcasecmp() is used all over the place, and grep confirms,
so obviously strcasecmp() is not a problem.
And as I said, if strcasecmp() were a problem, there is a reliable
reimplementation in libavutil.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646076.50/warc/CC-MAIN-20230530163210-20230530193210-00217.warc.gz
|
CC-MAIN-2023-23
| 933
| 20
|
https://forum.bestpractical.com/t/saving-sucess-logs-and-variables-content-for-scripts-testing/34910
|
code
|
Good day, I’ve been recently toying around with some custom scripts that don’t necessarily update any fields or status in our RT tickets, so I need your help in finding the best way to test they are running properly.
The two things I would like to do are:
Saving a success log once a function or a subroutine has succeeded. For that I need to know the Perl syntax for doing that, and where RT saves logs by default. In the example I have in mind (shown in the next image), I would like to save a success log right after LWP::UserAgent has managed to communicate successfully with the external API, so I figured the log-saving statement should go on the line I marked.
Another thing I would like to know is whether it's possible to check the content of a data structure or variable somehow, maybe by printing it somewhere in the RT log as well. I have a populated structure I'm passing on to an API, but I want to make sure I'm saving its correct content first.
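A minimal sketch of both requests, assuming the code runs inside an RT scrip or custom action (the names $ua, $api_url, $payload, and $ticket are illustrative, not from the post). RT exposes its logger as $RT::Logger, and Data::Dumper can serialize a structure into the log; whether the log ends up in a file (often var/log/rt.log) or in syslog depends on the logging settings in RT_SiteConfig.pm:

```perl
use strict;
use warnings;
use Data::Dumper;

# $ua is an LWP::UserAgent; names below are placeholders.
my $response = $ua->post($api_url, Content => $payload);

if ($response->is_success) {
    # Written to the normal RT log at 'info' level.
    $RT::Logger->info("External API call succeeded for ticket #" . $ticket->Id);
} else {
    $RT::Logger->error("External API call failed: " . $response->status_line);
}

# Dump the data structure into the log to inspect its contents.
$RT::Logger->debug("Payload about to be sent: " . Dumper($payload));
```

The debug line only appears if RT's $LogToFile (or equivalent) level is set to 'debug'.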
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347391923.3/warc/CC-MAIN-20200526222359-20200527012359-00594.warc.gz
|
CC-MAIN-2020-24
| 984
| 4
|
https://www.harrietmckern.com/
|
code
|
Writer, Director & Producer
a portfolio site for
an Australian/British screen director, writer and producer, interested in projects for and about women and non-binary folk --- especially the older ones!
NEWS !!!!!! ~ Dec 2023
THREE CHORDS AND THE TRUTH dir Claire Pasvolsky
wins at the Australian Directors Guild Awards for
Best Direction in a Feature Film (Budget under $1M)
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679515260.97/warc/CC-MAIN-20231211143258-20231211173258-00680.warc.gz
|
CC-MAIN-2023-50
| 376
| 7
|
http://forums.fedoraforum.org/showthread.php?t=146496
|
code
|
I have a Delta 1010LT which works great under ALSA. I can watch movies with VLC in glorious 5.1 with no problems. However, when I playback MP3s or Oggs, (xmms or Amarok) I only get sound from the left and right, and I would really like the subwoofer to work as well, considering my mains are only 6" drivers.
I'm using the Envy24 Control GUI, but there is no routing available such as in the Windows M-Audio control panel. Under Winders, it's not a problem.
Does anyone know of a way to make ALSA route the L/R to the sub (output 6 on the 1010)?
BTW, I'm using 32 bit FC6 with all current updates, Athlon64, 1GB RAM.
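One commonly suggested approach for this (untested here; the slave device name and channel numbers are assumptions and may differ for the 1010LT) is an ALSA "route" plugin definition in ~/.asoundrc that duplicates the front pair into the LFE channel; pcm.stereo2sub is an arbitrary name:

```
# ~/.asoundrc sketch: mix front L/R into the LFE channel.
# For a 5.1 slave, ALSA channel order is FL FR RL RR FC LFE,
# so the LFE is channel index 5.
pcm.stereo2sub {
    type route
    slave.pcm "surround51"
    slave.channels 6
    ttable.0.0 1      # front left  -> front left
    ttable.1.1 1      # front right -> front right
    ttable.0.5 0.5    # front left  -> LFE at half gain
    ttable.1.5 0.5    # front right -> LFE at half gain
}
```

XMMS/Amarok would then need to be pointed at the "stereo2sub" PCM instead of "default".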
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368697843948/warc/CC-MAIN-20130516095043-00041-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 616
| 4
|
http://www.admin-wissen.de/en/tutorials/devops_with_vagrant_and_chef/creating_a_custom_cookbook_with_chef.html
|
code
|
You can find the files for this step in the following GitHub repository:
Chef is used to describe the steps of the provisioning. This description comes from "cookbooks", and these cookbooks contain recipes that describe the provisioning steps with Ruby code.
So let's start with a simple cookbook that adds a user at the end of the provisioning, just to get used to the structure of Chef cookbooks.
First we need to download and install Chef.
Then we need to adapt the Vagrantfile to use Chef as provisioner and trigger our first recipe in our first custom cookbook.
As you see below, we specify a cookbook_path variable with an array of directories. In our example we have:
Technically Chef is using a fallback here: cookbooks in the second folder have a higher priority than those in the first folder. With 'chef.add_recipe "mycookbook"' we define that "default.rb" in our cookbook is the entry point for the provisioning process.
For our example we create the following file structure in our Vagrant folder:
This is a minimalistic example of a Chef cookbook. To play around, we add the following content to "metadata.rb":
We add the following content to the Chef recipe "recipes/default.rb":
Now we can create our box as usual with "vagrant up". When the provisioning is done, we can log into the box (with "vagrant ssh"). Now check "/etc/shadow" to see whether our new user "testuser" has been created.
With the blocks above you use "Chef resource providers". There are a lot of built-in resource providers that help you automate steps during the provisioning. You can find the documentation of the built-in Chef resources here:
As you can see, Chef provides very helpful "building blocks" to configure your machine. In addition, it is possible to implement these "resource providers" in your own cookbooks, and these concepts make Chef very powerful.
We learned how to create our own cookbook and how to use recipes and resource providers, but to build a box it makes sense to combine other existing cookbooks. In the next step we will learn to use the Apache cookbook to create a virtual host in our development system.
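As a sketch of what the two files described above might contain (using the tutorial's "testuser" and "mycookbook" names; the exact paths are assumptions):

```ruby
# Vagrantfile excerpt: use chef-solo, search two cookbook folders,
# and run the default recipe of "mycookbook".
config.vm.provision :chef_solo do |chef|
  chef.cookbooks_path = ["cookbooks", "site-cookbooks"]
  chef.add_recipe "mycookbook"
end

# cookbooks/mycookbook/recipes/default.rb: the "user" resource
# provider creates our test user at the end of the provisioning.
user "testuser" do
  comment "created by our first Chef recipe"
  shell   "/bin/bash"
  action  :create
end
```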
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867092.48/warc/CC-MAIN-20180525121739-20180525141739-00513.warc.gz
|
CC-MAIN-2018-22
| 2,155
| 14
|
https://canadian-immigrant.ru/code-dating-food-298.html
|
code
|
Futanari video chat - Code dating food
Reading Tire Date Codes Evaluating Food and Pharmaceutical Date Codes Reading Date Codes on Computer Chips Community Q&A If you want to know when a product was manufactured, you'll need to know how to read date codes.
Whether you're looking for the manufacturing date for tires, food, or computer chips, figuring out when a product was made isn't tough as long as you know the formula.
I also wanted somebody who would weigh 20 pounds more than me at all times, regardless of what I weighed.
I wanted somebody who worked hard, because work for me is extremely important, but not too hard.
For me, the hobbies that I have are really just new work projects that I've launched.
Now, I like the idea of online dating, because it's predicated on an algorithm, and that's really just a simple way of saying I've got a problem, I'm going to use some data, run it through a system and get to a solution.
So online dating is the second most popular way that people now meet each other, but as it turns out, algorithms have been around for thousands of years in almost every culture. Are they going to start having children right away? So I run home, I call my mother, I call my sister, and as I do, at the end of each one of these terrible, terrible dates, I regale them with the details.
So that basically meant there were 35 men for me that I could possibly date in the entire city of Philadelphia.
In the meantime, my very large Jewish family was already all married and well on their way to having lots and lots of children, and I felt like I was under tremendous peer pressure to get my life going already.
I'm looking for a guy between the ages of 30 and 36, which was only four percent of the population, so now I'm dealing with the possibility of 30,000 men.
I was looking for somebody who was Jewish, because I am and that was important to me. I figure I'm attracted to maybe one out of 10 of those men, and there was no way I was going to deal with somebody who was an avid golfer.
So I'm at the end of this bad breakup, I'm 30 years old, I figure I'm probably going to have to date somebody for about six months before I'm ready to get monogamous and before we can sort of cohabitate, and we have to do that for a while before we can get engaged.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703522242.73/warc/CC-MAIN-20210121035242-20210121065242-00501.warc.gz
|
CC-MAIN-2021-04
| 2,349
| 14
|
http://help.motivosity.com/application-setup/
|
code
|
There are a few areas to set up when you first log into Motivosity. All of these areas can be changed later, so don't worry too much if there's something you need to skip for the moment. These areas include the items pictured below.
You should be able to go through the full setup of the application in about 30-60 minutes.
For additional help, see:
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247480905.29/warc/CC-MAIN-20190216170210-20190216192210-00228.warc.gz
|
CC-MAIN-2019-09
| 352
| 3
|
http://vjajvpn.xyz/archives/599
|
code
|
Let Me Game in Peace – Chapter 1186 – River of Forgetfulness
Zhou Wen listened attentively and used Fantastic Brahma to strengthen his senses. He examined the River of Forgetfulness, but he didn't discover anything.
Zhou Wen could tell that this officer wasn't a monster. He had only used a Mythical Serum produced from a special dimensional creature. His body had mutated to a higher level, allowing him to reach such a state.
"I don't know, but from the looks of it, we can only think of a way to resolve it if we want to obtain the Three-Life Stone," An Sheng said as he stared at the river.
"Yes, a scapegoat. Could that bloody palm be that thing?" Lu Bushun nodded.
"What you mentioned is a scapegoat in folklore. You can obtain a chance at rebirth by finding a substitute to replace yourself," An Sheng said.
After hearing An Tianzuo's order, Jia Nong retreated and left the River of Forgetfulness.
A single finger on the blood-colored palm was over a meter long. The huge palm seemed to be condensed from blood. With a grab, a terrifying sonic boom sounded through the air.
"Overseer, let me try it out." An officer stood up and saluted An Tianzuo.
As for the officer's body, it floated above the River of Forgetfulness without sinking.
However, in the blink of an eye, the dispersed sanguine aura condensed again, transforming into a blood-colored hand that drilled into the River of Forgetfulness.
"Overseer, I'll drop down and take a look." Jia Nong saw that things were as he had suspected. The River of Forgetfulness's strange power was ineffective against him, so he sought An Tianzuo's permission.
Zhou Wen and Li Xuan glanced at the officer and saw that he was wearing a military uniform and a military coat. He wore a gas mask and a military cap.
"Proceed." An Tianzuo nodded slightly.
"Safety first," An Tianzuo said.
"Yes," Jia Nong answered and was about to fly down the River of Forgetfulness.
After grabbing nothing, the blood-colored palm quickly retreated into the River of Forgetfulness and vanished in the blink of an eye.
The officer saluted slightly before walking toward the River of Forgetfulness. However, to Zhou Wen's surprise, he didn't summon his Companion Beast. Instead, he walked toward the River of Forgetfulness himself.
It was useless for the Companion Beasts to attack the River of Forgetfulness. They all vanished the moment they entered the river. It was unknown how deep the River of Forgetfulness was.
Jia Nong assumed a battle stance, but An Tianzuo suddenly shouted, "Jia Nong, come back!"
When the blood-colored hand appeared, Lu Bushun and the other officers immediately attacked it. Essence Energy of various attributes struck the blood-colored hand, scattering it.
Finding the Three-Life Stone was much harder than they had imagined. At the moment, the only person who could enter the river was Jia Nong, but he couldn't withstand the blood-colored hand.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949025.18/warc/CC-MAIN-20230329182643-20230329212643-00085.warc.gz
|
CC-MAIN-2023-14
| 4,554
| 33
|
https://www.googlenestcommunity.com/t5/Nest-Thermostats/Nest-wiring/td-p/32435
|
code
|
In order to utilize a separate heating (rh) and cooling (rc) power source you must use the learning thermostat. You are right that the basic thermostat model just doesn't have enough terminals to do what you need.
Is one of the R wires a jumper? Hard to tell from the picture. If one is a jumper, just use the main red wire coming up in the main thermostat wire bundle and connect to your R terminal on the new base. No need to use a jumper on Nest stats for Rh and Rc as both are connected internally through terminal R to run both heating and cooling. Hope this helps?
That is a strange thought and made me look real close, good catch sakrue... it looks like there's the default bare silver jumper back behind there. It's strange that they'd connect heating and cooling power together unless they both come from the same furnace.
I guess more info is needed. What's the source of heating? Is it a separate boiler or forced air furnace that's shared with the a/c blower?
I wanted to reach out and follow up. Please let me know if you are still having trouble wiring the device, as I will be locking the thread in 24 hours due to inactivity.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00472.warc.gz
|
CC-MAIN-2022-27
| 1,141
| 5
|
https://gitlab.cern.ch/atlas/athena/-/merge_requests/63538
|
code
|
Part of the development for !63059 (merged).
This merge request updates the ISimulationSvc interfaces to optionally take a second (shadow) McEventCollection as input to the simulateVector calls. If the new approach to quasi-stable particle simulation is being used, then this shadow truth information is used to look up the predefined decays of quasi-stable particles when building the
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511284.37/warc/CC-MAIN-20231003224357-20231004014357-00210.warc.gz
|
CC-MAIN-2023-40
| 384
| 7
|
http://patchwork.ozlabs.org/patch/176272/
|
code
|
@@ -366,6 +366,21 @@ and protect the submitter from complaints. Note that under no circumstances
can you change the author's identity (the From header), as it is the one
which appears in the changelog.
+If you are submitting a large change (for example a new driver) at times
+you may be asked to make quite a lot of modifications prior to getting
+your change accepted. At times you may even receive patches from developers
+who not only wish to tell you what you should change to get your changes
+upstream but actually send you patches. If those patches were made publicly
+and they do contain a Signed-off-by tag you are not expected to provide
+their own Signed-off-by tag on the second iteration of the patch so long
+as there is a public record somewhere that can be used to show the
+contributor had sent their changes with their own Signed-off-by tag.
+If you receive patches privately during development you may want to
+ask for these patches to be re-posted publicly or you can also decide
+to merge the patches as part of a separate historical git tree that
+will remain online for historical archiving.
Special note to back-porters: It seems to be a common and useful practise
to insert an indication of the origin of a patch at the top of the commit
message (just after the subject line) to facilitate tracking. For instance,
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917118519.29/warc/CC-MAIN-20170423031158-00166-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 1,339
| 19
|
https://quantumcomputingreport.com/our-take/quantum-value-a-new-hope/
|
code
|
There has been a lot of discussion about Quantum Advantage to demonstrate uses of quantum computers that solve practical problems which cannot be solved at all using classical computers. There has also been a lot of discussion on hybrid classical/quantum algorithms for NISQ machines where portions of an algorithm run on a classical machine while other portions run on a quantum machine with a deep interaction between the two.
Lucas Siow of ProteinQure has proposed another way that quantum computers could provide a commercial benefit that he calls Quantum Value. This is an approach of providing a solution to a valuable problem by starting with a solution developed in a classical algorithm and augmenting it with a solution calculated by a quantum computer. Even if the quantum solution on its own is not as good as the classical solution, by combining the two together one can create an ensemble solution that is better than any classical algorithm alone.
For more details, we recommend reading his paper posted on Medium.com here.
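The ensemble idea can be illustrated with a toy sketch (not from the paper; all names here are invented): pool the classical and quantum candidate solutions and keep whichever scores best, so the ensemble is never worse than the classical solution alone.

```python
def ensemble_best(candidates, objective):
    """Return the candidate with the lowest objective value."""
    return min(candidates, key=objective)

# Toy minimization problem: the objective is distance from 10.
objective = lambda x: abs(x - 10)

classical_solution = 7   # e.g. produced by a classical heuristic
quantum_solution = 12    # e.g. sampled from a NISQ device

best = ensemble_best([classical_solution, quantum_solution], objective)

# The ensemble can only match or improve on the classical result alone.
assert objective(best) <= objective(classical_solution)
print(best)  # -> 12, since |12 - 10| < |7 - 10|
```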
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00500.warc.gz
|
CC-MAIN-2019-47
| 1,048
| 3
|
http://www.webhostingtalk.com/showthread.php?t=428735
|
code
|
1and1 Dedicated: my bad experience and rescue mode tips
My brief view on 1and1 dedicated servers, from my one year of using one, is:
1and1 Dedicated Servers are super-fast and continuously growing in features; they are great for just casual usage. However, if you want to run anything semi-important on them, I would say DO NOT. 1and1 support, simply put, sucks, their reliability is next to none, and the customer is definitely not their priority.
Over the course of my 1 year with them, my server has gone down about 5 times for a period of over a day, once over a week.
My worst experience, which I just again reexperienced, was when they intentionally disconnected my server from the network without even bothering to notify me. They claimed there was an outgoing UDP flood from the server, which I know was not true; I have seen posts by other users where 1and1 claimed this also. Because of the "UDP flood" they believe our server was compromised and refused to bring it back up unless we reimaged the server, and they would not provide backup service unless we paid 50 bucks per 20 gigs for it.
The worst part was not that they believed we had been compromised and therefore pulled it, it was that it took their support team a day to even realize this was the case. Their support team kept telling us it was a hardware issue and they were working on it, they even supposedly corresponded directly with admins twice, and our priority was raised. They were unaware that 1and1 pulled the plug themselves, so they didn't notify us and they didn't notify their own support team.
Eventually, after we found this was the case, we got them to boot it into a "rescue" mode. We already had 3 days of downtime at this point. In rescue mode, the server is booted off another drive and you can mount the normal drives. It is meant to not allow you to run usual services such as Apache etc. We could not afford such downtime, so we managed to figure out how to run everything in rescue mode, which thankfully I remembered today when this happened again. I decided to document the steps for those who have the same experience:
From rescue mode:
Replace domain.com with a domain pointing to your server.
mount -t ext3 /dev/hda1 /mnt
mount -t xfs /dev/hda5 /mnt/usr
mount -t xfs /dev/hda6 /mnt/var
mount -t xfs /dev/hda7 /mnt/home
service mysqld start
service psa start
service xinetd start
service named start
service httpd start
service qmail start
service courier-imap start
Anyways, eventually, after about 3 more days, we had the idea of convincing them that we had found the security flaw, even though there was none, and after calling three times, the third support tech fell for it, and that's how we managed to recover from that disaster. At that point we figured, oh well, we'll just deal with 1and1 for a few more months.
Just recently there was another downtime (a third of their network), and according to them it was just an issue with one router. Any decent ISP would be able to hot-swap a router in a few minutes, and with redundant backbones they should also have redundant routers, I would think. It took them three hours to resolve the issue.
And now I faced the same thing as what happened before, them wanting us to reimage. This time knowing how 1and1 works better, we got about 18hrs of downtime instead of 3 days, and as I said before I remembered all the steps to get stuff working in recovery, so we're up and running, but a wreck.
So again I reiterate, if you need any reliability or customer service, do not go with 1and1! If you just need a cheap dedicated fast host to play with, they're wonderful.
1.) What server did you *think* you were buying in the first place? The server you describe above?
I expect any decent host to at least notify a customer if they pull the network connection on the server and I expect their own support to be aware of it.
2.) What are you paying monthly for your service?
About $90 a month.
3.) A year of service and you just now post at WHT (bad review) as your first post where have you been? What has motivated you to come forward?
The other person who helps manage this server I believe has posted here before, I just had to post something in my frustration with 1and1. Also my original reason for the post was not to bash 1and1, but actually to provide those few shell commands in the middle to other locked 1and1 users who cannot afford a week of downtime....
Thanks for the advice. I was just about to go with 1and1 to expand my web hosting business with more of their very cheap dedicated servers, as my current ones are almost overcrowded. I guess I will just continue building up new servers and colocating them. My long-term plan would be to own my own datacenter where I will have immediate physical access to my servers, which is better than remote access.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721174.97/warc/CC-MAIN-20161020183841-00141-ip-10-171-6-4.ec2.internal.warc.gz
|
CC-MAIN-2016-44
| 4,812
| 31
|
https://fuzzylite.com/forums/reply/1912/
|
code
|
To run build.bat, you need to open the console window using the Visual Studio Console (or Terminal), which will run some commands to update the paths for binaries in your system. The problem there seems to be that CMake is not able to find the compiler.
As for the following code:
win32:CONFIG(release, debug|release): LIBS += -L$$PWD/../../../../../../fuzzylite-5.0/ -lfuzzylite
else:win32:CONFIG(debug, debug|release): LIBS += -L$$PWD/../../../../../../fuzzylite-5.0/ -lfuzzylited
else:unix: LIBS += -L$$PWD/../../../../../../fuzzylite-5.0/ -lfuzzylite
INCLUDEPATH += $$PWD/../../../../../../fuzzylite-5.0
DEPENDPATH += $$PWD/../../../../../../fuzzylite-5.0
The problem is that you are not finding the library. Please use the full path to fuzzylite instead of a relative
../../../ to prevent unnoticed errors. Also, sometimes things are funny with dynamic libraries, so you may also try using
-lfuzzylite-staticd instead of the dynamic versions.
Furthermore, you need to ensure that the path to fuzzylite is correct. For example, the INCLUDEPATH needs to point to the fuzzylite directory that contains the
src folder, such that
cd $FL_HOME/fl is a valid path to the headers. Likewise, you need to make sure that the library files are available in the fuzzylite directory or change accordingly:
LIBS+=path/to/fuzzylite/release/bin/, such that the libraries are in
Hope this helps.
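Putting the advice above together, a full-path version of the .pro fragment might look like this (the FL_HOME value is a placeholder to adjust to where fuzzylite actually lives, and the release/bin and debug/bin layout is an assumption based on the reply above):

```
FL_HOME = C:/fuzzylite-5.0/fuzzylite

INCLUDEPATH += $$FL_HOME
DEPENDPATH  += $$FL_HOME

win32:CONFIG(release, debug|release): LIBS += -L$$FL_HOME/release/bin -lfuzzylite
else:win32:CONFIG(debug, debug|release): LIBS += -L$$FL_HOME/debug/bin -lfuzzylited
else:unix: LIBS += -L$$FL_HOME/release/bin -lfuzzylite
```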
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.17/warc/CC-MAIN-20231205105136-20231205135136-00764.warc.gz
|
CC-MAIN-2023-50
| 1,372
| 11
|
https://google-melange.com/archive/gsoc/2012/orgs/pcl
|
code
|
Point Cloud Library (PCL)
Mailing List: mailto:firstname.lastname@example.org
The Point Cloud Library (PCL) is a standalone, large scale, open project for point cloud processing. A point cloud is a data structure used to represent a collection of multi-dimensional points and is commonly used to represent three-dimensional data such as the output of a stereo camera, 3D scanner, or time-of-flight camera. With the advent of new, low-cost hardware such as the Microsoft Kinect or Asus XTionPro 3D cameras and continued efforts in advanced open source 3D point cloud processing, 3D perception gains more and more importance in many fields.
The PCL library contains numerous state-of-the-art algorithms including filtering, feature estimation, surface reconstruction, registration, model fitting, segmentation, tracking, recognition, and many more. These algorithms can be used, for example, to filter outliers from noisy data, stitch 3D point clouds together, segment relevant parts of a scene, extract keypoints and compute descriptors to recognize objects in the world, and create surfaces from point clouds and visualize them -- to name a few.
PCL is released under the terms of the BSD license and is open source software. It is free for commercial and research use. Development of PCL is a large collaborative effort driven by researchers and engineers from many different institutions and companies around the world. PCL is cross-platform, and has been successfully compiled and deployed on Linux, MacOS, Windows, and Android. To simplify development, PCL is split into a series of smaller code libraries that can be compiled separately. This modularity is important for distributing PCL on platforms with reduced computational or size constraints. PCL aims to be for 3D processing what the Boost set of libraries is for C++: a collection of fast, modular, and community peer-reviewed code libraries that can be used to create powerful and complex applications. PCL is written entirely in C++ and makes use of modern C++ libraries such as Boost and Eigen for all its internal data structures. In order to support applications that require real-time point cloud processing, PCL has been designed to take advantage of SSE instructions when available, and a GPU interface is currently under development in association with NVIDIA.
The Point Cloud Library aims to unite the field of point cloud processing. By providing an extensible framework for all the geometric algorithms necessary for 3D perception, PCL enables developers to create applications limited by their imaginations, rather than their 3D geometric knowledge.
- 2D Image Drawing Operators from VTK in PCL The purpose of the project is to Implement and document various drawing functions using VTK in PCL. Drawing functions include primitive drawings such as lines, circles, polygons etc. as well as alpha blended polygons and graphs, both 2D and 3D, for visualizing histograms and other functions.
- 3D edge extraction from an organized point cloud The goal of our GSoC project is to implement a 3D edge extraction algorithm for an organized point cloud. We are interested in edges that come from the boundaries of either geometric structures or photometric texture. To find these edges, we will combine multiple cues from the RGB-D channels. We plan to extract 2D edges from the given RGB channels and back-project these edges to the given depth channel so as to determine reliable 3D edges. We also plan to exploit edges from depth discontinuities or high-curvature regions that may not be captured in the RGB channels. We want the code to provide several options to choose the types of resulting edges, such as depth discontinuity, RGB edge, and high curvature.
- Additional functionalities and improvements for PCL modules My aim is to bring the knowledge acquired through my experience to help the enhancement and continued growth of the PCL library. I expect to contribute through the improvement of PCL modules and the development of new functionality. My focus is on tracking, filters, keypoints, registration, and tree-based structures like kdtree and octree. However, I'm available to work on any module as well.
- Browser Implementation As it appears here, http://www.pointclouds.org/gsoc2012/ideas.html under "WebGL Development"
- Modular Interactive Application for Static and Streaming Point Cloud Data This project aims to develop a full featured GUI for visualization and application of the Point Cloud Library's algorithms. Interfaces will be provided for filtering, registration, surface reconstruction, model fitting, and segmentation of point cloud data. The project will leverage the modular nature of the library to create an application which is easy to maintain and extend as the library develops. Optional additional work will look towards developing a "streaming" mode for working with live camera sources, such as those provided using OpenNI.
- Organized Point Cloud Data Operations The project involves implementation of several algorithms and techniques which are very popular in computer vision. I have already implemented all the algorithms mentioned in the project idea for my courses. I have also been involved in many projects where I needed to implement or use these algorithms. In addition, I would also like to include implementation of some popular computer vision algorithms based on graph cuts for this project.
- PCLModeler: a PCL based Reconstruction Platform There are quite a few related libraries and tools for point cloud based reconstruction, like CGAL, Scanalyze, Meshlab and so on. The key reconstruction steps, including registration, cleaning up, surface reconstruction, and visualization, are scattered across different places, making it hard to assemble an easy-to-use reconstruction platform. It would be great if a ready-to-use reconstruction platform were provided, saving users the effort of building a platform from scratch themselves. We believe such a platform will stimulate more researchers and developers to try their new ideas and develop new applications based on PCL.
- Point Cloud Segmentation using Graphical Models In this project I propose to implement a state-of-the-art segmentation approach using graphical models and a highly efficient approximate inference algorithm, which results in faster and better segmentations. Furthermore, I propose to create a new PCL package 'pcl_ml' containing the necessary machine learning techniques, such as Markov network structures, inference algorithms and low-level functions, in a general way such that they can be reused for future applications.
- Real-time 3D Applications based on PCL, Microsoft Kinect, and Tegra 3 Today, mobile platforms (such as NVIDIA Tegra 3) are equipped with multi-core CPUs and GPUs that have shown promising capabilities for creating real-time applications that involve image processing and 3D rendering. The increasing capabilities of these devices have opened possibilities for creating interactive applications that were previously only available on workstation desktops. The goal of this project is to port over and further optimize some of the key features of the PCL library to run on Android devices. Ultimately, we would create a platform that allows developers to create interactive applications using the Microsoft Kinect, Tegra 3, and PCL, all out of the box with minimum effort. Together, we hope this will encourage developers to explore applications that utilize this hardware and provide richer and unique user experiences. Additionally, computationally intensive tasks such as feature extraction/recognition can be offloaded to a remote server. The computed results are then transferred back to the mobile device in real time. By decoupling the computation from the devices, we can support a large variety of platforms despite the hardware limitations on some of the mobile devices.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00500.warc.gz
|
CC-MAIN-2023-50
| 7,916
| 15
|
http://www.coderanch.com/t/163092/java-EJB-SCBCD/certification/EJB-Error-persisting-unidirectional-relation
|
code
|
I have 2 tables, EMPLOYEE and ADDRESS. Table EMPLOYEE has an ADDRESSID column which has a foreign key relationship with ADDRESS, whose primary key is ADDRESSID. Using the EJB 3.0 Java Persistence API, I wanted to implement a one-to-one unidirectional relationship. Hence I created 2 entity classes, Employee and Address. The relevant code from the Employee class is below:
In the Address class, there is no code to access the Employee.
I have created a session bean which tries to create an employee along with an Address. Putting the relevant code below:
As mentioned above, the error is being thrown at em.persist(emp). The error is:
I am unable to make out anything from the exception stack trace. Any help in this would be greatly appreciated.
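For reference, a unidirectional one-to-one mapping of this shape is typically annotated as in the fragment below. This is a generic sketch with assumed names, not the poster's missing code; the cascade setting shown is one common remedy when persist fails because the referenced Address is still transient:

```java
// Generic sketch (class and field names assumed); javax.persistence imports omitted.
@Entity
public class Employee {
    @Id
    private Long employeeId;

    // Unidirectional one-to-one: only Employee knows about Address.
    // Without a cascade, the Address must be persisted before the Employee,
    // or em.persist(emp) can fail on the transient reference.
    @OneToOne(cascade = CascadeType.PERSIST)
    @JoinColumn(name = "ADDRESSID")  // FK column in the EMPLOYEE table
    private Address address;
}

@Entity
public class Address {
    @Id
    private Long addressId;  // primary key of the ADDRESS table
    // No reference back to Employee: the relationship is unidirectional.
}
```

If the mapping already looks like this, the next thing to check is whether the Address instance is persisted (or cascaded) before the Employee that references it.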
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042987155.85/warc/CC-MAIN-20150728002307-00252-ip-10-236-191-2.ec2.internal.warc.gz
|
CC-MAIN-2015-32
| 1,014
| 6
|
http://www.456bereastreet.com/archive/200603/new_clearing_method_needed_for_ie7/
|
code
|
New clearing method needed for IE7?
With the release of the MIX06 build of Internet Explorer ("Internet Explorer Beta 2 Preview - released on March 20"), Microsoft has declared IE7 "layout complete". What that means is that no new CSS features will be added. This is what you get in the final IE7, though there may still be bug fixes.
So, now is the time to start testing. I haven't been able to test the latest IE7 myself, but it now supports the min-height and max-height properties. Great! Less great is that according to Chris Wilson's answer to Peter Gasston's question, IE7 will not support the :before and :after pseudo-elements, and neither will it support the content property.
Ouch. Missing support for content may cause you problems, depending on the CSS you have used to take care of certain things.
Why is that? Well, IE used to have a box model bug which caused it to resize a box to contain any floated children it might have. The easy clearing method described at Position Is Everything uses that to create a very convenient way of clearing floats without having to add extra markup. Unless I'm wrong that method will not work in IE7, since the box model bug was fixed in the original IE7 Beta 2 Preview.
If that is the case, here's hoping that someone is able to come up with a markup-free way of clearing floats that works in IE7. Otherwise those of us who – like me – have been using the easy-clearing method will have to revisit every site that uses the trick and add clearing elements.
Speaking of clearing elements, I have never been able to come up with the CSS for a heightless clearing element that works consistently across browsers. And it looks like there will be an increased demand for those later this year.
Anyone out there sitting on a bulletproof markup and CSS clearing combo?
Update: The trick is to add display:inline-block and then display:block to the easyclearing rule:
/* Hide from IE Mac \*/
/* End hide from IE Mac */
And to keep IE6 and earlier happy, use your favourite method (Conditional Comments or a CSS filter) to send them the following:
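The code blocks referenced above were lost in extraction; the widely circulated easy-clearing rule with the described additions looks roughly like the reconstruction below (class name and exact declarations are assumed, not the post's original code):

```css
/* Reconstructed sketch of the easy-clearing rule with the IE7 additions
   described above; class name and exact declarations are assumed. */
.clearfix:after {
    content: ".";
    display: block;
    height: 0;
    clear: both;
    visibility: hidden;
}

.clearfix {
    display: inline-block;  /* makes IE7 contain its floats */
}

/* Hide from IE Mac \*/
.clearfix {
    display: block;  /* restore block display for everyone else */
}
/* End hide from IE Mac */

/* Sent only to IE6 and earlier (via Conditional Comments or a filter): */
* html .clearfix {
    height: 1%;  /* triggers hasLayout so floats are contained */
}
```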
Update 2: Andy Clarke asked some of the IE developers about this and other questions raised in the comments here. Find the answers in Clearing floats without structural markup in IE7.
|
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701156520.89/warc/CC-MAIN-20160205193916-00174-ip-10-236-182-209.ec2.internal.warc.gz
|
CC-MAIN-2016-07
| 2,231
| 18
|
http://stackoverflow.com/questions/16410651/linq-to-select-parent-records-without-children/16410713
|
code
|
I have a hierarchy as such:
- Order
  - order details
    - work order header
      - work order details
I want to select work order headers that have no work order details.
I have this so far, but it returns one level up, the order details...I want the next level down, work order headers.
IEnumerable<OrderDetail> odWithoutWoDtls = order.OrderDetails.Where(od => od.WorkOrderHeaders.Any(woh => woh.WorkOrderDetails.Count() == 0));
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394678705235/warc/CC-MAIN-20140313024505-00074-ip-10-183-142-35.ec2.internal.warc.gz
|
CC-MAIN-2014-10
| 420
| 5
|
https://www.cville.k12.in.us/domain/459
|
code
|
Welcome to the Chromebook resource page for the CHS 1:1 program. Within this section of the website you'll find some helpful information including a FAQ section, insurance information for devices, and the CCSC acceptable use policy that our students and staff agree to abide by before using technology in the school corporation.
Many of you may be asking why we are implementing a 1:1 Chromebook initiative. The simple answer is that we believe it is what is best for our students at CHS. By choosing Chromebooks for our students at CHS, we are able to take advantage of the many apps that Google has to offer. Colleges, businesses, and even the military are starting to adopt and use Google cloud services. Furthermore, Google Apps works well with other tech we have in place, like Canvas, and allows for greater collaboration between students, between students and teachers, and among teachers. We are excited to move forward and provide our students with new and current experiences with technology.
Please visit the resources at the left to get more information.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948867.32/warc/CC-MAIN-20230328135732-20230328165732-00591.warc.gz
|
CC-MAIN-2023-14
| 1,060
| 3
|
https://housinglab.oslomet.no/housing-lab-in-real-estate-economics/
|
code
|
The paper House price seasonality, market activity, and the December discount by Erling Røed Larsen (Housing Lab) has been accepted in Real Estate Economics.
In Norway, house prices tend to drop in December. This regularity is persistent across regions and over time. I exploit a transaction data set with high temporal granularity to document and estimate the size of the December discount. I control for a composition effect using a hedonic model and I control for unobserved heterogeneity by using repeat sales and involving ask prices and appraisal values. By segmenting into submarkets, I search for determinants of price seasonality. The evidence suggests that the December effect is linked to time-on-market for each unit and transaction volumes within each submarket.
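The hedonic setup can be illustrated with a toy simulation (our own code and numbers, not the paper's data or exact specification): regress log price on an observable attribute plus a December dummy, and read the estimated discount off the dummy's coefficient.

```python
# Illustrative sketch only -- simulated data, not the paper's specification.
# Hedonic regression of log price on size plus a December dummy: the dummy's
# coefficient estimates the December discount after controlling for the
# observable attribute.
import random

random.seed(0)
n = 500
rows = []
for _ in range(n):
    size = random.uniform(40, 160)                 # square metres
    december = 1.0 if random.random() < 1 / 12 else 0.0
    log_price = 10 + 0.01 * size - 0.03 * december + random.gauss(0, 0.05)
    rows.append((1.0, size, december, log_price))  # (intercept, x1, x2, y)

# Ordinary least squares via the normal equations X'X b = X'y (3x3 system).
k = 3
xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
xty = [sum(r[i] * r[3] for r in rows) for i in range(k)]

# Gaussian elimination with partial pivoting, then back substitution.
for col in range(k):
    pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
    xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
    xty[col], xty[pivot] = xty[pivot], xty[col]
    for r in range(col + 1, k):
        f = xtx[r][col] / xtx[col][col]
        for c in range(col, k):
            xtx[r][c] -= f * xtx[col][c]
        xty[r] -= f * xty[col]
beta = [0.0] * k
for r in range(k - 1, -1, -1):
    beta[r] = (xty[r] - sum(xtx[r][c] * beta[c] for c in range(r + 1, k))) / xtx[r][r]

print(f"December discount estimate: {beta[2]:.3f}")  # close to -0.03
```

With more attributes the same normal-equations solve generalizes; the paper additionally uses repeat sales, ask prices and appraisal values to control for unobserved heterogeneity.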
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00308.warc.gz
|
CC-MAIN-2024-10
| 777
| 2
|
http://www.tomshardware.com/forum/286240-31-system-video-512mb-stick-inserted-fine-sticks
|
code
|
ASUS have no such motherboard on their site. Please provide a link to your most likely ASRock motherboard.
I don't know where it came from; it came in a package labeled newegg, and it was ASUS.
I got it for my b-day last year. Wow, power just went out too, I like the idea of a laptop...
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125946807.67/warc/CC-MAIN-20180424154911-20180424174911-00343.warc.gz
|
CC-MAIN-2018-17
| 286
| 3
|
https://www.petecorey.com/blog/2015/06/08/authentication-with-localstorage/
|
code
|
In this scenario, CSWSH could occur if a user authenticated with our application visited a malicious website that attempted to establish a DDP connection to our Meteor application:
var ddp = DDP.connect('http://our-application.io'); ddp.subscribe(...); ddp.call(...);
DDP.connect makes a GET request to our application's WebSocket endpoint, passing along our session cookie. The GET request returns with a 101 Switching Protocols response and the WebSocket is established. WebSocket connections aren't protected by modern browsers' same-origin policy, so the browser happily establishes the DDP connection. The malicious site is now free to view, modify, and delete all of your user's data without their knowledge or consent. Uh oh!
localStorage To The Rescue
Rather than using cookies and implementing complicated countermeasures against CSRF attacks, Meteor opts for a more elegant solution and stores session tokens in localStorage. Using localStorage as our authentication mechanism also lets us do cool things like reactive authentication. Imagine a user with your web application loaded on two different tabs. If that user were to log in to your application on one tab, they would instantly be logged in on the other tab. Similarly, logging out of the application in one tab also logs the user out in the second tab. Meteor accomplishes this by listening for storage events and reactively updating the client's authentication state. These storage events also open the door for more exciting authentication functionality, like sharing authentication state across multiple applications.
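The cross-tab pattern described above can be sketched in a few lines of JavaScript. This is an illustrative stand-in (our own names, not Meteor's implementation): in a browser, a "storage" event fires in *other* tabs when localStorage changes, so a tiny in-memory substitute is used here to keep the sketch self-contained and runnable in Node.

```javascript
// In-memory stand-in for localStorage plus its cross-tab "storage" event.
const listeners = [];
const storage = {
  data: {},
  setItem(key, newValue) {
    const oldValue = key in this.data ? this.data[key] : null;
    this.data[key] = newValue;
    // Browser equivalent: the window fires a StorageEvent in other tabs.
    listeners.forEach((fn) => fn({ key, oldValue, newValue }));
  },
  removeItem(key) {
    const oldValue = key in this.data ? this.data[key] : null;
    delete this.data[key];
    listeners.forEach((fn) => fn({ key, oldValue, newValue: null }));
  },
};
// Browser equivalent: window.addEventListener('storage', fn)
function onStorage(fn) { listeners.push(fn); }

let loggedIn = false;
onStorage((event) => {
  if (event.key === 'Meteor.loginToken') {
    loggedIn = event.newValue !== null; // login/logout propagates across tabs
  }
});

storage.setItem('Meteor.loginToken', 'abc123'); // "another tab" logs in
console.log(loggedIn); // true
storage.removeItem('Meteor.loginToken');        // "another tab" logs out
console.log(loggedIn); // false
```

The key observation is that no cookie ever leaves the page: the token lives in localStorage and is only sent explicitly over the DDP connection, which is what defeats the CSWSH scenario above.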
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154432.2/warc/CC-MAIN-20210803061431-20210803091431-00160.warc.gz
|
CC-MAIN-2021-31
| 1,584
| 9
|
https://www.ibm.com/developerworks/community/forums/html/topic?id=5e43a708-0b5e-4e1e-a1f2-bed4da2bae53&ps=25
|
code
|
After creating components and applications, I created an environment and started it for the deployment process. The deployment completed successfully. Let's say I performed this in the DEV environment; since it was a successful deployment, I have taken a snapshot of it.
Now I want to use the same snapshot and deploy it in the QA environment!
The dilemma is: some properties, like endpoint URLs for services, will differ between DEV and QA. If I use the same snapshot created from DEV for QA, how can I differentiate properties like endpoints between DEV and QA?
Any help will be appreciated !
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125948738.65/warc/CC-MAIN-20180427002118-20180427022118-00010.warc.gz
|
CC-MAIN-2018-17
| 632
| 4
|
http://www.sevenforums.com/gadgets/430-window-vista-sidebar-windows-7-a-3.html
|
code
|
Excuse me if this sounds a little critical or sarcastic. It isn't meant as such - only an observation.
After two years of criticism of some minor features in Vista, particularly the sidebar and Windows Mail, sites such as this are now flooded with hints and hacks on how to recover those features in "7".
Are we humans naturally perverse? I do believe it confirms the view of many, including myself, that Vista was not the problem, but the nature of the users!
What is actually wrong with having the gadgets floating, instead of in an obtrusive window of their own? Why not use the taskbar, which is there anyway and can be auto-hidden?
What is wrong with installing Windows Live Mail (free), instead of those fiddles to get Windows Mail going again? It does everything Windows Mail does, plus a wee bit more.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164579146/warc/CC-MAIN-20131204134259-00051-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 815
| 5
|
https://ams.confex.com/ams/98Annual/webprogram/Paper334256.html
|
code
|
Tuesday, 9 January 2018: 11:45 AM
Room 7 (ACC) (Austin, Texas)
The weather is influenced by atmospheric processes operating on a wide range of spatial scales. These multiscale processes create spatial structures within gridded atmospheric data that may be linked with high-impact weather events. If statistical and machine learning models can learn how to represent these spatial structures, they may be able to extract useful predictive information from them. The field of deep learning has developed some novel approaches for creating abstract, multiscale representations of spatial and temporal data for improving the accuracy of prediction tasks. Many of these approaches require very large amounts of labeled data, but some unsupervised deep learning methods can learn spatial representations without large labeled datasets. Generative adversarial networks (GANs), an unsupervised deep learning method that optimizes a pair of neural networks using an adversarial training approach, have been able to produce realistic synthetic images. However, they have primarily been evaluated either subjectively or based on their accuracy in semi-supervised learning problems. GANs also have many possible parameter settings and network structures, but the choices for these parameters have not been rigorously justified. In this study, we statistically evaluate how well a large number of GAN configurations represent a variety of spatial covariance structures, in both spatial random fields with prescribed spatial covariances and numerical simulations of thunderstorms and their surrounding environment. With the spatial random fields, we found that GANs can represent exponential covariances with different length scales and can represent domains with spatially varying length scales. With the storm dataset, we found that GANs can produce synthetic examples that replicate the correlations among different variables, such as the radar reflectivity and the surface temperature and wind fields.
The choice of parameter settings and network structure does have a large impact on the quality of these representations.
- Indicates paper has been withdrawn from meeting
- Indicates an Award Winner
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178374391.90/warc/CC-MAIN-20210306035529-20210306065529-00242.warc.gz
|
CC-MAIN-2021-10
| 2,203
| 5
|
https://medium.com/wolox-driving-innovation/things-that-i-have-learnt-at-iot-projects-as-software-developer-9265b2c0018d?source=---------1---------------------
|
code
|
Things that I have learnt at IoT projects as software developer
As software devs, we are commonly used to writing code in an iterative and incremental way. This usually leads us to choose an agile methodology for software development.
As Brooks exposed in his popular No Silver Bullet paper, software is complex, conforming, invisible, and changeable. A challenge then arises, not only in development but also in project management, when dealing with components that are not very malleable or are severely influenced by the physical conditions where the application is running. These scenarios may arise in IoT projects.
Before continuing, I'll introduce myself: I'm a React Native developer at Wolox and recently worked on two IoT projects, one involving a camera to record amateur football matches and trim them via an app, and another involving a Bluetooth toothbrush which records users' sessions and lets them earn points for each minute spent brushing their teeth.
How do we start?
The football project had a very recommendable first step that helped us figure out whether the project is doable or not. Although this step seems a bit obvious, it is worth mentioning that here we need to elicit those requirements that will be hard constraints for us: those related to the hardware itself (e.g. power consumption, environmental characteristics, data processing & storage, size...) that limit the use of the hardware.
Once we have the goal and the hard constraints for our project, we need to define devices in order to start making a prototype (this may be an Arduino or a Raspberry Pi, for example). We need to make sure that those hard constraints are respected at every moment while developing our prototype, so testing will come in handy at this point. With this device definition, we'd also like to define which sensors and microchips will come into play at the hardware party, and the software infrastructure (e.g. storage, cloud processing) needed to put them to work.
On the other hand, projects like the Bluetooth brush one had the device predefined, so defining a device to make a prototype doesn't make much sense in these cases. Here it is preferable to deeply know how the device itself behaves at the very beginning of the project. (For example, we learned that we could enable/disable Bluetooth on our brushes only near the end of our project. Lack of this knowledge led us to put more effort into debugging and figuring out why our brushes stopped responding.)
In either case, we need to find these hard constraints; the earlier, the better. Keeping these constraints in mind will make software development a bit kinder. Moreover, since hardware is clearly less malleable than software, we need to know all requirements related to hardware: expected behavior, constraints, environment, and so on.
This knowledge might bring up a question: do we need an iterative, incremental process for our project? In my opinion and experience, this is not needed for hardware but is essential for software. It may be accomplished by separating software and hardware development, since by nature they have different development velocities: developing, debugging and testing hardware takes much more effort than developing software, since the process is affected by other factors such as power supply, environmental conditions (temperature, humidity...) and even unexpected interactions (such as interference).
Remember, avoid using an iterative methodology unless we can ensure cheap iteration costs.
Know how to communicate with the prototype
A thing that in my opinion becomes critical in IoT projects is the need to communicate with our prototypes as our software evolves. In order to start designing an architecture, we'd like to answer some of the following questions:
- Which will be the main communication channel? (WiFi, 4G, Bluetooth…?)
Although WiFi and 4G may be quite similar, they differ depending on our scenario. One key difference is the infrastructure needed to support one or the other. For example, in our football IoT project, each camera has its own 4G module in order to adapt it to many pitches.
- How many interactions will our prototype have?
- How often will our prototype interact?
- Which protocol will be needed to communicate with them?
In some cases, the communication problem with our prototypes may already be solved by a third-party SDK which manages all the low-level stuff (this was the case in our brushes project).
In other cases, since we need to ensure that we can develop a Walking Skeleton at an early stage, it's a common practice to do smoke tests so we can validate our channels quickly. For example, we found that our ISP blocked nearly all TCP requests to our Raspberries, so in our football project we ended up using the AMQP protocol.
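Such a channel smoke test can be as simple as the sketch below (our own names, not the project's actual code): before committing to a protocol, check that a plain TCP connection to the broker or endpoint can be opened at all. It does not speak AMQP itself.

```python
# Minimal channel "smoke test": can we even open a TCP connection?
import socket


def channel_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to (host, port) can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable, blocked by the ISP...
        return False
```

Running this from the prototype's network early on would have surfaced our ISP's TCP blocking in minutes rather than days.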
These answers will give an idea of how feasible it is to fake the IoT prototype. This is very important: often, we'd like to mock our prototype at the software level, since hardware and software have different development velocities.
If possible, avoid processes that hold the hardware too frequently, like developing directly on the device. Hardware, as opposed to software, is a limited resource; this means it is more difficult to share with other members of our team. A possible scenario is that devs and QA need the same prototype to test things as well. If we don't have a plan to proceed here, the prototype itself will likely become a bottleneck for our project.
Hardware has a different development velocity than software, and developing a prototype clearly requires much effort and knowledge. But by integrating hardware into UX, a world of possibilities opens up.
IoT projects aren't only about how we deal with hardware, but also about how we deal with collected data. IMHO, that's the heart of every IoT project.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583514879.30/warc/CC-MAIN-20181022071304-20181022092804-00469.warc.gz
|
CC-MAIN-2018-43
| 5,920
| 24
|
https://it-ebooks.dev/d/186-practical-foundations-for-programming-languages
|
code
|
Types are the central organizing principle of the theory of programming languages. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design - the absence of ill-defined programs - follows naturally. The purpose of this book is to explain this remark. A variety of programming language features are analyzed in the unifying framework of type theory. A language feature is defined by its statics, the rules governing the use of the feature in a program, and its dynamics, the rules defining how programs using this feature are to be executed. The concept of safety emerges as the coherence of the statics and the dynamics of a language.
In this way we establish a foundation for the study of programming languages. But why these particular methods? The main justification is provided by the book itself. The methods we use are both precise and intuitive, providing a uniform framework for explaining programming language concepts. Importantly, these methods scale to a wide range of programming language concepts, supporting rigorous analysis of their properties. Although it would require another book in itself to justify this assertion, these methods are also practical in that they are directly applicable to implementation and uniquely effective as a basis for mechanized reasoning. No other framework offers as much.
Book Download and Read Links
- Title: Practical Foundations for Programming Languages
- Authors: Robert Harper
- Publisher: Carnegie Mellon University
- Paperback: 590 pages
- Publication date: 2012
- License: CC BY-NC-ND
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652161.52/warc/CC-MAIN-20230605185809-20230605215809-00388.warc.gz
|
CC-MAIN-2023-23
| 1,728
| 9
|
https://adm.uff.br/types-of-data-program/
|
code
|
There are several types of data software, including ones designed specifically for big data. For instance, DataStax was created to help developers deploy large amounts of data at a quicker pace with full uptime. This software is used by more than half of the Fortune 500 global enterprises. Some of its customers include T-Mobile, Amazon, and The Home Depot. DataStax was awarded the NorthFace ScoreBoard Award, a recognition given to firms that provide top-notch service and results.
Looker is a powerful data software that enables you to link data from several sources and upload it to a central database. You can also use Looker to transform and clean data. It supports a variety of file formats, including CSV. This data software can help you analyze data, build models, and perform analytics. The software allows you to control who can access data and who can view it. You can even choose a centralized data warehouse that meets your specific requirements.
RapidMiner is yet another data software option that can handle a lot of data. Its rich graphical user interface and advanced analytics make it an excellent choice for big data projects. RapidMiner supports a range of languages, including Python and SQL, and can handle the various stages of an advanced analytics job. Users can easily drill into data and visualize the results using a business intelligence dashboard. It is also flexible enough to integrate with several applications. It can deal with both huge and small data sets.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100710.22/warc/CC-MAIN-20231208013411-20231208043411-00182.warc.gz
|
CC-MAIN-2023-50
| 1,629
| 3
|
http://www.wiibrew.org/w/index.php?title=Hugo-Wii&oldid=13949
|
code
|
This is a port of Hu-Go GX, a TurboGrafx-16 / PC Engine emulator originally coded by Zeograd, to the Nintendo Wii. The original GameCube porting code is from Softdev.
- Developer : Eke-Eke
- Accessories needed: GameCube controller, SD Adapter (optional), SD card
- Button to Return to Loader: Return to Loader from the main menu takes you back to the SD Loader or HB channel
- Display mode : 480p, 480i and 240p
- Installation for Zelda Chainloader: Install as usual
- Loaders useable: Twilight Hack, Front SD ELF Loader, WiiHL, Homebrew Channel
- Software type: Emulation
- Create the following directories in the root directory of the SD card:
- /hugo/roms - Store the ROM files here.
- /hugo/saves - SRAM and save state data will be stored here.
- Start the ELF or the DOL with one of the above mentioned loaders.
When loading .BIN or .ISO image files, a System Card ROM image must be provided. The file name is syscard.pce and the location is the /hugo directory.
[PCE]
- source code cleanup
- fixed multiplayer support (max. 4 players)
- added preliminary Super CD-ROM support: data only (.ISO & .BIN image files), no CDDA track support

[NGC/WII]
- fixed progressive mode support, now automatically detected

[Wii only]
- added automatic TV mode detection (from SYSCONF), no more PAL60 version needed
- added option to return to Wii System Menu
- fixed "TP reload" option: now compatible with HB channel
- removed SD-Gekko support (Wii slot becomes default slot)
- added Wii SD slot support for WRAM files
- added Wiimote, Nunchuk & Classic controllers support through libwiiuse (see User Manual for default keys)

- added Wii mode support (including front SD rom loading with LFN, TP reload, ...)
- added 4.7GB DVD support for chip-modded WII (GC mode only)
- removed MPAL TV mode, added EURGB60 TV mode support: fixes display problems for Wii users (GC & Wii mode)
- added original rendering mode support (240i), like on real hardware
- added 480p (progressive) rendering mode support (not supported by the PAL60 version, use the other one!)
- added Console Reboot option in main menu (System Reboot)
- WRAM files can now be saved/loaded to/from SDCARD: located in /hugo/saves from the root of your SDCARD (no Wii front SD support)
- changed initial ROMS directory for SDCARD users: now looking for /hugo/roms from the root of your SDCARD
- fixed broken MCARD support
- modified controls when going into the rom selection menu (DVD or SDCARD), like other current emulators:
  - use B button to go up one directory
  - use Z button to quit the file selection menu
  - use L/R triggers to go down/up one full page
  - use Left/Right buttons or Analog stick to scroll the selected entry's filename when it can't be fully displayed
- various menu rearrangement, minor bugfixes & source code cleanup
There are two versions included, one for Wii and other for GameCube. Homebrew Channel compatible files are also provided:
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357929.4/warc/CC-MAIN-20210226145416-20210226175416-00034.warc.gz
|
CC-MAIN-2021-10
| 2,894
| 18
|
https://community.firecore.com/t/streaming-from-mac-to-apple-tv/17282
|
code
|
Streaming works great, but the only problem I have is that the network does not disconnect between ATV and the Mac.
So my mac won’t sleep after I use infuse and sleep the ATV.
When I check terminal and type: pmset -g assertions
It shows “network client active :1”
and the mac won’t sleep. I have to shut it down.
How can I disconnect this?
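For reference, the check can be automated. Below is a minimal Python sketch; the assertion name `NetworkClientActive` and the output format are assumptions based on typical macOS `pmset -g assertions` output, not taken from this thread:

```python
import re
import subprocess

def parse_assertion_counts(text):
    """Parse the summary section of `pmset -g assertions` output
    into an {assertion_name: count} dict."""
    counts = {}
    for line in text.splitlines():
        # Summary lines look like "   NetworkClientActive        1"
        m = re.match(r"\s*([A-Za-z]+)\s+(\d+)\s*$", line)
        if m:
            counts[m.group(1)] = int(m.group(2))
    return counts

def network_client_active():
    """True if something holds a NetworkClientActive assertion,
    which prevents sleep. macOS only."""
    out = subprocess.run(["pmset", "-g", "assertions"],
                         capture_output=True, text=True).stdout
    return parse_assertion_counts(out).get("NetworkClientActive", 0) > 0
```

The detailed listing below the summary names the process holding each assertion; quitting that process (or closing the network session from the Mac side) typically releases it and lets the Mac sleep again.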
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500017.27/warc/CC-MAIN-20230202101933-20230202131933-00037.warc.gz
|
CC-MAIN-2023-06
| 347
| 6
|
https://www.lugera.blog/gerardkoolen/2019/06/05/key-findings-about-labour-market-enablers/
|
code
|
As the WEC Economic report says, all staffing agencies together, in the entire world, helped 56 million people get a better job. The moment STAA – Sales and Talent Acquisition Application – is available all over the world, we will reach at least ten times that: 560 million people. Because STAA is incredibly fast, easy to use and, most importantly, much cheaper than traditional agency fees. Get STAA now before it is too late. STAA is an incredibly cool recruitment app made by recruiters for recruiters. How does it get any better than this?
You can find out more about STAA at https://staa.agency/
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347411862.59/warc/CC-MAIN-20200531053947-20200531083947-00211.warc.gz
|
CC-MAIN-2020-24
| 602
| 2
|
https://github.com/k-takata/mp3infp
|
code
|
mp3infp Unicode build
This is a source code repository of mp3infp/u. mp3infp/u is a modified version of mp3infp.
New features:
* Supports Unicode file names and tags.
* Supports Windows 7/8/10.
* Supports x64 Windows. (Main work is done by Rem.)
Binary packages are available at the releases page: https://github.com/k-takata/mp3infp/releases
(Older versions might still be available at the downloads page: https://github.com/k-takata/mp3infp/downloads )
LICENSE: You may distribute under the terms of the GNU Lesser General Public License.
References:
mp3infp 2.54a (The original version by T-Matsuo): http://win32lab.com/fsw/mp3infp/
mp3infp 2.54i alpha1 (The latest version by Rem): http://win32lab.com/bbs2/index.cgi?no=10236&reno=10231&oya=10190&mode=msgview (404)
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039746061.83/warc/CC-MAIN-20181119171420-20181119193420-00539.warc.gz
|
CC-MAIN-2018-47
| 1,095
| 8
|
https://docs.extendedmatrix.org/en/1.2.0/
|
code
|
Welcome to Extended Matrix documentation!
Extended Matrix is a formal language with which to keep track of virtual reconstruction processes. It is intended to be used by archaeologists and heritage specialists to document their scientific activities in a robust way. The EM makes it possible to record the sources used and the processes of analysis and synthesis that have led from scientific evidence to virtual reconstruction. It organises the 3D archaeological record so that the 3D modelling steps are smoother, more transparent and scientifically complete. It has been developed by E. Demetrescu at CNR-ISPC (Rome, formerly CNR-ITABC). The EM is at version 1.2 (a 1.3 version is currently under development).
In a wider perspective, and thanks to its abstract approach, the Extended Matrix can be used as a human-readable metaphor to ingest and present liquid semantic data. In other words, the nodes that compose the paradata section can be used to track and annotate, in a simple but effective way, several data-provenance paths, going beyond the traditional reconstruction process to which it was first applied.
This project is under active development.
- A stratigraphic approach
- Nodes of the EM
- Properties of the EM
- Time management in the EM
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817780.88/warc/CC-MAIN-20240421132819-20240421162819-00802.warc.gz
|
CC-MAIN-2024-18
| 1,220
| 8
|
https://jon-keatley.com/skill-Gitlab.html
|
code
|
GitLab is my favourite project management system. At Janus I was responsible for selecting a project management system to replace Jira. I selected GitLab because it is flexible enough to support the very different ways that Janus manages projects. Once it was implemented, I started adopting GitLab's continuous integration, which is now used to automate the release builds, bump the version number, and append the change log. The next step is to integrate automated testing; I am currently looking at Cucumber for this. GitLab's RESTful API has also allowed me to build a number of tools around GitLab to provide additional features and reporting.
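The reporting tools mentioned above are not shown on the page. As a hedged sketch of the kind of tooling GitLab's v4 REST API makes possible — the instance URL, token, and issue-listing task are all my own placeholders, not the actual tools described:

```python
import json
import urllib.parse
import urllib.request

GITLAB_URL = "https://gitlab.example.com"  # placeholder instance
TOKEN = "glpat-placeholder-token"          # placeholder personal access token

def api_url(path, **params):
    """Build a GitLab v4 API URL with an optional query string."""
    query = ("?" + urllib.parse.urlencode(params)) if params else ""
    return f"{GITLAB_URL}/api/v4/{path}{query}"

def list_open_issues(project_path):
    """Fetch open issues for a project via the REST API.
    The project path must be URL-encoded (group%2Frepo)."""
    project_id = urllib.parse.quote(project_path, safe="")
    url = api_url(f"projects/{project_id}/issues", state="opened")
    req = urllib.request.Request(url, headers={"PRIVATE-TOKEN": TOKEN})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Authentication uses the standard `PRIVATE-TOKEN` header with a personal access token; from here, reporting is just aggregation over the returned JSON.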
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099514.72/warc/CC-MAIN-20231128115347-20231128145347-00461.warc.gz
|
CC-MAIN-2023-50
| 643
| 1
|
http://www.expertsmind.com/questions/polymoethism-301138854.aspx
|
code
|
Project Description: We are seeking software engineers to create a plug-in for Adobe after Effects (CS4 and above) that allows 3D extrusions for a layer. We are seeking somethin
This assignment models a simplified delivery company. It is composed of the following departments: receiving which contains a list of packages to be delivered, shipping which ship
What is Constructors? Explain with an example? A constructor forms a new instance of the class. It initializes all the variables and does any work essential to prepare the clas
Determine Why java is robust Java is very robust o Both, vs. unintentional errors and vs. malicious code such as viruses. o Java has slightly worse performance as
This project is based on the teams example of chapter 1. Instead of teams, you will consider employees working in a department in a company. (Departments and employees are analog
Explain the init(), start(), stop(), and destroy() methods? The init() method is called exactly once in an applet's life, when the applet is first loaded. It's generally used to
1. The spring container searches the bean's definition from the XML file and instantiates the bean. 2. Using the dependency injection, spring populates all of the properties as
What are synchronized methods and synchronized statements? Synchronized methods are methods that are used to control access to an object. A thread only implements a synchronize
Does RMI-IIOP support dynamic downloading of classes? Ans) No, RMI-IIOP doesn't support dynamic downloading of the classes as it is complete with CORBA in DII (Dynamic Interface
What 'System.out.println()' signifies? 'System' is a predefined class. System class gives access to the system. 'out' is the output stream. 'println' is printin
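The Java snippets above are truncated teasers; the synchronized-method idea they mention — every method of an object acquires the same lock, so threads cannot interleave inside it — can be illustrated cross-language. A Python sketch of my own (not from the page), using an explicit per-instance lock where Java uses `synchronized`:

```python
import threading

class Counter:
    """Python analogue of a Java class with synchronized methods:
    every method body runs while holding the same per-instance lock."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:  # like entering a synchronized method
            self._value += 1

    @property
    def value(self):
        with self._lock:
            return self._value

def hammer(counter, n):
    """Worker: increment the shared counter n times."""
    for _ in range(n):
        counter.increment()
```

Without the lock, concurrent `+= 1` operations could lose updates; with it, four threads doing 10,000 increments each always end at exactly 40,000.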
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989115.2/warc/CC-MAIN-20210510064318-20210510094318-00253.warc.gz
|
CC-MAIN-2021-21
| 2,254
| 17
|
http://rpg.stackexchange.com/questions/12535/do-players-get-full-xp-from-encounter-if-enemies-break-and-run
|
code
|
First, I don't know of any specific morale rules for NPCs/monsters, but I would say, based on modern army tactics, that a unit has lost its effectiveness at 33% losses, and at 50% it's a rout. So if the NPCs break and run, do the PCs get full XP for the encounter? Are there any specific rules?
You get the full XP for the encounter. D&D's XP is awarded for "challenges passed." Thus if you cause your enemy to retreat, you have passed the challenge of the encounter and should get the full XP for it.
|
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398451648.66/warc/CC-MAIN-20151124205411-00200-ip-10-71-132-137.ec2.internal.warc.gz
|
CC-MAIN-2015-48
| 496
| 2
|
https://docs.microsoft.com/en-us/previous-versions/office/developer/office-2007/ms423975%28v%3Doffice.12%29
|
code
|
Project Server Integration
This content is outdated and is no longer being maintained. It is provided as a courtesy for individuals who are still using these technologies. This page may contain URLs that were valid when originally published, but now link to sites or pages that no longer exist.
This section includes articles about integrating Microsoft Office Project Server 2007 with Windows SharePoint Services 3.0, Microsoft Office SharePoint Server 2007, and line of business applications.
For general information about Project Server development, see Project Server 2007: Getting Started with a New Platform for Developers.
In This Section
Project Server Web Parts Learn the basic concepts of Web Parts, how to use the default Project Server Web Parts, and how to develop a custom Project Server Web Part assembly.
Integration with Windows SharePoint Services Find out how to extend the project workspace template, create a workspace and link it to a project, and how to use the Object Link Provider to link a project task to an issue or other item in a SharePoint list.
ERP Connector Solution Starter for Project Server 2007 Learn how to integrate Project Server with enterprise resource planning applications. Synchronize human resources data, create projects and tasks based on finance system orders, and export work actuals to a financial accounting system.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251783621.89/warc/CC-MAIN-20200129010251-20200129040251-00519.warc.gz
|
CC-MAIN-2020-05
| 1,367
| 8
|
http://vocabulary.odm2.org/specimentype/orientedCore/
|
code
|
ODM2 Controlled Vocabularies
Core that can be positioned on the surface in the same way that it was arranged in the borehole before extraction.
View in SKOS
Download Term (SKOS)
Download Term (CSV)
SESAR Sample type CV. See http://www.geosamples.org/help/vocabularies#object.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039476006.77/warc/CC-MAIN-20210420152755-20210420182755-00327.warc.gz
|
CC-MAIN-2021-17
| 275
| 6
|
https://blog.mozilla.org/webdev/2014/11/13/webdev-extravaganza-november-2014/
|
code
|
Once a month, web developers from across Mozilla get together to work on a sequel to The Pragmatic Programmer titled The Unhinged Technical Architect. While we argue over the use of oxford commas, we find time to talk about the work that we’ve shipped, share the libraries we’re working on, meet new folks, and talk about whatever else is on our minds. It’s the Webdev Extravaganza! The meeting is open to the public; you should stop by!
You can check out the wiki page that we use to organize the meeting, view a recording of the meeting in Air Mozilla, or attempt to decipher the aimless scrawls that are the meeting notes. Or just read on for a summary!
The shipping celebration is for anything we finished and deployed in the past month, whether it be a brand new site, an upgrade to an existing one, or even a release of a library.
New Mozilla.org Pages and The Open Standard
craigcook stopped by to share a bunch of new things that launched from the Web Productions team, including a new mozilla.org homepage and a new contribute page. He also mentioned The Open Standard, which was launched with support from the Web Productions team.
Sites using contribute.json
We heard from peterbe about a new listing of sites with a contribute.json file. The listing pulls info hourly from the contribute.json files for each site in the list. Pull requests are welcome to add more Mozilla sites to the list.
Humble Mozilla Bundle and Voxatron Snippet
To promote the bundle, jgruen and other Mozillians worked with Joseph White to make a minimal port of Voxatron for use in an about:home snippet. All told, the snippet was about 200 KB and still managed to cram in a full 3D voxel engine that Firefox users were able to play with on their home page.
Here we talk about libraries we’re maintaining and what, if anything, we need help with for them. Except this week there was nothing shared. Never mind!
New Hires / Interns / Volunteers / Contributors
Here we introduce any newcomers to the Webdev group, including new employees, interns, volunteers, or any other form of contributor.
careers.mozilla.org and snippets.mozilla.com
The Roundtable is the home for discussions that don’t fit anywhere else.
Tabzilla Update Bar
mythmon wanted to let people know about a new feature in Tabzilla. You can now trigger a feature called the Update Bar, which notifies users on old versions of Firefox that they should update their browser. pmac also called out the Translation Bar, which offers localized versions of the current page to users viewing your site in a language that doesn’t match their preferred locale.
Workweek at Bernie’s
I also gave a reminder about the Webdev meetup happening at the Portland Coincidental Workweek, an event now known as the Workweek at Bernie’s. Follow that link for more details, and if you’re going to be at the workweek and want to attend, contact me to RSVP.
After skimming the back cover of The Pragmatic Programmer, we came up with an outline describing how our book can teach you how to:
- Fight software;
- Not just duplicate knowledge, but infinitely copy it for massive gains;
- Write code so solid and enduring that it will run until AWS randomly kills your box;
- Encourage programming by fate;
- Nuke-proof your code using aspect-oriented programming and a few pounds of refrigerator-grade steel;
- Capture real, living requirements for sale as folk medicine in foreign countries;
- Test ruthlessly and physically punish any code that misbehaves;
- Delight your users with micro-transactions;
- Build teams of slouching young programmers wearing hoodies and jeans to attract investors; and
- Automate yourself out of a job.
If you’re interested in web development at Mozilla, or want to attend next month’s Extravaganza, subscribe to the firstname.lastname@example.org mailing list to be notified of the next meeting, and maybe send a message introducing yourself. We’d love to meet you!
See you next month!
|
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475833.51/warc/CC-MAIN-20240302152131-20240302182131-00759.warc.gz
|
CC-MAIN-2024-10
| 3,974
| 31
|
http://hackage.haskell.org/package/factory-0.2.1.0/docs/Factory-Data-Interval.html
|
code
|
- Dr. Alistair Ward
- Describes a bounded set of, typically integral, quantities.
- Operations have been defined on the list of consecutive quantities delimited by these endpoints.
- The point is that if the list is composed from consecutive quantities, the intermediate values can be inferred, rather than physically represented.
- The API was driven top-down by its callers' requirements, rather than being a bottom-up attempt to provide a complete interface; consequently there may be omissions from the viewpoint of future callers.
- Though similar to the mathematical concept of an interval, the latter technically relates to real numbers; http://en.wikipedia.org/wiki/Interval_%28mathematics%29.
- No account has been made for semi-closed or open intervals.
- type Interval endPoint = (endPoint, endPoint)
- closedUnitInterval :: Num n => Interval n
- mkBounded :: Bounded endPoint => Interval endPoint
- elem' :: Ord endPoint => endPoint -> Interval endPoint -> Bool
- normalise :: Ord endPoint => Interval endPoint -> Interval endPoint
- product' :: (Integral i, Show i) => Ratio i -> i -> Interval i -> i
- shift :: Num endPoint => endPoint -> Interval endPoint -> Interval endPoint
- splitAt' :: (Enum endPoint, Num endPoint, Ord endPoint, Show endPoint) => endPoint -> Interval endPoint -> (Interval endPoint, Interval endPoint)
- toList :: Enum endPoint => Interval endPoint -> [endPoint]
- getMinBound :: Interval endPoint -> endPoint
- getMaxBound :: Interval endPoint -> endPoint
- precisely :: endPoint -> Interval endPoint
- isReversed :: Ord endPoint => Interval endPoint -> Bool
Defines a closed (inclusive) interval of consecutive values.
Construct the unsigned closed unit-interval; http://en.wikipedia.org/wiki/Unit_interval.
True if the specified value is within the inclusive bounds of the interval.
Swap the end-points where they were originally reversed, but otherwise do nothing.
product' :: (Integral i, Show i) => Ratio i -> i -> Interval i -> i
- Ratio i: the ratio at which to bisect the interval.
- i: for efficiency, the interval will not be bisected when its length has been reduced to this value.
- Interval i: the interval to process.
- Returns the resulting product.
- Multiplies the consecutive sequence of integers within the interval. Since the result can be large, divideAndConquer is used to form operands of a similar order of magnitude, thus improving the efficiency of the big-number multiplication.

shift :: Num endPoint => endPoint -> Interval endPoint -> Interval endPoint
- endPoint: the magnitude of the required shift.
- Interval endPoint: the interval to be shifted.
- Shifts both end-points of the interval by the specified amount.

splitAt' bisects the interval at the specified end-point, which should be between the two existing end-points.
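The divide-and-conquer strategy behind product' is easy to sketch outside Haskell. Below is a Python illustration of my own (not the package's code): an interval is a pair of inclusive end-points, and intervals below a minimum length fall back to a direct product:

```python
from math import prod

def interval_product(lo, hi, min_length=8):
    """Product of the consecutive integers lo..hi (inclusive).
    Large intervals are bisected so the two recursive results have a
    similar order of magnitude, which helps big-number multiplication."""
    if hi - lo + 1 <= min_length:
        # Small interval: multiply directly.
        return prod(range(lo, hi + 1))
    mid = (lo + hi) // 2
    return (interval_product(lo, mid, min_length)
            * interval_product(mid + 1, hi, min_length))

def factorial(n):
    """n! as the product of the interval [1, n]."""
    return interval_product(1, n)
```

For very large n this balanced splitting outperforms a left-to-right running product, because it avoids repeatedly multiplying a huge accumulator by a tiny factor.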
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122992.88/warc/CC-MAIN-20170423031202-00100-ip-10-145-167-34.ec2.internal.warc.gz
|
CC-MAIN-2017-17
| 2,637
| 40
|
https://blender.stackexchange.com/questions/93921/how-to-add-trailing-light-effect-to-finger-tips?noredirect=1
|
code
|
You can achieve this by using particles to trace out the path of the motion and converting each particle system into a Curve object. The curves can then be used to draw the trails and animated using keyframes to match the motion.
Start by adding particle systems to the mesh. It's important to have a separate particle system for each 'trail'. The order the particles are emitted will control the order of the points in that trail.
There are two ways of achieving this.
Create a mesh with a single vertex and add a particle system. This can then be duplicated, positioned, and parented to the moving mesh so that it follows the motion. You can create as many of these 'trace' objects as are required.
Emit the particles directly from the mesh being traced. It's important that each particle system emits from a single vertex - so use Vertex Groups to limit the emission for each particle system, adding a particle system for each.
Ensure the particle systems are set to emit the particles with no initial velocity and that the relevant Field Weights are set to zero so they are not affected by any forces present. This should result in the particles 'hanging' at the point they are emitted.
Once you have the emitted particles, open a Text Editor window and paste the following python code :
import bpy

def particles_to_path(objName, tracks=1, particleSystem=0, curveResolution=0, bevelDepth=0.1):
    object = bpy.context.scene.objects[objName]
    particles = object.particle_systems[particleSystem].particles
    trackNo = 0
    trackPoint = 0
    curves = []
    splines = []
    for p in particles:
        if trackPoint == 0:
            # Create new track
            curve = bpy.data.curves.new('particlePath', type='CURVE')
            curve.dimensions = '3D'
            curve.resolution_u = curveResolution
            curve.bevel_depth = bevelDepth
            spline = curve.splines.new('NURBS')
            # set first point
            spline.points[0].co = (p.location[0], p.location[1], p.location[2], 1)
            curves.append(curve)
            splines.append(spline)
        else:
            # Add point
            splines[trackNo].points.add(1)
            splines[trackNo].points[trackPoint].co = (p.location[0], p.location[1], p.location[2], 1)
        trackNo += 1
        if trackNo >= tracks:
            trackNo = 0
            trackPoint += 1
    # deselect all
    for obj in bpy.data.objects:
        obj.select = False
    # Create an object for each curve, link it to the scene and select it
    for curve in curves:
        curveObject = bpy.data.objects.new('particlePath', curve)
        bpy.context.scene.objects.link(curveObject)
        curveObject.select = True

particles_to_path('Trace')
The code defines a function (particles_to_path) which will use the particles emitted by a particle system on an object to create a Curve object. Note the last line in the script which invokes the function, specifying the object 'Trace'. This will use the first particle system on the 'Trace' object and will create a Curve. To invoke the function on multiple particle systems, simply duplicate the last line and change each as appropriate.
For example, for multiple objects named 'Trace', 'Trace.001', 'Trace.002', 'Trace.003', etc. use the following lines :
particles_to_path('Trace')
particles_to_path('Trace.001')
particles_to_path('Trace.002')
particles_to_path('Trace.003')
If you have multiple particle systems on the same object, each emitting from a single vertex, then you can use the 'particleSystem' parameter to indicate which particle system to use (0 = the first one, 1 = the second one, 2 = the third one, etc.) :
particles_to_path('Trace', particleSystem=0)
particles_to_path('Trace', particleSystem=1)
particles_to_path('Trace', particleSystem=2)
Once you have generated the curves, simply adjust the curve properties to generate the required trails. For example, set Resolution and Fill in the Shape settings to generate a curved shape...
...and use the Geometry Bevel settings to adjust the 'trail' - setting Start and End to position it along the curve :
Note that the Taper Object can be used to specify an additional curve object which defines the shape of the bevel along the curve (with Map Taper enabled to taper the bevel along the Start-End interval rather than the whole curve).
Carefully keyframing the Start and End will allow the trail to be animating. Ensure to set the Interpolation of the animation curves to Linear so it matches the motion of the particle emitting mesh.
For the material, you can use a combination of Emission and Scatter on the Volume. This will allow the trails to be partially opaque while still emitting light. Mixing in a Transparent shader based on the light path allows them to be hidden from all but the camera (so they don't reflect off of the surfaces). Including variation based on the Random object info provides automatic variation for each trail (you could assign individual materials manually in you prefer more control).
This can produce the following result :
Blend file :
Applying this to the example 'dancer' mesh can produce the following effect :
EDIT: I have since packaged the above script as an add-on as part of this answer, available from here. This makes it much simpler to run and avoids the need to create and edit a custom script.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488544264.91/warc/CC-MAIN-20210623225535-20210624015535-00181.warc.gz
|
CC-MAIN-2021-25
| 4,466
| 45
|
https://fanaticallabs.zendesk.com/hc/en-us/articles/360022146071-Setting-up-a-proxy
|
code
|
MailChimp will often ping Sugar with activities such as subscribes, bounces, and unsubscribes. If your SugarCRM instance is not publicly available outside your network this will cause MailChimp to not be able to keep your data up to date in Sugar.
To fix that, and keep your data as locked down as possible so that you don't have to completely open up SugarCRM to the world, we have created a nice proxy script. There are two simple steps to make it work:
1. Take this code and make it accessible from outside your network.
**NOTE: Be sure to change the $sugar_url variable in this script**
2. Add this script as a new webhook for each list that syncs to your CRM. Instruction on how to do that are here.
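The linked script (apparently PHP, given the `$sugar_url` variable) is not reproduced here. The same idea — receive MailChimp's webhook POST on a public host and replay it to the internal Sugar URL — can be sketched in Python; everything below, including the `SUGAR_URL` placeholder, is illustrative rather than the actual Fanatical Labs script:

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Placeholder: change this to your internal SugarCRM webhook endpoint.
SUGAR_URL = "http://sugar.internal.example/mailchimp_webhook.php"

def build_forward_request(sugar_url, body, content_type):
    """Wrap the incoming webhook body in a POST aimed at Sugar."""
    return urllib.request.Request(
        sugar_url, data=body,
        headers={"Content-Type": content_type}, method="POST")

class WebhookProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        req = build_forward_request(
            SUGAR_URL, body,
            self.headers.get("Content-Type",
                             "application/x-www-form-urlencoded"))
        urllib.request.urlopen(req)  # replay to the internal CRM
        self.send_response(200)      # MailChimp only needs a 200 back
        self.end_headers()

# To run the proxy on the publicly reachable host:
#   HTTPServer(("", 8080), WebhookProxy).serve_forever()
```

The proxy never exposes Sugar itself; only this one forwarding endpoint needs to be reachable from MailChimp.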
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314353.10/warc/CC-MAIN-20190818231019-20190819013019-00338.warc.gz
|
CC-MAIN-2019-35
| 704
| 5
|
https://coderanch.com/t/529868/frameworks/GWT-works-Jetty-doesn
|
code
|
Hello all. I am running Jetty 7.2.0 on Ubuntu and have a webapp which makes use of Jetty's ContinuationFilter and ProxyServlet. Part of my webapp (built using GWT - Google Web Toolkit v2.2.0) enables access to a Bugzilla installation running on Apache (on the localhost). Accessing Bugzilla through http://localhost/bugzilla runs just fine, i.e. Apache and Bugzilla are playing nicely together. Running the webapp in GWT's development mode works just fine.
However, the link to Bugzilla fails with a 404 not found error when I try and access it through my webapp at http://localhost:8080/dash ('dash' is the name of the webapp folder); so there must be some problem with my Jetty setup. The link to Bugzilla is contained in this fragment of GWT code:
Bugzilla's welcome page is index.cgi, and this is found in /var/www/bugzilla on my Ubuntu machine. As I say, accessing http://localhost/bugzilla returns this welcome page and I can use Bugzilla just fine if I go direct rather than via my webapp.
Apache's log file indicates that requests from Jetty don't make it to Apache.
The Jetty request log confirms that the URL requested is not found:
This leads me to believe that Jetty fails to proxy the request to localhost.
I would be really grateful if anyone can help on this. I've spent days on it and posted to both the GWT group on Google groups and on stackoverflow.com, but no success yet. Thank you.
My webapp's web.xml reads as follows, are there any other relevant Jetty config files I need to dig into?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572517.50/warc/CC-MAIN-20190916100041-20190916122041-00297.warc.gz
|
CC-MAIN-2019-39
| 1,677
| 10
|
http://boards.straightdope.com/sdmb/showthread.php?s=ea0d49688c453eb233ecf06ceb28dd9a&p=15107666
|
code
|
Originally Posted by Dewey Finn
Benedict Arnold is one of the most famous people from Connecticut but I doubt he'll ever be added to the state HOF. So to a certain extent, popularity is an issue.
True, but he's a special case... after all, he did try to upend the revolution, and he didn't live in the US after the Revolution was over. He ended up living in England, so I am guessing he lost his claim to the Conn. HOF.
Folks from Missouri, don't sweat it. I doubt he will drag in the masses (or his ditto-heads) Besides, you can skip this HOF and go to the Bowling HOF instead (isn't this in St. Louis?)
Now THAT"S a good time had by all. Earl Anthony, anyone?
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706153698/warc/CC-MAIN-20130516120913-00091-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 660
| 5
|
https://www.online.hu/company/technology
|
code
|
MoonSol and DigiTie are based on a 3-tier architecture in which the following functional layers are defined:
User and business data are stored exclusively in the central relational database system. The currently supported RDBMSs are Oracle 12c and PostgreSQL 9.6. The supported database management platforms have been selected in line with the following requirements. The RDBMS
The development methodology applied by Online Business Technologies makes it possible that different database platforms could be supported through making changes only in the technology layer but without modifying the business logic. Due to this feature, our solutions can be easily adapted to further database management software e.g. DB2, MS-SQL.
All business logic runs on the application server. This layer is responsible for orchestrating the display layer: it defines the layout and content of screens, manages screen controls for data entry and presentation, and performs checks associated with screens. A web server using a Java servlet maintains the connection with the client over the HTTPS protocol.
The operation of the business logic is unified, no matter if the client is a Java Webstart based or a browser-based solution (see later).
The display layer is a graphical user interface implemented under the thin client principle. This means that the client application does not store data on the client workstation and does not contain pre-installed software components. The client only manages data entry and the display of data and handles purely screen elements.
The display layer can be implemented on two different platforms:
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100484.76/warc/CC-MAIN-20231203030948-20231203060948-00764.warc.gz
|
CC-MAIN-2023-50
| 1,608
| 7
|
http://www.westlasawtelle.org/get-involved/
|
code
|
Want to get involved? Here are some great ways to start:
- Sign up here to automatically receive committee meeting notifications, agendas, minutes and other important community news
- Contact us via email at email@example.com
- Attend a committee meeting (see events)
- Attend a board meeting (see events)
- Explore the web site, then fill out the form below and let us know what interests you.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886112682.87/warc/CC-MAIN-20170822201124-20170822221124-00239.warc.gz
|
CC-MAIN-2017-34
| 393
| 6
|
http://www.innovations-report.com/html/reports/earth-sciences/report-110113.html
|
code
|
Palaeontologists have discovered fossil remains in Scandinavia of parrots dating back 55 million years. Reported today in the current issue of the journal Palaeontology, the fossils indicate that parrots once flew wild over what is now Norway and Denmark.
Parrots today live only in the tropics and southern hemisphere, but this new research, which was supported by the Irish Research Council for Science, Engineering and Technology (IRCSET) and University College Dublin (UCD), suggests that they first evolved in the North, much earlier than had been thought.
The fossil parrot was discovered on the Isle of Mors in the northwest of Denmark – far from where you’d normally expect to find a parrot. It’s a new species, officially named 'Mopsitta tanta'. However, already its nick-name is the ‘Danish Blue Parrot’, a term derived from a famous comedy sketch about a 'Norwegian Blue Parrot' in the 1970s BBC television programme ‘Monty Python’. (video link to sketch below)
The Scandinavian connection makes links to Monty Python’s notoriously demised bird irresistible, but the parallels go further. The famous sketch revolves around establishing that a bird purchased by John Cleese is a dead parrot, and in dealing with these fossils, palaeontologists were faced with the same problem.
As Dr David Waterhouse, lead author of the paper, explains: “Obviously, we are dealing with a bird that is bereft of life, but the tricky bit is establishing that it was a parrot. As with many fragile bird fossils, it is a wonder that anything remains at all, and all that remains of this early Danish parrot is a single upper wing bone (humerus). But, this small bone contains characteristic features that show that it is clearly from a member of the parrot family, about the size of a Yellow-crested Cockatoo.”
Dr David Waterhouse was funded by a UCD postgraduate scholarship from 2002 to 2006. He is currently Assistant Curator of Natural History at Norfolk Museums Service. Dr Bent Lindow was an IRCSET ‘Basic Research Grant’ scholar at UCD and the University of Copenhagen from 2004 to 2007. He is currently postdoctoral researcher in palaeontology at the Natural History Museum of Denmark in Copenhagen.
At around 55 million years old, this is very much an ex-parrot. Indeed, Mopsitta represents the oldest and most northerly convincing remains of a parrot ever to have been discovered.
Waterhouse continues: “It isn’t as unbelievable as you might at first think that a parrot was found so far north. When Mopsitta was alive, most of Northern Europe was experiencing a warm period, with a large shallow tropical lagoon covering much of Germany, South East England and Denmark. We have to remember that this was only 10 million years after the dinosaurs were wiped out, and some strange things were happening with animal life all over the planet.”
“No Southern Hemisphere fossil parrot has been found older than about 15 million years old, so this new evidence suggests that parrots evolved right here in the Northern Hemisphere before diversifying further South in the tropics later on.”
So was Danish Mopsitta “pinin’ for the fjords”? “It’s a lovely image,” says Waterhouse, “but we can say with certainty that it was not. This parrot shuffled off its mortal coil around 55 million years ago, but the fjords of Norway were formed during the last ice age and are less than a million years old.”To view the famous 'Monty Python' sketch, please visit:
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886104681.22/warc/CC-MAIN-20170818140908-20170818160908-00119.warc.gz
|
CC-MAIN-2017-34
| 6,543
| 30
|
http://web.phys.ksu.edu/altpathway/lesson1.html
|
code
|
Our goal in this lesson is to explore what Newton's first law tells us about the world. Follow the links below. If you're using the online survey system, then the forms will open automatically. If you're not using the online forms then you can view/print the questions here.
After you've completed the above activities please read the material here.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699899882/warc/CC-MAIN-20130516102459-00027-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 353
| 2
|
https://questions.x-plane.com/3717/compatible-mac-os-sierra
|
code
|
This site is being deprecated.
Please see the official X‑Plane Support page for help.
Please make sure you have the latest version of X-Plane 9 (updater located here). If you do, and it still doesn't work, unfortunately we have no plans to update X-Plane 9 any more, so it may not be possible. The new OS may just be too new.
This site is no longer being actively maintained.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100229.44/warc/CC-MAIN-20231130161920-20231130191920-00475.warc.gz
|
CC-MAIN-2023-50
| 377
| 4
|
https://daemonv.livejournal.com/161291.html
|
code
|
And I was still in the Lima airport food court, and it was still only 8:30am.
Watch dmvflickr for a couple of pictures. The shot I haven't taken yet is the "road that was just steps up" that my sister subjected me to when I arrived. She found a lovely hostel, but it was at one of the high points of the city (12000 ft). And I had felt fine, and still feel mostly fine, but this altitude exhaustion is bullshit. Your body is fine, it's just that your heart quickly starts pounding, and takes forever to settle down. Fitness level means nothing.
So in 2 hours we start our camping/hiking expedition. Feel for me on Monday, when we hit the morning uphill climb from 3400m to 4200m.
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585265.67/warc/CC-MAIN-20211019105138-20211019135138-00142.warc.gz
|
CC-MAIN-2021-43
| 678
| 3
|
https://scifi.stackexchange.com/questions/145909/how-effective-is-the-first-order-military-compared-to-the-imperial-military-and
|
code
|
The Grand Army of the Republic was succeeded by the military of the Galactic Empire and subsequently the First Order. Assuming these three military forces to be different incarnations of the same entity, how has this entity's warfighting capabilities changed over time?
By warfighting, I'm therefore looking at said entity's effectiveness at waging wars and winning battles. They are soldiers, not peacekeepers. Possible aspects to consider can include training, technology, qualities of the individual soldier, skill with strategy and tactics, quality of the generals and officer corps, and other aspects which I may not have considered.
In other words: How well would each incarnation fare had full-scale galactic war (Clone Wars scale) broken out within their own respective time periods?
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195525187.9/warc/CC-MAIN-20190717121559-20190717143559-00247.warc.gz
|
CC-MAIN-2019-30
| 791
| 3
|
http://www.minecraftforum.net/topic/887731-best-minecraft-server-os/
|
code
|
shovenose, on 23 December 2011 - 04:11 AM, said:
I have a feeling this is going to turn into another mac os vs. windows vs. linux flamewar, but I'd seriously be interested to know what's best?
I saw this quote on another thread here:
It made me think... what is the best?
I use Bukkit, not the normal Minecraft Server, so I'm curious to know what is the best for hosting a minecraft server?
Right now I use Windows Server 2003 and I'm happy with it, just curious what I'm missing out on on another OS.
It's largely accepted that there is no 'best' OS. At least, not by simply saying 'best' and not using more descriptive qualifiers, such as 'which OS requires the least amount of configuration know-how to get working?' or 'which OS has the least amount of RAM overhead?' or 'which is actually a server OS?'
That is, any novice who has used Windows desktop his or her entire life will have no problem setting up a windows server running the minecraft server executable. Is it ideal? maybe not. But for somebody with no other knowledge in hosting and administration, running that .exe off their windows server's desktop succeeds at their needs.
When you consider RAM overheads that a GUI adds, many would point toward Linux, touting Ubuntu, CentOS, or Debian as 'the OS'--of course, when one lacks the patience and aptitude to handle command-line, the 'best OS' might instead become 'the worst OS'. Just like if somebody uses a particular distro, fails at one thing, then asserts 'WORST OS EVER'.
MacOSX, in the form that probably everybody has (that is, as their main computer), isn't a server platform. Not to say it can't host, like a windows desktop could, but it certainly couldn't be the best OS on account of it simply not being tailored for server duties. By performance per megabyte, or pretty much any other metric, OSX probably isn't going to be the best...unless one has no idea how to use windows or linux, and it is the most simple.
These threads (or more specifically, the voting part of this thread) are unlikely to be representative of the real 'best' OS just because of the biases mentioned above. All that aside, most people will answer 'what is the best OS?'
with 'what OS did I have just the right amount of patience to stick with?'
Defakto227, on 23 December 2011 - 05:02 AM, said:
Operating systems are like religion. All good ideas. Everyone is right.
Seconded only if you include Scientology.
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00143-ip-10-147-4-33.ec2.internal.warc.gz
|
CC-MAIN-2014-15
| 2,421
| 15
|
https://phabricator.kde.org/feed/?userPHIDs=PHID-USER-vtt6mvk6exutilbeodja
|
code
|
Jun 9 2020
Jun 8 2020
Jun 7 2020
Jun 6 2020
Jun 2 2020
Jun 1 2020
May 30 2020
May 28 2020
May 27 2020
May 26 2020
May 24 2020
May 18 2020
May 17 2020
May 16 2020
May 12 2020
May 3 2020
Apr 30 2020
@jgrulich, since you are working on xdg-desktop-portal-kde, do you have an opinion on where the mobile file picker ui should go?
Options are basically:
- everything into xdg-desktop-portal-kde
- KIO (would still need special handling in the portal)
- put actual gui code in Kirigami-addons and call it from the portal
- create a new portal for mobile
Apr 29 2020
Did you possibly forget to commit the changes to the header file?
Apr 28 2020
No, the portal uses the KFileWidget directly. After posting this I also had the idea of implementing this in KIO directly.
Apr 27 2020
@davidedmundson I'm mentioning you here because I think you had the idea to base the PlaMo file dialog on xdg-desktop-portal. I have been working on this approach for some time. Now I wonder where to best put the code. I began by forking xdg-desktop-portal-kde, but it would be probably better to integrate it into the exisiting portal.
Apr 26 2020
Apr 24 2020
Apr 22 2020
Apr 18 2020
Sorry for using this diff to ask this question, I couldn't find you in kde-devel.
Would it be possible to expose the finished signal of QProcess in KIO::ApplicationLauncherJob?
I need something like this to close the startup feedback in the Plasma Mobile shell (a fullscreen overlay which shows that an app is starting) when the app crashed.
Or is there an entirely different way to implement this?
You are installing to /app, but kdevelop shows "Runtime: Host system". Is that intentional? At least as long as you didn't set up any special environment variables, files in /app will probably not be found by knotifications.
Apr 16 2020
Apr 15 2020
Apr 14 2020
Apr 11 2020
Did you like the format? Anything we can improve upon?
Apr 10 2020
Now we do R266:48a112fb64dc
Apr 9 2020
Options to fix this:
- use libtaskmanager to give focus to the existing window belonging to the application
- KDBus single instance API in every single application
Other approach (using qpa): https://invent.kde.org/lnj/plasma-integration/-/tree/feature/mobile-kirigami-ui
Options to fix this:
- Add accept button to plasma-settings
- Add apply button to KCM
- always save kcms on destruction (plasma-settings is closed, kcm is switched)
Apr 8 2020
Actually the file here should just be deleted, and https://invent.kde.org/kde/plasma-phone-settings/-/merge_requests/1 merged instead.
Apr 7 2020
Spacebear progress update: Thanks to the help of Anthony Fieroni, spacebear now generally works (receiving and sending), work to invoke the client when a message arrives is work in progress and not fully tested, but should work as well.
Apr 6 2020
Apr 5 2020
I think for git you should use a proper @ character, so firstname.lastname@example.org, since I could not find any handling for "AT" in the audit script (https://github.com/KDE/repo-management/blob/a2bf51330a735f29efaff31e5b2d9a8342069c72/hooks/hooklib.py#L490)
Apr 4 2020
Apr 3 2020
Apr 2 2020
Apr 1 2020
I was recently working on rewriting spacebar. A few things are already better than with spacebar:
- sending messages works reliably
- no random crashes
- mapping of incoming and outgoing conversations to contacts
- no more hacks to find the sim account
- better design (imo)
Mar 30 2020
I haven't come up with a proper name yet, but I agree that moving it to the KDE namespace would be good.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735792.85/warc/CC-MAIN-20200803083123-20200803113123-00424.warc.gz
|
CC-MAIN-2020-34
| 3,487
| 73
|
https://daily-blog.netlify.app/questions/2163456/index.html
|
code
|
How to get the contents of a subdirectory with a different name from another Git repository
Getting a subdirectory from another Git repository with the same name and the same relative path is easy, for example:
git remote add checklists https://github.com/janosgyerik/software-construction-notes git fetch checklists git checkout checklists/master checklists
The sample remote repository has a checklists directory in its root. The last command, git checkout checklists/master checklists, grabs the contents of this directory and puts it in the root of my local repository.
But what if I want to put the directory somewhere else? Of course, after the checkout I could move the directory anywhere with git mv checklists my/specs/dir/checklists. However, this can cause problems if I already have a directory with the same name (and possibly a different destination) in a local project; I would have to move that directory aside first. Is there a cleaner way to do this in one step? Something like that:
# grab the "checklists" dir and put its contents to my/specs/dir/checklists git checkout checklists/master checklists my/specs/dir/checklists
Btw, the local repository is a completely independent project. The remote project is meant as a shared resource with a set of notes, which I just cloned in this way into several projects to use as a template for performing requirements analysis and architecture design. These independent projects don't need to keep track of the remote project's history; I really only need the latest snapshot of the files.
git read-tree --prefix=my/specs/dir checklists/master git checkout-index -a
It reads the checklists/master tree and puts it into the index, but under the given prefix directory. After that, the checkout command just updates your working directory from the index, which materialises the new files in my/specs/dir/checklists.
If you are happy to let git update the working directory directly, you can combine both commands with
git read-tree -u --prefix=my/specs/dir checklists/master
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988793.99/warc/CC-MAIN-20210507120655-20210507150655-00406.warc.gz
|
CC-MAIN-2021-21
| 1,996
| 20
|
http://locourseworkwrrk.supervillaino.us/an-analysis-of-the-meaning-of-life-from-a-philosophical-perspective.html
|
code
|
Much of the contemporary analytic discussion has sought to articulate and evaluate theories of meaning in life, ie philosophy and the meaning of life.
Analysis has always been at the heart of philosophical method, but it has been understood and practised in many different ways perhaps, in its broadest sense, it might be defined as a. Theories on just what the meaning of life of the meaning of life in a perspective and philosophy overlook: simply living life as. Articles the meaning of ‘meaning’ stephen anderson asks what we mean when we ask if existence has a meaning in their 1983 film the meaning of life, monty python took their departing shot.
20-2-2012 Buy The Belief Instinct: an analysis of the meaning of life from a philosophical perspective. Philosophical views on value as meaning, and rational analysis. James Redfield gave a New Age perspective on the meaning of life in his book.
The following answers to this central philosophical if the meaning of life is wanted, a meaning that will transcend of reflective perspective on one’s life.
A philosophical analysis of monty python even though under further analysis, the meaning of life has been covered from a political philosophy perspective.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221219109.94/warc/CC-MAIN-20180821210655-20180821230655-00352.warc.gz
|
CC-MAIN-2018-34
| 1,268
| 5
|
https://discuss.codecademy.com/t/when-should-i-use-jquery-instead-of-plain-javascript/368680
|
code
|
Some good use cases for jQuery:
- When you are likely going to use a lot of jQuery’s functionality, and not just a few of its methods
- When you need to build sites quickly that will be used on older browsers like IE8+
- When you need to use a specific jQuery plugin
On the other hand, plain JavaScript is often enough for simple tasks, like selecting a single <div> element. Definitely take the time to weigh the pros and cons before adding jQuery to a project.
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583513009.81/warc/CC-MAIN-20181020163619-20181020185119-00490.warc.gz
|
CC-MAIN-2018-43
| 373
| 5
|
https://aws.amazon.com/what-is/sentiment-analysis/
|
code
|
What is Sentiment Analysis?
Sentiment analysis is the process of analyzing digital text to determine if the emotional tone of the message is positive, negative, or neutral. Today, companies have large volumes of text data like emails, customer support chat transcripts, social media comments, and reviews. Sentiment analysis tools can scan this text to automatically determine the author’s attitude towards a topic. Companies use the insights from sentiment analysis to improve customer service and increase brand reputation.
Why is sentiment analysis important?
Sentiment analysis, also known as opinion mining, is an important business intelligence tool that helps companies improve their products and services. We give some benefits of sentiment analysis below.
Provide objective insights
Businesses can avoid personal bias associated with human reviewers by using artificial intelligence (AI)–based sentiment analysis tools. As a result, companies get consistent and objective results when analyzing customers’ opinions.
For example, consider the following sentence:
I'm amazed by the speed of the processor but disappointed that it heats up quickly.
Marketers might dismiss the discouraging part of the review and be positively biased towards the processor's performance. However, accurate sentiment analysis tools sort and classify text to pick up emotions objectively.
Build better products and services
A sentiment analysis system helps companies improve their products and services based on genuine and specific customer feedback. AI technologies identify real-world objects or situations (called entities) that customers associate with negative sentiment. From the above example, product engineers focus on improving the processor's heat management capability because the text analysis software associated disappointed (negative) with processor (entity) and heats up (entity).
Analyze at scale
Businesses constantly mine information from a vast amount of unstructured data, such as emails, chatbot transcripts, surveys, customer relationship management records, and product feedback. Cloud-based sentiment analysis tools allow businesses to scale the process of uncovering customer emotions in textual data at an affordable cost.
Businesses must be quick to respond to potential crises or market trends in today's fast-changing landscape. Marketers rely on sentiment analysis software to learn what customers feel about the company's brand, products, and services in real time and take immediate actions based on their findings. They can configure the software to send alerts when negative sentiments are detected for specific keywords.
What are sentiment analysis use cases?
Businesses use sentiment analysis to derive intelligence and form actionable plans in different areas.
Improve customer service
Customer support teams use sentiment analysis tools to personalize responses based on the mood of the conversation. Matters with urgency are spotted by artificial intelligence (AI)–based chatbots with sentiment analysis capability and escalated to the support personnel.
Organizations constantly monitor mentions and chatter around their brands on social media, forums, blogs, news articles, and in other digital spaces. Sentiment analysis technologies allow the public relations team to be aware of related ongoing stories. The team can evaluate the underlying mood to address complaints or capitalize on positive trends.
A sentiment analysis system helps businesses improve their product offerings by learning what works and what doesn't. Marketers can analyze comments on online review sites, survey responses, and social media posts to gain deeper insights into specific product features. They convey the findings to the product engineers who innovate accordingly.
Track campaign performance
Marketers use sentiment analysis tools to ensure that their advertising campaign generates the expected response. They track conversations on social media platforms and ensure that the overall sentiment is encouraging. If the net sentiment falls short of expectation, marketers tweak the campaign based on real-time data analytics.
How does sentiment analysis work?
Sentiment analysis is an application of natural language processing (NLP) technologies that train computer software to understand text in ways similar to humans. The analysis typically goes through several stages before providing the final result.
During the preprocessing stage, sentiment analysis identifies key words to highlight the core message of the text.
- Tokenization breaks a sentence into several elements or tokens.
- Lemmatization converts words into their root form. For example, the root form of am is be.
- Stop-word removal filters out words that don't add meaningful value to the sentence. For example, with, for, at, and of are stop words.
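The three preprocessing steps above can be sketched in plain Python; the lemma map and stop-word list here are tiny illustrative stand-ins for what a real NLP library would provide:

```python
# Minimal preprocessing sketch: tokenize, lemmatize via a small
# hand-written lemma map, then drop stop words. Both tables below
# are illustrative assumptions, not a real lexicon.
LEMMAS = {"am": "be", "is": "be", "are": "be", "heats": "heat"}
STOP_WORDS = {"with", "for", "at", "of", "the", "i", "by", "it"}

def tokenize(sentence):
    # Tokenization: break the sentence into lowercase word tokens.
    return [w.strip(".,!?'\"") for w in sentence.lower().split()]

def preprocess(sentence):
    tokens = tokenize(sentence)
    # Lemmatization: map each token to its root form if we know one.
    lemmas = [LEMMAS.get(t, t) for t in tokens]
    # Stop-word removal: keep only tokens that carry meaning.
    return [t for t in lemmas if t and t not in STOP_WORDS]

print(preprocess("I am amazed by the speed of the processor"))
```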
NLP technologies further analyze the extracted keywords and give them a sentiment score. A sentiment score is a measurement scale that indicates the emotional element in the sentiment analysis system. It provides a relative perception of the emotion expressed in text for analytical purposes. For example, researchers use 10 to represent satisfaction and 0 for disappointment when analyzing customer reviews.
What are the approaches to sentiment analysis?
There are three main approaches used by sentiment analysis software.
The rule-based approach identifies, classifies, and scores specific keywords based on predetermined lexicons. Lexicons are compilations of words representing the writer's intent, emotion, and mood. Marketers assign sentiment scores to positive and negative lexicons to reflect the emotional weight of different expressions. To determine if a sentence is positive, negative, or neutral, the software scans for words listed in the lexicon and sums up the sentiment score. The final score is compared against the sentiment boundaries to determine the overall emotional bearing.
Rule-based analysis example
Consider a system with words like happy, affordable, and fast in the positive lexicon and words like poor, expensive, and difficult in a negative lexicon. Marketers determine positive word scores from 5 to 10 and negative word scores from -1 to -10. Special rules are set to identify double negatives, such as not bad, as a positive sentiment. Marketers decide that an overall sentiment score that falls above 3 is positive, while -3 to 3 is labeled as mixed sentiment.
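A sketch of that scoring scheme in Python: the exact per-word scores are assumptions, since the text only fixes the ranges (positive 5 to 10, negative -1 to -10), the "not bad" rule, and the boundaries (above 3 positive, -3 to 3 mixed).

```python
# Illustrative rule-based scorer matching the example above.
POSITIVE = {"happy": 7, "affordable": 6, "fast": 5}
NEGATIVE = {"poor": -6, "expensive": -5, "difficult": -4, "bad": -5}

def score(text):
    tokens = text.lower().replace(",", " ").split()
    total, i = 0, 0
    while i < len(tokens):
        # Special rule: a double negative like "not bad" counts as positive.
        if tokens[i] == "not" and i + 1 < len(tokens) and tokens[i + 1] in NEGATIVE:
            total += 4
            i += 2
            continue
        total += POSITIVE.get(tokens[i], 0) + NEGATIVE.get(tokens[i], 0)
        i += 1
    return total

def label(total):
    if total > 3:
        return "positive"
    if total < -3:
        return "negative"
    return "mixed"

print(label(score("fast and affordable")))     # 5 + 6 = 11
print(label(score("not bad, but expensive")))  # +4 - 5 = -1
```

As the text notes, this kind of system is easy to set up but hard to scale: every new intent-bearing keyword means another lexicon entry and possibly another special rule.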
Pros and cons
A rule-based sentiment analysis system is straightforward to set up, but it's hard to scale. For example, you'll need to keep expanding the lexicons when you discover new keywords for conveying intent in the text input. Also, this approach may not be accurate when processing sentences influenced by different cultures.
This approach uses machine learning (ML) techniques and sentiment classification algorithms, such as neural networks and deep learning, to teach computer software to identify emotional sentiment from text. This process involves creating a sentiment analysis model and training it repeatedly on known data so that it can guess the sentiment in unknown data with high accuracy.
During the training, data scientists use sentiment analysis datasets that contain large numbers of examples. The ML software uses the datasets as input and trains itself to reach the predetermined conclusion. By training with a large number of diverse examples, the software differentiates and determines how different word arrangements affect the final sentiment score.
Pros and cons
ML sentiment analysis is advantageous because it processes a wide range of text information accurately. As long as the software undergoes training with sufficient examples, ML sentiment analysis can accurately predict the emotional tone of the messages. However, a trained ML model is specific to one business area. This means sentiment analysis software trained with marketing data cannot be used for social media monitoring without retraining.
Hybrid sentiment analysis works by combining both ML and rule-based systems. It uses features from both methods to optimize speed and accuracy when deriving contextual intent in text. However, it takes time and technical efforts to bring the two different systems together.
What are the different types of sentiment analysis?
Businesses use different types of sentiment analysis to understand how their customers feel when interacting with products or services.
Fine-grained sentiment analysis refers to categorizing the text intent into multiple levels of emotion. Typically, the method involves rating user sentiment on a scale of 0 to 100, with each equal segment representing very positive, positive, neutral, negative, and very negative. Ecommerce stores use a 5-star rating system as a fine-grained scoring method to gauge purchase experience.
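The five-band scale described above can be written as a small mapper; the equal 20-point segments are implied by the text, but the exact cut points are an assumption:

```python
# Map a 0-100 fine-grained sentiment score into five equal bands.
BANDS = ["very negative", "negative", "neutral", "positive", "very positive"]

def band(score):
    if not 0 <= score <= 100:
        raise ValueError("score must be in 0..100")
    # Each band spans 20 points; a score of 100 lands in the top band.
    return BANDS[min(int(score) // 20, 4)]

print(band(92), band(50), band(7))
```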
Aspect-based analysis focuses on particular aspects of a product or service. For example, laptop manufacturers survey customers on their experience with sound, graphics, keyboard, and touchpad. They use sentiment analysis tools to connect customer intent with hardware-related keywords.
Intent-based analysis helps understand customer sentiment when conducting market research. Marketers use opinion mining to understand the position of a specific group of customers in the purchase cycle. They run targeted campaigns on customers interested in buying after picking up words like discounts, deals, and reviews in monitored conversations.
Emotional detection involves analyzing the psychological state of a person when they are writing the text. Emotional detection is a more complex discipline of sentiment analysis, as it goes deeper than merely sorting into categories. In this approach, sentiment analysis models attempt to interpret various emotions, such as joy, anger, sadness, and regret, through the person's choice of words.
What are the challenges in sentiment analysis?
Despite advancements in natural language processing (NLP) technologies, understanding human language is challenging for machines. They may misinterpret finer nuances of human communication such as those given below.
It is extremely difficult for a computer to analyze sentiment in sentences that comprise sarcasm. Consider the following sentence, Yeah, great. It took three weeks for my order to arrive. Unless the computer analyzes the sentence with a complete understanding of the scenario, it will label the experience as positive based on the word great.
Negation is the use of negative words to convey a reversal of meaning in the sentence. For example, I wouldn't say the subscription was expensive. Sentiment analysis algorithms might have difficulty interpreting such sentences correctly, particularly if the negation happens across two sentences, such as, I thought the subscription was cheap. It wasn't.
Multipolarity occurs when a sentence contains more than one sentiment. For example, a product review reads, I'm happy with the sturdy build but not impressed with the color. It becomes difficult for the software to interpret the underlying sentiment. You'll need to use aspect-based sentiment analysis to extract each entity and its corresponding emotion.
What is semantic analysis?
Semantic analysis is a computer science term for understanding the meaning of words in text information. It uses machine learning (ML) and natural language processing (NLP) to make sense of the relationship between words and grammatical correctness in sentences.
Sentiment analysis vs. semantic analysis
A sentiment analysis solution categorizes text by understanding the underlying emotion. It works by training the ML algorithm with specific datasets or setting rule-based lexicons. Meanwhile, a semantic analysis understands and works with more extensive and diverse information. Both linguistic technologies can be integrated to help businesses understand their customers better.
How does AWS help with sentiment analysis?
Amazon Comprehend is a natural language processing (NLP) solution that helps businesses extract and identify meaningful insights from text documents. It uses machine learning (ML) technologies to perform sentiment analysis with automated text extraction. Companies train Amazon Comprehend with industry-specific documents to produce highly accurate results.
- Amazon Comprehend Sentiment Analysis API tells developers if a piece of text is positive, negative, neutral, or mixed.
- Amazon Comprehend Targeted Sentiment allows businesses to narrow sentiment analysis to specific parts of products or services.
- Amazon Comprehend supports multiple languages, including German, English, Spanish, Italian, Portuguese, and French.
Get started with sentiment analysis by creating an AWS account today.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100056.38/warc/CC-MAIN-20231129041834-20231129071834-00714.warc.gz
|
CC-MAIN-2023-50
| 12,734
| 62
|
https://flipboard.com/@Zakhele_
|
code
|
You might have missed it then amid the Christmas cheer. And you might have forgotten it since—thanks to an endless stream of hyperbole that's not stopped emanating from Las Vegas, even several days after
A vegan college dropout who took acid and traveled to India for spiritual enlightenment, Steve Jobs was the greatest businessman in history. But Jobs' ascent was neither simple nor straightforward. Jobs'
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578517558.8/warc/CC-MAIN-20190418101243-20190418122323-00002.warc.gz
|
CC-MAIN-2019-18
| 619
| 3
|
http://casu.ast.cam.ac.uk/surveys-projects/iphas/data-flow
|
code
|
The data are processed by the Cambridge Astronomical Survey Unit (CASU) and follow the same steps as the processing of the Wide Field Survey data. We give here a summary of the process; more details are available from the INT WFS pages. The data are first debiased (full 2D bias removal is necessary). Bad pixels and columns are then flagged and recorded in confidence maps, which are used during catalogue generation. Linearity tests using sequences of dome flats revealed that the CCDs have significant non-linearities, so a correction using look-up tables is then applied to all data. Flatfield images in each band are constructed by combining several sky flats obtained in bright sky conditions during twilight.
Finally, an astrometric solution starts with a rough WCS based on the known telescope and camera geometry and is then progressively refined using the Guide Star Catalogue for a first pass and the 2MASS catalogue for a final pass. The WFC field distortion is modelled using a zenithal equidistant projection (ZPN). The resulting internal astrometric precision is better than 100 mas over the whole WFC array (based on intercomparison of overlap regions). Object detection is performed in each band separately using a standard APM-style object detection and parametrisation algorithm with apertures of radius 1.2 arcsec.
The derived object catalogues are stored in multi-extension FITS files as FITS binary tables, one for each image extension with a dummy primary header unit. Each catalogue header contains a copy of the relevant telescope FITS header content in addition to detector-specific information.
Each detected object has an attached set of descriptors, forming the columns of the binary table and summarising derived position, shape and intensity information. During further processing stages, ancillary information such as the sky properties, seeing, and average stellar image ellipticity are derived from the catalogues and stored in the FITS headers attached to each catalogue extension. In addition to being the primary astronomical products of the pipeline processing, the catalogues and associated derived summary information form the basis for astrometric and photometric calibration and quality-control monitoring.
The standard catalogue generation software makes direct use of the confidence maps previously generated for object detection and parametrisation producing quality control information, standard object descriptors and detected object overlay files. The possibly varying sky background is estimated automatically, prior to object detection, using a combination of robust iteratively clipped estimators. The image catalogues are then further processed to yield morphological classification for detected objects and used to generate astrometric and photometric calibration information.
For classification all detector-level catalogues for each pointing and/or passband are processed independently. Objects are classified based on their overall morphological properties, specifically the curve-of-growth of their flux distribution, and their ellipticity as derived from intensity-weighted second moments. The average stellar locus on each detector in these parameter spaces is generally well-defined and is used as the basis for a null hypothesis stellarness test for use in morphological classification.
The classification is primarily based on comparing the curve-of-growth of the flux for each detected object with the well-defined curve-of-growth for the general stellar locus. This latter is a direct measure of the integral of the point spread function out to various radii and is independent of magnitude if the data are properly linearised, and if saturated images are excluded. In using this property the classifier further assumes that the effective PSF for stellar objects is constant over each detector, although individual detectors are allowed to have different PSFs.
The reference stellar loci are defined from the discrete curve-of-growth of the aperture fluxes by analysing the difference in magnitude (or flux ratio) between different pairs of apertures as a function of magnitude. In practice, the aperture with radius 1.2 arcsec is used as the fixed reference and also defines the internal magnitude (flux) scale. The linearity of the system implies that the position of the stellar locus for any function of the aperture fluxes is independent of magnitude (at least until images saturate). Therefore marginalising the flux ratios over magnitude yields one-dimensional distributions that can be used to greatly simplify automatically locating the stellar locus. With the location fixed, the median of the absolute deviation from the median (MAD) provides a solid measure of the scatter about this locus as a function of magnitude, at least until galaxies dominate in number. This process is repeated iteratively for each distribution, using 3-sigma clipping to remove non-stellar outliers, until satisfactory convergence is reached.
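The marginalise-and-clip step described above can be sketched in a few lines of Python. This is an illustrative toy, not the pipeline code; the function name and the 1.4826 Gaussian scaling of the MAD are assumptions:

```python
from statistics import median

def stellar_locus(ratios, nsigma=3.0, iters=10):
    """Locate the stellar locus in a 1-D distribution of aperture-flux
    ratios: the median gives the location, the MAD (scaled by 1.4826 to
    approximate a Gaussian sigma) gives the scatter, iterated with
    n-sigma clipping to reject non-stellar outliers."""
    data = list(ratios)
    for _ in range(iters):
        loc = median(data)
        mad = median(abs(x - loc) for x in data)
        sigma = 1.4826 * mad
        kept = [x for x in data if abs(x - loc) <= nsigma * sigma]
        if len(kept) == len(data):  # converged: nothing clipped
            break
        data = kept
    return loc, sigma
```

A single discrepant flux ratio (e.g. from a galaxy) is clipped on the first pass, after which the locus estimate stabilises.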
The discrete curve-of-growth of the flux for each object is then compared to that derived from the (self-defining) locus of stellar objects, and combined with information on the ellipticity of each object, to generate the overall detector-level classification statistic. The combination (essentially a weighted sum of the normalised signed distributions) is designed to preserve information on the ``sharpness'' of the object profile and is finally renormalised, as a function of magnitude, to produce the equivalent of an overall N(0,1) measure.
In practice measures derived from real images do not exactly follow Gaussian distributions. However, by combining multiple normalised distributions (with well-defined 1st and 2nd moments), the Central Limit Theorem works in our favour such that the resulting overall statistic is Gaussian-like to a reasonable approximation and hence can be used with due care as the likelihood component of a Bayesian Classification scheme, making optional use of prior knowledge.
Objects lying within 2-3 sigma of the stellar locus (i.e. of zero) are generally flagged as stellar images, those below -3 to -5 sigma (i.e. sharper) as noise-like, and those above 2-3 sigma (i.e. more diffuse) as non-stellar.
Although the discrete classification scheme is based on the N(0,1) measure of stellarness it has several overrides built in to attempt to make it more reliable. For example, adjustments to the boundaries at the faint-end (to cope with increased RMS noise in the statistic) and at the bright-end (to cope with saturation effects) are also made, while the overall image ellipticity provides a further check.
A by-product of the curve-of-growth analysis and the classification is an estimate of the average PSF aperture correction for each detector for those apertures (up to and including 4r, which typically includes about 99% or more of the total stellar flux) used in deriving the classification statistic. Accurate assessment of the aperture correction, to place the (stellar) fluxes on a total-flux scale, is a crucial component of the overall calibration. We find that this method of deriving aperture corrections contributes about 1% to the overall photometry error budget and also provides a useful first-order seeing correction for non-stellar sources. Further by-products of the morphological classification process are improved estimates of the seeing and average PSF ellipticity, from making better use of well-defined stellar-only sources. These parameters are required for quality-control monitoring of telescope performance and ``atmospheric'' seeing.
Photometric calibration is done using series of Landolt standard stars (Landolt 1992) with photometry in the SDSS system. For each night a zero point in each filter is derived. For photometric nights the calibration over the whole mosaic has an accuracy of 1-2%. For the purpose of the photometric calibration, standard-star observations have been obtained each night at an interval of 2h and have been used to calibrate the r' and i' frames. The Ha frames have been calibrated using a fixed offset of 3.14 magnitudes with respect to r', corresponding to the magnitude difference in r'-Ha for a Vega-type star.
All calibration is by default corrected during pipeline processing for the mean atmospheric extinction at La Palma (0.09 in r' and Ha, and 0.05 in i').
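A minimal sketch of how an instrumental measurement becomes a calibrated magnitude, using the quoted mean extinction coefficients. The function name, the zero-point convention, and the airmass handling are assumptions, not the pipeline's actual implementation:

```python
from math import log10

# Mean La Palma extinction coefficients quoted above (mag per unit airmass).
K = {"r": 0.09, "ha": 0.09, "i": 0.05}
HA_R_OFFSET = 3.14  # fixed r' - Halpha zero-point offset for a Vega-type star

def calibrate(counts, zeropoint, band, airmass=1.0):
    """Instrumental counts -> extinction-corrected magnitude (illustrative)."""
    m_inst = -2.5 * log10(counts)
    return m_inst + zeropoint - K[band] * (airmass - 1.0)
```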
During non-photometric nights, in otherwise acceptable observing conditions, we find that the derived zero-point systematic errors can be up to 10% or more. Although the pipeline usually successfully flags such nights as non-photometric, it still leaves open the problem of how to track the varying extinction during these nights. For this Early Data Release we have not attempted a global photometric solution for the whole survey. Data from non-photometric nights have been included in the release but flagged as such so that they do not appear in our "Best" catalogue.
Astrometric calibration is a multi-stage process and aims to provide each image, and any derived catalogues, with a World Coordinate System (WCS) to convert between pixel and celestial coordinates. This happens in the pipeline in two generic stages.
An initial WCS based on knowledge of the instrument, e.g. orientation, field-scale, telescope pointing, is embedded in the FITS headers, with telescope-specific information in the primary header and detector-specific information in the secondary headers. This serves to locate each detector image to within a few to several arcsec, depending on the pointing accuracy of the telescope and model parameters. The essential information required is the RA and Dec of the pointing, a (stable) reference point on the detector grid for those coordinates (e.g. the optical axis of the instrument), the central pixel scale, the rotation of the camera, the relative orientation of each detector and the geometrical distortion of the telescope and camera optics, which defines the astrometric projection to use.
Given a rough WCS for the processed frames, a more accurate WCS can be defined using astrometric standards. We have based our calibration on the 2MASS point source catalog (Skrutskie et al. 2006) for several reasons: it is an all-sky NIR survey; it is calibrated on the International Celestial Reference System (ICRS); it provides at least 100 or more suitable standards per pointing; it is a relatively recent epoch (mid-1990s), minimising proper-motion problems; the global systematics are better than 100 mas over the entire sky; and for 2MASS point sources with signal-to-noise of 10:1 the RMS accuracy per source is about 100 mas.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501170521.30/warc/CC-MAIN-20170219104610-00013-ip-10-171-10-108.ec2.internal.warc.gz
|
CC-MAIN-2017-09
| 10,637
| 19
|
https://www.coursehero.com/file/5772791/hw6sol/
|
code
|
AMS 7, Biostatistics — Week 7 Homework Solutions, Chapter 7 (Glover & Mitchell)

1. Big decrease in phosphate levels due to new laws? First determine if the variances are equal: H0: σ2² = σ1², and Ha: σ2² ≠ σ1². The F-statistic is s2²/s1² = 2.7. Using the F-distribution with v1 = 11 and v2 = 9 degrees of freedom gives the critical value F0.975 = 3.96, leading to the conclusion that there is not enough evidence to reject H0. So, treat the variances as equal (σ2² = σ1²) when testing the two-sample hypothesis for the means: H0: μ1 ≤ μ2, and Ha: μ1 > μ2. Calculate sp² = 1238.4 and the test statistic t = 9.78; using a t-distribution with 20 degrees of freedom gives a critical value t0.95 = 1.725, so reject H0.

3. Paired data for antibiotic effects. Subtract the amoxicillin column from the penicillin column to get (nd = 6) differences d = 6, −10, 4, 5, −7, −2 and calculate X̄d = −2, sd² = 4.12. Test H0: μd = 0 against Ha: μd ≠ 0 using a two-sided t-test on 5 degrees of freedom. The critical values are ±t0 = ±2.
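The pooled two-sample computation in problem 1 follows this pattern. A stdlib sketch with made-up data (not the textbook's values); the function name is an assumption:

```python
from statistics import mean, variance
from math import sqrt

def pooled_t(x, y):
    """Two-sample t statistic assuming equal variances: pool the sample
    variances into s_p^2, then standardise the difference of means.
    Returns (t, degrees of freedom = n1 + n2 - 2)."""
    n1, n2 = len(x), len(y)
    sp2 = ((n1 - 1) * variance(x) + (n2 - 1) * variance(y)) / (n1 + n2 - 2)
    t = (mean(x) - mean(y)) / sqrt(sp2 * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2
```

Compare the returned t against the t critical value at the chosen level and degrees of freedom, exactly as in the solution above.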
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125944677.39/warc/CC-MAIN-20180420174802-20180420194802-00598.warc.gz
|
CC-MAIN-2018-17
| 1,035
| 3
|
http://wmpoweruser.com/windows-phone-the-fastest-growing-in-russia-hitting-5-market-share/
|
code
|
Windows Phone the fastest growing in Russia, hitting 5% market share
At a Google presentation in Moscow, Google produced the above graph, showing the market share of the various operating systems in Russia.
Of note for us Windows Phone fans is that between 2011 and 2012, Windows Phone grew from 2% to 5% of the market (150% growth), closing in on Apple, which had 6% market share and grew only 20% YoY.
While Android took the lion’s share of the market, hitting an estimated 59%, that is in the end only 100% growth.
Given Windows Phone’s growth rate, and the lack of real low-cost iPhone options, it seems likely Windows Phone will soon overtake the iPhone in market share, and hopefully head towards the 10% market share range.
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1387345776257/warc/CC-MAIN-20131218054936-00050-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 750
| 5
|
http://email.ucdavis.edu/newsgroups/hierarchies.php
|
code
|
Subject: Campus to retire Usenet news service Dec. 14, 2009
Information and Educational Technology will retire the campus Usenet news service on Monday, Dec. 14, 2009. Its popularity and usefulness have plunged with the rise of Web-based discussion groups, mailing lists, blogs, and pervasive email access. In recent years, the campus Usenet news server has hosted only a few class discussion groups. The campus has therefore decided to retire the Usenet news service. Please see below for available alternatives.
1) SmartSite, the online coursework and collaboration system at UC Davis, http://smartsite.ucdavis.edu. Tools that might help include:
Academic Technology Services, part of IET, offers free, regularly scheduled training for SmartSite and other educational technologies. It also offers drop-in clinics, plus one-on-one and group trainings, by request. Contact firstname.lastname@example.org or email@example.com.
2) Class mailing lists, http://email.ucdavis.edu/eml/class-faq.php. These class lists are created to help instructors reach their students via email. They are not part of SmartSite.
Read more at http://xbase.ucdavis.edu/1996 (login and Kerberos password required).
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368710366143/warc/CC-MAIN-20130516131926-00026-ip-10-60-113-184.ec2.internal.warc.gz
|
CC-MAIN-2013-20
| 1,190
| 6
|
https://altexploit.wordpress.com/tag/mono-source/
|
code
|
A sink is a pair ((fi)i∈I, A), sometimes denoted by (fi,A)I or (Ai →fi A)I consisting of an object A (the codomain of the sink) and a family of morphisms fi : Ai → A indexed by some class I. The family (Ai)i∈I is called the domain of the sink. Composition of sinks is defined in the (obvious) way dual to that of composition of sources.
In Set, a sink (Ai →fi A)I is an epi-sink if and only if it is jointly surjective, i.e., iff A = ∪i∈I fi[Ai]. In every construct, all jointly surjective sinks are epi-sinks. The converse implication holds, e.g., in Vec, Pos, Top, and Σ-Seq. A category A is thin if and only if every sink in A is an epi-sink.
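In Set the joint-surjectivity condition is easy to state computationally. A toy illustration, with finite sets standing in for objects and Python functions for morphisms (the function name is an assumption):

```python
def is_epi_sink(domains, codomain, maps):
    """In Set, a sink (f_i: A_i -> A) is an epi-sink iff it is jointly
    surjective, i.e. A equals the union of the images f_i[A_i]."""
    image = set()
    for dom, f in zip(domains, maps):
        image |= {f(a) for a in dom}
    return image == set(codomain)
```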
Every epi-sink (= jointly surjective sink) is an extremal epi-sink in Set, Vec, and Ab. In Top an epi-sink (Ai →fi A)I is extremal if and only if A carries the final topology with respect to (fi)i∈I. In Pos an epi-sink (Ai →fi A)I is extremal iff the ordering of A is the transitive closure of the relation consisting of all pairs (fi(x), fi(y)) with i ∈ I and x ≤ y in Ai. In Σ-Seq an epi-sink (Ai →fi A)I is extremal iff each final state of A has the form fi(q) for some i ∈ I and some final state q of Ai.
Every separator is extremal in Set, Vec, and Ab. In Pos the separators are precisely the nonempty posets, whereas the extremal separators are precisely the non-discrete posets. Top has no extremal separator.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103033925.2/warc/CC-MAIN-20220625004242-20220625034242-00702.warc.gz
|
CC-MAIN-2022-27
| 1,394
| 4
|
https://discourse.nixos.org/t/eliminate-screen-tearing-with-intel-mesa/14724
|
code
|
Hi, I’m relatively new to Nix and setting up a Lenovo X1 Carbon Gen 9.
I’d really appreciate help setting up the graphical side of things. With my current setup, I experience a lot of screen tearing. At least, that’s what I think it is. When switching from one program to another (e.g. alacritty to firefox), some portion of the screen will remain unchanged. Sometimes this is brief, and other times, I have to move the mouse for the screen to completely refresh.
Hey! I’m afraid I don’t have any good tips for you with this setup, but I appreciate the mention I’ve also got a Yoga on the way which should roughly match your specs, so I’ll be watching this thread with a lot of interest.
Finally, as someone told me in my thread: thanks for providing all the info! It makes it much easier to help you debug the issue. Hopefully someone with more experience with your Mesa systems will drop by and be able to help you out.
Just had a quick look at the docs you linked, so I’m sure you’ve covered it already, but have you tried setting
I’ve got the same laptop but I never experienced your issues. I switched to Wayland a month ago, and before that I never used full desktop environments like you do. Anyway, I think I’m using a much more recent kernel release, 2.13.12 as of now. For that, I have this line in my config:
boot.kernelPackages = pkgs.linuxPackages_latest;
You can find more of my config here, look for the config of the bean host.
Still no solution, but here’s an update on what I’ve tried.
I set services.xserver.displayManager.gdm.wayland to false to force the use of X11 and then tried the manual’s suggestion of using intel, DRI2, TearFree, etc. That made things much worse with all sorts of visual artifacts and very slow graphics performance (basically unusable).
Oh, bummer! The other thing I usually do is update all the firmware using the fwupdmgr utility and pay attention to install all the updates in the “Lenovo ThinkVantage” utility in Windows… some are the same that fwupdmgr would install, but some aren’t.
I’ve a 5K 34" LG connected to it.
Yes, you can see its configuration in the module azazel/wayland.nix inside my repository. I must say that this may not be a good moment to switch to sway on unstable because one of its main dependencies, wlroots, seems to be undergoing some major refactorings, and this seems to be the root of some instabilities and issues with some apps, like firefox.
One other thing that comes to mind: are you connecting the display directly via HDMI or USB-C, or are you using a dongle or other device that does some conversion? In that case it may be the culprit. (I’m connecting the display directly via USB-C/Thunderbolt.)
Modern gnome uses wayland by default. This is probably a good thing, once you figure out screen sharing, which IME is the main snag.
This might be considered a bug, since you use services.xserver, so you’d think it would launch an X server for your user, but gdm is what handles your actual user session. That is, I think currently you’re starting gdm in an X server, which then launches wayland.
Just as a quick experiment, I turned off X11 + Gnome, copied the suggested configuration details for sway from the NixOS wiki page, rebuilt, and rebooted. I now have a host of new tools to configure, but I’m no longer seeing any graphics artifacts.
My best guess is that the artifacts I was seeing came from Gnome, though I’m not entirely sure why.
I believe that this is specific to this particular iGPU (it’s also documented on the Arch Wiki for my laptop: Dell XPS 13 (9310) - ArchWiki). But even with PSR turned off I didn’t witness any noticeable increase in power draw, so I think the returns of this feature are diminishing and if it causes issues is safe to turn off.
Just wanted to chime in after getting my gen 6 X1 Yoga (which is essentially the same as the Carbon, just foldable)
I use X and EXWM and I had screen tearing with the default settings. However, switching to using the latest kernel packages and using the secondary option (intel video driver) seems to have fixed it. The docs mention that this may cause some performance issues, but I don’t think that’s going to cause much of an issue. At least I hope not.
I do have a follow-up question, though: could the screen tearing that appears with the modesetting option be fixed by a late set of kernel packages? That is, if I switch back to using modesetting, could it one day magically be fixed by updating the system?
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510967.73/warc/CC-MAIN-20231002033129-20231002063129-00320.warc.gz
|
CC-MAIN-2023-40
| 4,532
| 22
|
https://forum.inductiveautomation.com/t/perspective-icons-svg-program/37157
|
code
|
What program should I use for creating or editing a svg library? I’d like to add more icons from material.io and other svgs to the material library or a new library. I use Inkscape for svg editing right now but when opening the libraries I’m not seeing how to add individual svgs to the library.
Look at the Using Icons section in the docs here. There’s a specific way you need to have your SVGs so that Perspective will display them. I’ve used Inkscape successfully in tandem with the instructions in that link to have my own icons. You may need to open the SVG in a text editor like notepad++ or something to make sure the format is correct.
I use inkscape. Love it!
Easiest way to do it I have found is to open material.svg in Notepad++ or something.
Delete all you don’t need, and save a copy as a new file. This is handy if you’re using a bunch of icons, especially through a repeater or something.
Leave the header of the document intact.
I have a bunch of “Cards” that are displayed through flex repeaters. Each card represents a machine. There are 30 cards. With 10 svgs a piece.
I made a new material.svg, called it machinestatus.svg, and deleted all the icons out of the big material file except for the ones I needed.
It decreased the loading time of that page by an order of magnitude.
<svg viewBox="0 0 24 24">
  <g class="icon" id="watch_later">
    <path d="M12 2C6.5 2 2 6.5 2 12s4.5 10 10 10 10-4.5 10-10S17.5 2 12 2zm4.2 14.2L11 13V7h1.5v5.2l4.5 2.7-.8 1.3z" />
  </g>
</svg>
Using Inkscape: set the document size to 24x24 px, do your work, and save as PLAIN SVG.
Open it in Notepad.
Add the new SVG from “<svg>” to “</svg>”, make sure the indentation is correct, and that is it.
It’s easier to save to a new file; I have several dozen. When testing, saving to a file already read by Perspective is slow to update, so either save under a new name or restart the Perspective module to see results.
Make sure that you only include the path, unless for some reason you want to hard-code fill color or stroke width or some such.
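The manual delete-in-Notepad++ step can also be scripted. A hedged sketch using Python's stdlib XML parser, assuming the icon groups are top-level <g> elements with id attributes (as in the snippet above); the function name is made up:

```python
import xml.etree.ElementTree as ET

def extract_icons(svg_text, keep_ids):
    """Drop every top-level <g> icon group whose id is not in keep_ids,
    keeping the enclosing <svg> header (viewBox etc.) intact."""
    root = ET.fromstring(svg_text)
    for g in list(root.findall("g")):
        if g.get("id") not in keep_ids:
            root.remove(g)
    return ET.tostring(root, encoding="unicode")
```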
Thank you! That filled in the gaps I needed between a svg and a svg library.
I use Inkscape too. This thread may be helpful too
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362571.17/warc/CC-MAIN-20211203000401-20211203030401-00171.warc.gz
|
CC-MAIN-2021-49
| 2,156
| 17
|
https://www.warriorforum.com/offline-marketing/816541-freshbooks-vs-paypal.html?utm_source=internal&utm_medium=discussion-list&utm_campaign=feed&utm_term=read-more
|
code
|
I just got my first client, yay!!! Well now I need to send them an invoice. They asked to use an American Express credit card. I am curious what you guys use to invoice clients. I love PayPal as I have a business account and creating invoices is so simple, especially recurring ones. What's so special about Freshbooks?
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376825512.37/warc/CC-MAIN-20181214092734-20181214114234-00595.warc.gz
|
CC-MAIN-2018-51
| 316
| 1
|
http://ux.stackexchange.com/questions/tagged/header+mega-menu
|
code
|
Collapsing header / mega-menu
I have a website with a mega-menu. On the homepage we're looking for maximum exposure of the website categories (and we can spare the room) while the internal pages have plenty of content. So I've ...
Nov 29 '11 at 20:56
|
s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1406510256737.1/warc/CC-MAIN-20140728011736-00121-ip-10-146-231-18.ec2.internal.warc.gz
|
CC-MAIN-2014-23
| 2,095
| 53
|
http://www.tomshardware.com/forum/255819-32-make-image-file-work
|
code
|
I have a DVD of a program I would like to use without having the DVD in the drive; is there any way to "crack" it? I have created an image of it with PowerISO and mounted it to the virtual drive, and that doesn't work.
thanks in advance,
chris smoove ha hhaa
Well... if you have to "crack" it then it has copyright protection on it. It is illegal to do what you are asking for and not an acceptable question here at THGF.
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948592202.83/warc/CC-MAIN-20171217000422-20171217022422-00121.warc.gz
|
CC-MAIN-2017-51
| 419
| 4
|
https://community.skype.com/t5/Features-Archive/No-Click-to-Call-button-in-Sage-CRM-access-via-HTTPS/m-p/844948
|
code
|
Thanks for reporting. At first I suspected that add-ons are not allowed to manipulate content in HTTPS pages (for security and integrity reasons). But I just verified by accessing several HTTPS sites in IE9 and the Chrome browser that it's actually working. Can you try upgrading Internet Explorer to IE9?
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463612327.8/warc/CC-MAIN-20170529130450-20170529150450-00081.warc.gz
|
CC-MAIN-2017-22
| 445
| 2
|
http://rodnik39.ru/new-bitcoin-slot-machine-games-bitcoin-slot-react/
|
code
|
New bitcoin slot machine games
However, the world of bitcoin slots is rising as extra on line casino operators and sport providers are growing a variety of unique bitcoin slot machine games declare btc bot telegram withdraw players to use.
Another attention-grabbing twist to bitcoin slot machines was the introduction of the choice to withdraw bitcoins to a local bitcoin handle, new bitcoin slots at grand bitcoin casino hinckley.
Bitcoin Casinos As Money Services Businesses
As a cash providers enterprise the operator of a bitcoin casino must first get hold of a license from the Financial Crimes Enforcement Network (FinCEN), to find out whether or not the money transmitting exercise is topic to anti-money laundering rules and to register if the operator meets these criteria.
However, a regulated operator doesn’t necessarily need to observe the laws of the jurisdiction the place their casino is situated nor should they ever be compelled to adjust to anti-money laundering laws, new bitcoin casino 10 no deposit. Some jurisdictions have created exceptions for cash services companies and bitcoin casinos operate in a few of the legal jurisdictions where money companies companies aren’t required to adjust to the legal guidelines of the jurisdiction where they’re located, machine slot bitcoin new games.
However, many states and U, new bitcoin slots at grand bitcoin casino hinckley.S, new bitcoin slots at grand bitcoin casino hinckley. territories do have laws which require regulated operators to report suspicious activity and adjust to anti-money laundering legal guidelines and reporting necessities, new bitcoin slots at grand bitcoin casino hinckley.
However, whereas not essentially unlawful, unregulated bitcoin on line casino operators could also be subject to scrutiny by the IRS or different regulatory our bodies. Many state and federal laws additionally apply to regulated entities, new bitcoin slot machine games. As an instance, states might require regulated entities to register with the Secretary of State, which requires periodic reporting to the United States Tax Office.
In the current past, extra gambling businesses have made the news as they’re trying to comply with state laws, such because the Nevada Gaming Commission’s crackdown on the Wynn Resorts which resulted in dozens of the Wynn Resorts’ workers being arrested and the closure of the casinos’ enterprise accounts, and the just lately handed legal guidelines in New Jersey prohibiting the usage of any digital currency as a way of cost, new bitcoin casino sites no deposit 2020.
In response to the crackdown, the New Jersey legislature is expected to cross a law that may further penalize operators of licensed casinos and prohibit their use of virtual foreign money for payment, new bitcoin slots games 2020.
The Future of Bitcoin Casinos
As there’s only very limited regulation on bitcoin casinos right now, it would not be shocking if we see an increase within the variety of jurisdictions who attempt to manage bitcoin casinos within the coming years, new bitcoin casino sites no deposit no card details.
Bitcoin casinos have the potential to help legitimize a virtual currency that already has a reputation as unlawful and questionable. As a cash different, a Bitcoin on line casino permits folks to conduct transactions in a legal way without exposing themselves to the risks that come with utilizing fiat currencies, new bitcoin slots las vegas 2020. Additionally, Bitcoin casinos would give gamers a larger degree of control over their Bitcoin wallets.
Bitcoin slot react
Top 10 bitcoin slot bitcoin casino video games, top 10 bitcoin casino games for bot mining telegram legit 2020 Slot games have advanced dramatically over the years. The major reason for that is the speedy development of bots. Bots have been taking part in slots for a really long time, although they do not appear to be allowed to deposit any bitcoins, bitcoin slot machine online bitcoin casino uganda.
10 finest bitcoin casinos with newest bitcoin slots evaluations Top bitcoin playing websites 2018 bitcoin casino review record on line casino casino evaluations finest bitcoin slot video games 2018
How a lot will I win in a bitcoin casino game? Bitcoin playing web sites are one of the best methods to win money on-line. Here are 5 ways to play an exciting bitcoin on line casino recreation: deposit bonus
on line casino video games
skins slots Bitcoin casino video games are getting stronger every season as a end result of gamers are in a place to guess on-line. In order to win the successful cash, the Bitcoin foreign money needs to be deposited and used at bitcoin casino sites, bitcoin react slot. To play a slot, players have to register. At the registration stage, the participant makes a deposit, bitcoin slot games online bitcoin casino. After that the participant pays a slot payment, bitcoin slot games 11. The deposit and the slot payment are refunded after the slot is completed.
Top 10 bitcoin slots reviews Top bitcoin slots critiques 2018, bitcoin slot of you. Slot machine critiques, bitcoin slot games online bitcoin casino. Casino slot evaluations. Bitcoin on line casino video games evaluate, bitcoin slot games winning strategy. Top 5 bitcoin slots 2018. Best games 2017. Best free on-line bitcoin slot web site 2018, bitcoin slot spin gratis0. 2018.
How to play free bitcoin casinos free of charge slots slots gaming free slots casino evaluate 2018 free slots free bitcoin casino evaluate greatest free bitcoin slots 2018 top bitcoin slot video games 2018
Top 20 bitcoin on line casino websites, top 20 bitcoin casino critiques, high 20 crypto coin slots 2017 free online bitcoin slots 2018 free slots free slots 2018, bitcoin slot spin gratis2. 2018, bitcoin slot spin gratis3.
Top 5 bitcoin casino sites 2018 evaluate article prime bitcoin on line casino slots 2018 free slots 2018 slot on line casino 2018 free casinos 2018
The top free casino slot video games 2018 free slots on line casino 2018 free slots online 2018 free slots free slots in play 2018
10 best bitcoin on line casino reviews 2017: what is a bitcoin on line casino
Most popular bitcoin casino slots. Best bitcoin slots 2018, bitcoin slot spin gratis6. Free online playing for real money 2018. Top bitcoin playing slots 2018. 2018, bitcoin slot spin gratis7. Online slot video games 2018 free bitcoin slots evaluation 2018. 2018, bitcoin slot spin gratis8. 2018, bitcoin slot spin gratis9.
10 greatest slots video games 2018 2018 best free slots online 2018 finest online bitcoin on line casino 2018 slot games 2018 free online slot slots on-line 2018 evaluations 2018. 2018, bitcoin slot react0. 2018, bitcoin slot react1. 2018. 2018, bitcoin slot react2. 2018. 2018.
Top 10 greatest bitcoin casino reviews 2018 Top 10 finest slot video games 2018. 2018 bitcoin slot evaluations 2018. Best Bitcoin Casino 2018: free slot video games 2018. 2018, bitcoin slot react3. 2018.
Casino joy sign up bonus
ZIGZAG777 casino provides a unique join bonus to all new clients that sign up with the hyperlink on this website, you will get 20 free spins no deposit on Vampires vs Wolves slotgame. The web site is situated at www.vampiresvswolves.com and does not use real money.
This casino provides the following bonus video games: Bonus Poker – 7+ jackpots + 100 free spins Casino Bonus – 10% as a lot as 500 free spins + 15% over 500 free spins
This on line casino is affiliated with: Playcasino.com.au
Casper Casino: Free Play 10-20% Bonus — 20+ slots + Bonus + Bonus a hundred free spins free spins free spins no deposit play on-line
Cagney’s Casino: Get $150 FREE on your first deposit. $100 cash bonus on each $1000. Play online for just $9.99. Play for a free 15% bonus then get a $100 cash bonus on every deposit of up to $750 (limit 2-3 free spins).
Camelot Casino: $125 money bonus on all deposit accounts. First Deposit Bonus is $250. Play at any time for simply $9.99. Free spins for all winnings.
Cards: 15% $25 Bonus. Play and win. Plus get $75 money bonus every time you play.
Castagney Casino: $250 FREE on first deposit. New customers only. Get $200 FREE on your first deposit over $500.
Carrot Jacks Casino: Win the most important bonuses of your life at this online Vegas on line casino. We need EVERYONE to expertise what we’ve achieved along with your first deposit at Carrot Jacks Casino — it’s an expertise like no other! Every day of this promotion, you’ll be able to earn a bonus of $10 for every $250 you play! Just deposit $750 or extra to have extra chances, extra spins and more cash within the financial institution. Win a grand bonus of $250. The bonus is good until November eight. Play for free. Visit our Website to view our latest promotions.
Celtic Sands Casino: Double our money, everytime! 10x Cash Back and 100x Plus Bonus on your first deposit. 20% for each $3, $5 and $10 paid with any type of Visa, MasterCard and American Express.
Cheri’s Casino: Receive 25%, 70% or one hundred pc on your first deposit of any amount — 50% of the first $2,000 paid with any Visa, MasterCard or American Express!
Cherry Hill Casino: 100 percent bonus on first deposit of $10 or more and $20 bonus with no
Bitcoin casino winners:
Desert Drag — 541.2 usdt
Trick or Treats — 461.6 dog
Atomic Age — 88.1 dog
The Great Ming Empire — 424.8 bch
Gates of Persia — 55.8 dog
Queen of Atlantis — 479.8 ltc
Mustang Money — 181.6 dog
Agent Jane Blonde — 143.3 btc
Bangkok Nights — 18.5 btc
Booming Bars — 205.5 usdt
Dragons Myth — 113.1 ltc
Golden Lotus — 745 dog
Epic Gems — 131.2 btc
Island — 617.8 eth
Judges Rule the Show — 739.3 btc
Best Slots Games:
BitcoinCasino.us Da Vinci Diamonds
CryptoGames Titan Thunder
Vegas Crest Casino Wildcano with Orbital Reels
BitcoinCasino.us Super 7 Stars
Syndicate Casino Win And Replay
Syndicate Casino 7 wonders
1xSlots Casino Super 7 Stars
FortuneJack Casino Pink Panther
Cloudbet Casino In Jazz
FortuneJack Casino God of Wealth
Diamond Reels Casino Cats
BitcoinCasino.us Gonzos Quest
OneHash Sea Underwater Club
Betchan Casino Chibeasties
BitStarz Casino Golden Fish Tank
Payment methods — BTC ETH LTC DOGE USDT, Visa, MasterCard, Skrill, Neteller, PayPal, Bank transfer, paysafe card, Zimpler, Webmoney, Euro, US Dollars, Canadian Dollar, Australian Dollar, New Zealand Dollar, Japanese Yen, Renminbi, Polish Złoty, Russian Ruble, Norwegian Krone, Bitcoins, Bitcoin Cash, Ethereum, Dogecoin, Tether and Litecoin.
#2 – 7 bit casino · #3 – katsubet · #4 – super slots · #5 – cloud bet · what are slot machines? · advantages of bitcoin slot machines. Satoshi slots apk, red rock casino reviews, where is the closest casino to sun city west arizona, poker welcome package. New data on the amount of bitcoin. Bitkingz casino is a new bitcoin casino and offers 55 free spins (no. The good girl bad girl is arguably the most popular bitcoin slot, developed by betsoft. You can choose between good girl and bad girl mode. If you choose the
It will create a folder called bet-eth in your working directory, with some react. React animation slot machine. Why bitcoin casinos are under the microscope. Traditional online casinos transacting in traditional currencies have defined. Indeed, different jurisdictions react differently to online gambling in general, new bitcoin casino square monaco. Crown casino share price asx, new bitcoin slots wins. If we know how to create react web apps but want to develop mobile apps, we can use the ionic framework. Css’;const isauth = true;const app: react. How i tripled my return on bitcoin using mathematics, algorithms, and python. Stephen referred you so you can get a slot and be attended to quickly. The past 3 months has been hell for me. How altcoins react ichimoku clouds — cryptocurrency settings. Bitcoin slot react, bitcoin slot machine gratis da bar
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335254.72/warc/CC-MAIN-20220928113848-20220928143848-00489.warc.gz
|
CC-MAIN-2022-40
| 11,896
| 76
|
https://mytrial365.com/2023/01/09/using-standard-csv-data-import-using-power-automate-flow/
|
code
|
Data import using CSV or Excel is a feature that has been there since day one. It is a good end-user feature that helps to import data quickly into Dynamics. Using predefined data maps and templates, the data can be imported quickly without much effort. The log is also a great addition for knowing the import status.
If you want to import data using Power Automate, you read the CSV or Excel file, parse it, and create the records one by one. The drawback of that approach is that the API request limit will make the flow throttle if the data volume is large. But if we use the standard data import with a Power Automate Flow, it will be robust and have more logging on the status of the import, like the failures, successes, error messages, etc.
Now, we will see how we can do the data import process using a Power Automate Flow. First, we need to read the CSV file from some source, such as a SharePoint location. Once the CSV is read, the first thing we need to do is create a Data Imports record as below.
The next step is to add the Imports record using the step below. The Content attribute needs to be populated with the file content. The Import Job ID is the id that was created in the above step. Data Map can be the id of the Data Map config that was created for the CSV template. If you want to auto-map, then specify Use System Map as Yes (beware: the system will auto-map the columns, which you won't have control over). Target Entity needs to be the schema name of the entity you are importing into.
By creating the Imports record, the import gets to Submitted status. The next step in the import process is parsing the CSV. This can be invoked using the bound action named ParseImport for the entity Data Imports. The Row ID will be the id of the Imports record that was created in the above step.
The next step is the transform step. This can be accomplished using another unbound action named TransformImport. ImportId is the id that is created in the second step.
The next step is to start the import process. It can be triggered using the bound action ImportRecordsImport for the entity Data Imports.
Note, each action invocation will have a system job running in the backend. Make sure you check that the system job has succeeded before you invoke the next action. This can be accomplished using a Do Until step.
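The same sequence can also be scripted directly against the Dataverse Web API. The sketch below only builds the requests (it sends nothing); the org URL, GUIDs, and exact entity-set routes are assumptions derived from the steps in this post, so verify them against your environment before use.

```python
# Sketch only: assemble the Web API calls for the import sequence described
# above without sending them. URLs, IDs and payload fields are placeholders
# based on this post; verify against your own Dataverse environment.
import json

ORG = "https://yourorg.crm.dynamics.com/api/data/v9.2"  # placeholder org URL

def build_import_requests(csv_content, data_map_id, target_entity):
    # In practice this id is returned by the first POST; hardcoded here
    # so the later bound-action routes can be illustrated.
    import_id = "00000000-0000-0000-0000-000000000001"
    return [
        # Step 1: create the Data Imports record
        ("POST", f"{ORG}/imports", {"name": "CSV import", "modecode": 0}),
        # Step 2: create the Imports (import file) record with the content
        ("POST", f"{ORG}/importfiles", {
            "content": csv_content,
            "importid@odata.bind": f"/imports({import_id})",
            "importmapid@odata.bind": f"/importmaps({data_map_id})",
            "targetentityname": target_entity,
        }),
        # Steps 3-5: parse, transform, then import; each spawns a system job,
        # so poll for success (the Do Until step) before firing the next one
        ("POST", f"{ORG}/imports({import_id})/Microsoft.Dynamics.CRM.ParseImport", {}),
        ("POST", f"{ORG}/TransformImport", {"ImportId": import_id}),
        ("POST", f"{ORG}/imports({import_id})/Microsoft.Dynamics.CRM.ImportRecordsImport", {}),
    ]

reqs = build_import_requests("name,email\nA,a@x.com", "placeholder-map-id", "contact")
for method, url, body in reqs:
    print(method, url, json.dumps(body)[:60])
```

Sending these with an authenticated client (and polling the system job between steps) mirrors exactly what the flow above does.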
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224646652.16/warc/CC-MAIN-20230610233020-20230611023020-00520.warc.gz
|
CC-MAIN-2023-23
| 2,329
| 8
|
https://www.coursera.org/learn/django-build-web-apps?specialization=django
|
code
|
In this course, you will learn how Django communicates with a database through model objects. You will explore Object-Relational Mapping (ORM) for database access and how Django models implement this pattern. We will review the Object-Oriented (OO) pattern in Python. You will learn basic Structured Query Language (SQL) and database modeling, including one-to-many and many-to-many relationships and how they work in both the SQL and Django models. You will learn how to use the Django console and scripts to work with your application objects interactively.
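The one-to-many relationship mentioned here can be sketched without Django, using Python's built-in sqlite3; the artist/track schema below is an illustrative assumption, not taken from the course, but it is the same pattern a Django ForeignKey generates behind the scenes.

```python
# Minimal illustration of the one-to-many pattern the course covers
# (an ORM like Django emits equivalent SQL for a ForeignKey).
# Table and column names are made up for the example.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE track (
    id INTEGER PRIMARY KEY,
    title TEXT,
    artist_id INTEGER REFERENCES artist(id)  -- the "many" side points back
);
""")
cur.execute("INSERT INTO artist (name) VALUES (?)", ("Coldplay",))
artist_id = cur.lastrowid
cur.executemany("INSERT INTO track (title, artist_id) VALUES (?, ?)",
                [("Yellow", artist_id), ("Clocks", artist_id)])

# Join back across the relationship, as an ORM query would
rows = cur.execute("""
    SELECT artist.name, track.title FROM track
    JOIN artist ON track.artist_id = artist.id
""").fetchall()
print(rows)  # two (artist name, track title) pairs
```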
This course is part of the Django for Everybody Specialization
About this Course
What you will learn
Describe and build a data model in Django
Apply Django model query and template tags/code of Django Template Language (DTL)
Define Class, Instance, Method
Build forms in HTML
Skills you will gain
- Django Template Language
- GET & POST
- Object-Oriented Programming (OOP)
- Cross-Site Request Forgery (CSRF)
- Django (Web Framework)
Syllabus - What you will learn from this course
Django Generic Views
Forms in HTTP and HTML
- 5 stars79.36%
- 4 stars14.46%
- 3 stars3.72%
- 2 stars1.14%
- 1 star1.28%
TOP REVIEWS FROM BUILDING WEB APPLICATIONS IN DJANGO
A great introduction course while if the PPT can be a bit more dynamic rather than plain images as some of the wordings on the ppt are too small or too many to read.
I just completed the second course of the specilization, really loved it. Can't wait to start the next one.
The best ever, python and django. I don't know if I'll one day use any diffent language and framework with so much longe that i have with it.
Great professor, but you cannot get full understanding of Django from the lectures and assignments, gives you good foundation to continue learning Django from other sources
About the Django for Everybody Specialization
Frequently Asked Questions
When will I have access to the lectures and assignments?
What will I get if I subscribe to this Specialization?
Is financial aid available?
More questions? Visit the Learner Help Center.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945030.59/warc/CC-MAIN-20230323065609-20230323095609-00535.warc.gz
|
CC-MAIN-2023-14
| 2,065
| 33
|
https://forum.hostingcontroller.com/tm.aspx?m=22069
|
code
|
BIG BUG: linux websites stop working - config is commented out
I'm thinking it's some sort of suspension by the system. It happens a lot, considering we are not using HC in production yet and are therefore only running something like 10 sites on two servers; it happened today on the 2nd server as well. This site was over its disk limit, but there was no warning mail sent at 80% as configured by the reseller, and the action is set to "do nothing". I'm only guessing as to the reason in this case, as there is, as always, no information about why this happens.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103036176.7/warc/CC-MAIN-20220625220543-20220626010543-00784.warc.gz
|
CC-MAIN-2022-27
| 542
| 2
|
https://escholarship.org/uc/item/8gs6d572
|
code
|
Information Theoretic Measures and Estimators of Specific Causal Influences
- Author(s): Schamberg, Gabriel
- Advisor(s): Coleman, Todd P;
- Kim, Young-Han
- et al.
The need to measure causal influences between random variables or processes in complex networks arises throughout academic disciplines. In four parts, we here develop techniques for measuring and estimating causal influences using tools from information theory, with the explicit goal of providing context for how information theoretic perspectives on causal influence fit within the vast and interdisciplinary body of work studying causality. Throughout the dissertation, we demonstrate the utility of the proposed methods with applications to physiologic, economic, and climatological datasets.
Beginning with a focus on time series, we present a modularized approach to finding the maximum a posteriori estimate of a latent time series that obeys a dynamic stochastic model and is observed through noisy measurements. We specifically consider modern signal processing problems with non-Markov signal dynamics (e.g., group sparsity) and/or non-Gaussian measurement models (e.g., point process observation models used in neuroscience). Importantly, this framework can be leveraged in the estimation of the latent parameters specifying the probability distribution of a time series, which is a fundamental step in the estimation of causal influences between time series.
Second, we study the conditions under which directed information, a popular information theoretic notion of causal influence between time series, can be estimated without bias. While the assumptions made by estimators of directed information are often presented explicitly, a characterization of when we can expect these assumptions to hold is lacking. Using the concept of d-separation from Bayesian networks, we present sufficient and almost everywhere necessary conditions for which proposed estimators can be implemented without bias. We further introduce a notion of partial directed information, which can be used to bound the bias under a milder set of assumptions.
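As a toy illustration of the plug-in style of estimation discussed here, the sketch below computes a one-step, transfer-entropy-like simplification of directed influence on synthetic binary data. This is not the dissertation's estimator; the first-order Markov context and the noisy-copy process are assumptions made purely for the example.

```python
# Toy plug-in estimate of one-step directed influence from X to Y:
# H(Y_t | Y_{t-1}) - H(Y_t | Y_{t-1}, X_{t-1}), a transfer-entropy-style
# simplification of directed information, on a synthetic noisy-copy process.
import random
from math import log2
from collections import Counter

random.seed(0)
n = 20000
x = [random.randint(0, 1) for _ in range(n)]
# Y copies the previous X with 10% bit flips, so X causally drives Y
y = [0] + [xi ^ (random.random() < 0.1) for xi in x[:-1]]

def cond_entropy(pairs):
    """H(target | context) from empirical (context, target) counts."""
    joint, marg = Counter(pairs), Counter(c for c, _ in pairs)
    return -sum(cnt / len(pairs) * log2(cnt / marg[c])
                for (c, _), cnt in joint.items())

h_y_given_past = cond_entropy([(y[t - 1], y[t]) for t in range(1, n)])
h_y_given_both = cond_entropy([((y[t - 1], x[t - 1]), y[t]) for t in range(1, n)])
influence = h_y_given_past - h_y_given_both
print(round(influence, 3))  # close to 1 - h(0.1), about 0.53 bits
```

The plug-in value is close to the analytic 1 - h(0.1) here because the empirical counts are large; the bias conditions characterized in this part of the dissertation concern exactly when such plug-in estimates can be trusted.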
Third, we present a sample path dependent measure of causal influence between time series. The proposed measure is a random sequence, a realization of which enables identification of specific patterns that give rise to high levels of causal influence. We demonstrate how sequential prediction theory may be leveraged to estimate the proposed causal measure and introduce a notion of regret for assessing the performance of such estimators which we subsequently bound.
Finally, we extend our focus to general causal graphs and show that information theoretic measures of causal influence are fundamentally different from mainstream (e.g. statistical) notions in that they (1) compare distributions over the effect rather than values of the effect and (2) are defined with respect to random variables representing a cause rather than specific values of a cause. We leverage perspectives from the statistical causality literature to present a novel information theoretic framework for measuring direct, indirect, and total causal effects in natural complex networks. In addition to endowing information theoretic approaches with an enhanced "resolution," the proposed framework uniquely elucidates the relationship between the information theoretic and statistical perspectives on causality.
|
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320300573.3/warc/CC-MAIN-20220129062503-20220129092503-00021.warc.gz
|
CC-MAIN-2022-05
| 3,397
| 10
|
https://forums.phpfreaks.com/topic/18505-in-help-in-posti1/
|
code
|
in help in $_POST[$i."1"]
Posted 24 August 2006 - 05:24 AM
i have to modify my sql query.following is my sql statement
$sql = "insert into quartperformfigures (iQuater, dValue) values ('1', '" . $_POST[$i . "1"] . "')";
What does '".$_POST[$i."1"]."' hold? Is it an array-type value? Need help to analyse this. Thank you.
Posted 24 August 2006 - 04:42 PM
what does '".$_POST[$i."1"]."' hold is it array type value?
It is an array element. It can hold any number of things, but it would appear in your case to most likely be either a string or number.
Posted 24 August 2006 - 04:46 PM
Posted 24 August 2006 - 04:48 PM
What value the $_POST (example) holds can be anything. Look at the form you got the posted info from. it's being inserted into your dValue column in your db so that might help you.
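One side note on the snippet in the question: concatenating a $_POST value straight into SQL breaks on any quote in the input and invites SQL injection. The safe pattern is a parameterized query, sketched below in Python with sqlite3 since the idea is identical to PHP's mysqli/PDO prepared statements (table and value are the ones from the thread).

```python
# The thread's query builds SQL by string concatenation; any quote in the
# POSTed value breaks it (or worse). A parameterized insert sidesteps that.
# Shown with Python's sqlite3; PHP's PDO prepared statements work the same way.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE quartperformfigures (iQuater TEXT, dValue TEXT)")

posted_value = "O'Brien's 42"          # hostile-looking user input
con.execute("INSERT INTO quartperformfigures (iQuater, dValue) VALUES (?, ?)",
            ("1", posted_value))        # the driver quotes the value safely

stored = con.execute("SELECT dValue FROM quartperformfigures").fetchone()[0]
print(stored)  # O'Brien's 42
```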
Please, take the time and do some research and find out how much it would have cost you to get your help from a decent paid-for source. A "roll-of-the-dice" freelancer will charge you $5-$15/hr. A decent entry level freelancer will charge you around $15-30/hr. A professional will charge you anywhere from $50-$100/hr. An agency will charge anywhere from $100-$250/hr. Think about all this when soliciting for help here. Think about how much money you are making from the work you are asking for help on. No, we do not expect you to pay for the help given here, but donating a few bucks is a fraction of the cost of what you would have paid, shows your appreciation, helps motivate people to keep offering help without the pricetag, and helps make this a higher quality free-help community
|
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814493.90/warc/CC-MAIN-20180223055326-20180223075326-00189.warc.gz
|
CC-MAIN-2018-09
| 1,636
| 14
|
http://www.aiyshaalsane.com/about
|
code
|
ARCHITECT | DESIGNER | WRITER
My work demonstrates my curiosity.
I’m currently a candidate for Master in Landscape Architecture at Harvard GSD. I obtained a Bachelor of Architecture from Virginia Tech, and spent a couple years after graduating practicing in the Middle East. This website is meant solely for academic and personal endeavors. To learn about my professional work, please contact me for a resume/portfolio.
|
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315750.62/warc/CC-MAIN-20190821022901-20190821044901-00468.warc.gz
|
CC-MAIN-2019-35
| 421
| 3
|
https://www.indogencapital.com/inno-x-jogja-2020/
|
code
|
10 Nov Inno x Jogja 2020
Calling all tech and innovation enthusiasts!
Join the first and the biggest virtual showcase of the high impact of tech and innovation in Yogyakarta, organized by @block71yogyakarta, supported by @nusenterprise @innofactory.id, and powered by @abpincubator @chubfisipol @centrino.ukdw @iaugm @ibisma_uii @telkomjdv and @kominfodiy, to get exclusive access to:
– 4 days of curated content from business and technical experts at the main stage & Innovation stage with 30+ speakers
– Meet the Venture Capitalist (VC) program
– Virtual showcase booth
– Networking with 2000 attendees from across Asia who are influential in the world of tech and innovation
Join the event now only at innoxjogja.block71.co, secure your spot at bit.ly/innoxjogja2020
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703533863.67/warc/CC-MAIN-20210123032629-20210123062629-00272.warc.gz
|
CC-MAIN-2021-04
| 790
| 8
|
https://notepd.com/idea/value-investing-course-588fi
|
code
|
Value Investing Course
I've been thinking of doing an investing course. Doing a deep dive on every strategy of investing. Because of my background running a "fund of hedge funds" plus my initial books on hedge fund investing (in the 00s) I had to familiarize myself with every investing strategy out there. I don't find many investors who have done that.
So I've been thinking of doing a course on all of the strategies, but there's so much material I might treat this as a bunch of courses: one course per strategy.
So I am breaking down the components of each strategy.
1. What is value investing?
Many people mistakenly think it's about buying stocks with low Price / Earnings ratios.
2. Ben Graham vs Warren Buffett
Between Graham and Buffett we see two different approaches to value investing. And what everyone gets wrong about Buffett.
3. Risk versus Reward
At the heart of value investing is defining what is "risk" and what is the appropriate "reward" given a certain degree of risk.
4. Deep Value or "Cigar Butt" investing
overlaps with the Ben Graham chapter.
5. Dividend investing
A specific subset of value investing
6. The best value investors
Stories of some of the historically best value investors out there.
7. Piggyback value investing
My specific approach to how to do value investing
8. Arbitrage and Value investing
Separately I'll do a course on just arbitrage but here I explain the relationship between arbitrage and value.
9. Growth and value investing
Many think these are opposites. But growth investing is just a subset of value investing.
10. Examples and case studies.
How I personally find and then do due diligence on value investing situations.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506646.94/warc/CC-MAIN-20230924123403-20230924153403-00892.warc.gz
|
CC-MAIN-2023-40
| 1,678
| 24
|
http://www.trekbbs.com/showpost.php?p=6894746&postcount=29
|
code
|
Re: post Fate of the Jedi trilogy, OT era series announced
I wouldn't be surprised if they did tie in novels, if they did more comics in the era. We have gotten a Knights Errant, and now a Dawn of the Jedi novel, so they appear to be doing comic/novel tie ins as a regular(ish?) thing now.
Over the course of many encounters and many years, I have successfully developed a standard operating procedure for dealing with big, nasty monsters. Run away. Me and Monty Python.
Harry Dresden - Blood Rites (The Dresden Files #6)
|
s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386164120234/warc/CC-MAIN-20131204133520-00051-ip-10-33-133-15.ec2.internal.warc.gz
|
CC-MAIN-2013-48
| 521
| 4
|
https://en.fifty.do/blog/le-nudge-et-la-formation
|
code
|
This method, theorised by the 2017 Nobel Prize winner in Economics, Richard Thaler, brings together all the mechanisms that help move from intention to action.
In this #CultureTech episode filmed by the Grande Ecole du Numérique, Alexia Cordier, co-founder and CEO of Fifty, explains
- why it is so difficult to move from knowing to doing
- the role of cognitive biases
- and how to apply the Nudge method in practice after a training course.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100227.61/warc/CC-MAIN-20231130130218-20231130160218-00405.warc.gz
|
CC-MAIN-2023-50
| 442
| 5
|
https://www.ancestry.com/boards/topics.software.famtreemaker/6647.2.2.1.1.1/mb.ashx
|
code
|
I've never seen a problem with the info in columns going to the wrong column. That's not to say that a certain combination of circumstances couldn't make that happen.
The only problem I've seen is breaking down data inside the columns, but that is easily handled (usually) with the Excel "Text to Columns" capability. For example, the export puts city, county, state, and country in the same column. You have to use the text-to-columns capability of your spreadsheet to break that column down - which is a problem because that capability usually goes left to right. That means "Brooklyn, New York" ends up in the first two columns, while "Brooklyn, Kings Co, New York" ends up in the first three columns from the left; so the New York state value ends up in different columns depending on the row.
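One workaround for the left-to-right problem is to split the place strings from the right instead, so the state always lands in the same column no matter how many parts precede it. A quick sketch (the four-column city/county/state/country layout is assumed from the examples above):

```python
# Split FTM-style place strings from the RIGHT so the state is always the
# last field, regardless of how many parts precede it.
places = ["Brooklyn, New York", "Brooklyn, Kings Co, New York"]

def split_place(place, width=4):
    parts = [p.strip() for p in place.split(",")]
    # left-pad so the city/county/state/country columns align on the right
    return [""] * (width - len(parts)) + parts

for row in map(split_place, places):
    print(row)
# ['', '', 'Brooklyn', 'New York']
# ['', 'Brooklyn', 'Kings Co', 'New York']
```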
|
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187828134.97/warc/CC-MAIN-20171024033919-20171024053919-00638.warc.gz
|
CC-MAIN-2017-43
| 748
| 2
|
https://forums.unrealengine.com/t/how-to-make-level-change-after-scoring-right-amount-of-points/226742
|
code
|
I'm new to Unreal and I'm creating a small game just to learn things. I made an endless runner where you pick up coins. I wanted to make a new level that would show up after picking up 100 coins, for example, but I don't know where to start. I know how to make a teleport to another level with a box trigger, but the map is generated randomly and it's impossible to place it. Is there a function to count points so that when there are 100, the next level loads?
Any tips or ideas?
You can just use the same code you used to switch levels on box trigger, to switch on 100 coins:
To be on the safe side, I would use a >= rather than a ==. If the score skips past 100 (say it jumps straight to 101 or greater), an == check never fires and the level won't change.
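The >= point is easy to see in a plain sketch (Python here; a Blueprint Branch node performs the same comparison):

```python
# Why >= beats ==: if two coins are collected in the same tick the counter
# can jump from 99 straight to 101, and an "== 100" check never fires.
def should_change_level(score, threshold=100):
    return score >= threshold

assert not should_change_level(99)
assert should_change_level(100)
assert should_change_level(101)   # an == check would miss this case
```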
|
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585653.49/warc/CC-MAIN-20211023064718-20211023094718-00194.warc.gz
|
CC-MAIN-2021-43
| 680
| 4
|
https://dvic.devinci.fr/en/resource/tutorial/modular-bot-with-ros3/
|
code
|
Objective: Setup a custom car in Gazebo, that drives autonomously through a gate.
Duration: 1.5 weeks. Depends mainly on time spent in prep and setup time.
Learning outcomes: Merging technologies with ROS and the foundations of behaviour planning.
STEP 0 | PREP | 2 days
STEP 1 | SETUP | 2 days
STEP 2 | PROTOTYPE | 2 days
STEP 3 | DEMO | 1 day
Use resources from https://github.com/ThomasCarstens/cours-de-robotique
STAGE 3: SETTING IT UP FOR A DEMO
LAUNCH FILE FOR ALL SCRIPTS AT ONCE + COMMON LOG FOR ALL SCRIPTS
Learning how to launch different nodes for project launch.
RELOADING THE CAR AS SOON AS IT COLLIDES WITH AN OBJECT
A recap of service calls will be useful.
This requires collision detection.
For respawning the robot, learn how to use Gazebo service calls explained below.
HOW TO STOP THE PYTHON SCRIPT: WRAPPING CODE UP IN AN ACTION SERVER
We’re on the last stage! Finally, a central decision planner is used in preparation for scaling up the network. To do this, a ROS Action Server wraps the actions. Using a mission_state ROS node, you can run a full Python Server that in itself runs an action in its callback.
First make sure you understand how to perform an action. Follow this tutorial. You can later adapt it to your specific action.
Using this server, you can create a custom message within a client that returns the amount of time it takes for the robot before a collision.
Custom actions can deliver specific information to the user once the action has completed!
GOING FURTHER (NOT PART OF PROJECT):
USING A STATE MACHINE TO CONTROL THE SEQUENCE
Splitting the robot motion into states is a technique to be able to create various robot applications by revisiting states at will.
We make use of a state machine to be our ‘decision-maker’ (state transitions) between the sensor input (callbacks) and specific motions of the bot (actions).
SMACH is a simple Python implementation of state machines, and for the purposes of developing various usecases, you can get up and running easily.
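The idea SMACH implements can be shown dependency-free: each state returns an outcome string, and a transition table maps (state, outcome) pairs to the next state. The states and outcomes below are illustrative, not from the tutorial code.

```python
# Minimal hand-rolled state machine in the SMACH spirit: each state returns
# an outcome, and a transition table decides the next state. In a real ROS
# node, the state bodies would block on callbacks, actions or service calls.
def drive_to_gate():
    return "reached"        # would run until a sensor callback fires

def pass_gate():
    return "collided"       # pretend the bot clipped the gate this run

def respawn():
    return "respawned"      # would wrap the Gazebo model-respawn service call

TRANSITIONS = {
    ("DRIVE", "reached"): "PASS_GATE",
    ("PASS_GATE", "passed"): "DONE",
    ("PASS_GATE", "collided"): "RESPAWN",
    ("RESPAWN", "respawned"): "DRIVE",
}
STATES = {"DRIVE": drive_to_gate, "PASS_GATE": pass_gate, "RESPAWN": respawn}

state, trace = "DRIVE", []
for _ in range(4):                      # bounded loop instead of rospy.spin()
    if state == "DONE":
        break
    outcome = STATES[state]()
    trace.append((state, outcome))
    state = TRANSITIONS[(state, outcome)]
print(trace)
```

Revisiting states at will, as the tutorial says, is just a matter of which outcomes the transition table routes back into earlier states.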
This is the subject of a separate tutorial.
|
s3://commoncrawl/crawl-data/CC-MAIN-2020-50/segments/1606141737946.86/warc/CC-MAIN-20201204131750-20201204161750-00279.warc.gz
|
CC-MAIN-2020-50
| 2,060
| 26
|
https://blog.gunherogame.com/2017/01/15/Weapon_Bubbles.html
|
code
|
I don’t feel like writing an intro paragraph, so right into the action: Each weapon in the game now has a unique bubble. I received feedback about how the weapons aren’t really distinctive enough, and given my very limited weapon sprite resolution, I thought I’d make it easier to spot which weapon you’re about to pick up with a bubble sprite!
I also composed a new song, an alternative winter theme. The names of the tracks are mainly for keeping track of what’s done, as I plan to just play all the tracks on a nice level based loop. Given that I’ve planned basically two tracks per environment, I think that’s too few to keep those two tracks for a single environment. Anyways, here’s the track:
I also did some code cleaning this week. I fixed some errors pointed out by a static analyzer. I also replaced my old filesystem code that was implemented using WinAPI calls with the new and shiny std::filesystem. It was a really pleasant experience and the whole conversion took maybe ten minutes!
Anyways, thanks for dropping by and sorry for the short post.
|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100551.2/warc/CC-MAIN-20231205140836-20231205170836-00861.warc.gz
|
CC-MAIN-2023-50
| 1,076
| 4
|
https://www.themuse.com/jobs/goldmansachs/senior-c-software-engineer-secdb-architecture
|
code
|
Senior C++ Software Engineer - SecDB Architecture
The SecDb Platform team is responsible for the engineering and management of a business-critical pricing and risk calculation platform used by all the revenue generating and federation divisions. As a developer in the team, you will be responsible for partnering with developers in strategies and business-aligned technology teams globally to create solutions, develop new features or improve existing services to meet business requirements. Our team is actively looking into migrating services from the proprietary platform to modern solutions for a scalable microservices architecture. You will have the opportunity to join a rapidly expanding team at an early stage and to make decisions that will affect the shape of the platform for the next 5-10 years, creating innovative technology solutions to support the evolving needs of the global business.
We are looking to grow our team with a motivated technologist who is:
- A self-learner, curious, inquisitive, always striving to improve one's skill and knowledge.
- Willing to learn new technologies, such as those related to the cloud, databases or programming languages implementation.
- Not afraid to dig into 20-year old code to refactor and uplift it into the new century.
- Unwilling to compromise on code quality, test coverage, or controls.
WHAT WE DO
At Goldman Sachs, our Engineers don't just make things - we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets.
Engineering, which is comprised of our Technology Division and global strategists groups, is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here.
WHO WE LOOK FOR
Goldman Sachs Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment.
At Goldman Sachs, our culture is one of teamwork, innovation and meritocracy. We often say our people are our greatest asset and we take pride in supporting each colleague both professionally and personally. From collaborative work spaces and mindfulness classes to working from home and flexible work options, we offer our people the support they need to reach their goals in and outside the office.
RESPONSIBILITIES AND QUALIFICATIONS
HOW YOU WILL FULFILL YOUR POTENTIAL
- An opportunity to work on the core of a strategically important system at GS, used by both revenue generating and federation divisions
- Interact directly with the business and other tech teams to build innovative solutions to support the GS business.
- Considerable growth potential for highly motivated developers
- Play a big part in design and implementation in a team oriented environment
SKILLS AND EXPERIENCE WE ARE LOOKING FOR
- A Bachelor, Master, or PhD in a math or science degree
- Solid understanding of algorithms and data structures
- C++ or any other OO language for 7+ years, or willingness to learn to a high level of expertise
- A dynamically typed language (e.g. Python) for 1+ years
- Experience in working with large software systems, distributed computing, and databases (SQL and NoSQL)
- Experience in leading projects and team management
- Strong organizational skills; ability to multi-task and work under pressure and prioritize
- Team player
- Motivation to learn about programming languages, cloud or database platforms
ABOUT GOLDMAN SACHS
The Goldman Sachs Group, Inc. is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
© The Goldman Sachs Group, Inc., 2019. All rights reserved Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Vet.
Back to top
In the previous blog in this series, we learned about the Navigation Utility function. In this blog, let us take a closer look at passing filter parameters to Story.
* * *
Navigation Utility function openStory() has an optional argument where you can pass URL parameters to Story. This URL parameter can be of three types – display, filter and variable. Here we are specifically exploring the option to pass filters.
There are six types of parameters that can be passed via URL to any Story when you want to create a Story Filter. The following list shows the type of parameter, syntax and whether it is optional or required.
- Model ID (Syntax: f<XX>Model, Required)
- Dimension ID (Syntax: f<XX>Dim, Required)
- Hierarchy Name (Syntax: f<XX>Hierarchy, Optional)
- Include / Exclude (Syntax: f<XX>Op, Optional)
- Unbooked (Syntax: f<XX>Unbooked, Optional)
- Filter Value (Syntax: f<XX>Val, Required)
In the syntax mentioned above <XX> can be any natural number less than or equal to 99. This is used when you want to create several Story filters. The parameters with the same number will act as a single set.
One important point to note is that every filter passed to a Story acts as a Story Filter. Page-Level Filters or other selections like Input Controls cannot be passed. If there is an existing Story Filter for the dimension that you pass in the URL, its value is overwritten.
Finding the IDs
Before appending the parameters in the Navigation utility function, let us see how to find the required IDs. Model ID should be in the format <package_name>:<object_name>. You can find the object_name of the model in the browser URL when you open a model. To find the whole model ID, you can visit the URL – https://<TENANT>/api/v1/stories?include=models where <TENANT> should be replaced with your SAC tenant URL. This URL will show all the Stories and the Model IDs used in them.
Here you can search for the Story or the object_name and then find the whole ID of the model including the package_name.
IDs of Dimensions can be found within Model Summary when you open a model or when you hover over the dimension within Builder panel while creating a Story. Similarly, you can find the Key value of a member within dimension details when you open a model or when you enable display settings to show member ID while creating a Story.
Now that you know about different parameters and how to find the required IDs, let us see how to append various parameters in Navigation Utility function. Filter parameters need to be appended as an array and then passed as an argument i.e. URL parameter. In the example below, 01 is the unique number that replaces <XX>.
- ‘f01Model’ is the parameter name for model ID and ‘t.T.CB3ZK1MOW2R6KAME6HRNTDE4N4:CB3ZK1MOW2R6KAME6HRNTDE4N4’ is the ID of the model (in format <package_name>:<object_name>).
- ‘f01Dim’ is the parameter name for dimension ID and ‘Segment’ is the ID of Dimension.
- ‘f01Val’ is the parameter name for filter value and ‘Consumer’ is the ID of the member to be filtered.
- ‘f01Op’ is the parameter name that mentions whether to include (‘in’) or exclude (‘notIn’) the filter value.
The above function opens an existing Story and creates a Story Filter for Dimension Segment by only including the member Consumer.
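As a plain-Python sketch (not the Analytics Designer API itself), the filter parameter set above can be assembled and URL-encoded like this, using the IDs from the example:

```python
from urllib.parse import urlencode

# Story-filter parameter set from the example above. In Analytics Designer
# these values would be passed as URL parameters to NavigationUtils.openStory();
# here we just build and encode the query string to show the syntax.
params = {
    "f01Model": "t.T.CB3ZK1MOW2R6KAME6HRNTDE4N4:CB3ZK1MOW2R6KAME6HRNTDE4N4",
    "f01Dim": "Segment",
    "f01Op": "in",          # include the member; "notIn" would exclude it
    "f01Val": "Consumer",
}
print(urlencode(params))    # ":" in the model ID is percent-encoded as %3A
```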
Combining URL Parameters
You can combine multiple URL parameters when using the openStory() function. The following example combines the display and filter parameter. Filter Parameter includes multiple Story filters passed as two different sets identified using the natural numbers 01 and 02.
The above example passes a display parameter that mentions the Story to be opened in the present mode. The first Story Filter is passed to create a filter on Segment by only including ‘Consumer’. The second Story Filter is passed to exclude the Cities – Dallas and Newark. The last argument false mentions that the Story should be opened in the same browser tab.
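A rough sketch of such a combined parameter set, again in plain Python rather than the Analytics Designer API. The display parameter name `mode=present` and the repeated `f02Val` entries for multiple members are assumptions for illustration, and the model IDs are hypothetical placeholders:

```python
from urllib.parse import urlencode

# One display parameter plus two Story-filter sets (01 and 02).
pairs = [
    ("mode", "present"),             # display parameter (assumed name)
    ("f01Model", "PACKAGE:OBJECT"),  # hypothetical <package_name>:<object_name> ID
    ("f01Dim", "Segment"),
    ("f01Op", "in"),
    ("f01Val", "Consumer"),
    ("f02Model", "PACKAGE:OBJECT"),
    ("f02Dim", "City"),
    ("f02Op", "notIn"),              # exclude the listed members
    ("f02Val", "Dallas"),            # repeated f02Val entries are an assumed
    ("f02Val", "Newark"),            # encoding for multiple member values
]
print(urlencode(pairs))
```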
Analytics Designer automatically URL encodes the parameter values. The generated URL is shown below for reference.
You can see the above example in action below.
Apart from using Navigation Utility function, you can also use the same URL parameter format when you embed a Story within an HTML page or even within an Analytic Application when you use the Web Page widget.
In the next blog, we will discuss Data Analyzer in detail.
Paraphrasing Tools have completely changed the way writers used to generate unique content.
With so many versions of the same topic on the internet, it gets quite hard for you to rewrite it again in your own words.
You just can’t seem to find the unique words that are not already out there.
AI-Based paraphrasing tools are helping writers break these writing barriers by enabling them to create rewritten content in a fast and effective manner.
Let’s go over some of the most prominent ways AI-Based paraphrasing tools are helping writers in 2021.
How AI-Based Paraphrasing Tools are Helping Writers?
Paraphrasing tools before the AI were not good at rewriting. These tools used a linear approach where they just used to change the words with their synonyms.
This rewriting approach is not good because it creates content that feels unnatural.
AI technologies and especially the NLP have taken the performance of Paraphrasing tools to a whole new level.
Paraphrasing tools are now much better at creating content that looks natural.
This content feels much closer to the way humans write a piece of writing for academic papers or web copies.
Writers often have to deal with tough deadlines where they have to deliver multiple articles each day.
Writing many articles every day is quite difficult. It is extremely hard to maintain the quality of the written content when you have to write in bulk every day.
This is where AI-Based paraphrasing tools prove to be useful. These tools help writers create unique and human-friendly content from an existing piece of writing with ease.
Rewriting Made Easy
Rewriting a piece of content sometimes is just as hard as coming up with fresh content. You are just not able to find the right words to restate the topic at hand.
With paraphrasing tools, the rewriting job becomes much easier and simpler. You can input the text in the tool and these tools use their AI algorithm to rewrite content automatically for you.
Online paraphrasing tools are easy to use which makes them a great choice for rewriting and rephrasing tasks.
Rewrite Bulk Content Fast
Writers often have to come with a lot of content in a short amount of time. You can either write the entire content manually or you can make use of the paraphrasing tools.
AI-Based paraphrasing tools process bulk content fast and generate new and fresh content from it in a short amount of time.
Manual paraphrasing is a difficult task that takes a lot of time. And if you have to rewrite content in bulk, writing manually can prove to be quite troublesome.
With AI-Based paraphrasing tools, you can rewrite as much content as you want in a much faster way.
Removing Plagiarism from Content
AI-Based paraphrasing tools are great for removing plagiarism from the content.
For writers, it is quite common to be faced with accidental plagiarism in their content.
This is because no matter how much you try to come up with unique content; there is always the risk of accidentally writing the content in the same way as a random source on the internet.
If you find plagiarism in your writing, you can put the content in a paraphrasing tool. It will rewrite the content for you to make it plagiarism-free.
Best Paraphrasing Tools for Writers
There are a lot of paraphrasing tools out there that you can check out. Below are your best options for AI-Based Paraphrasing tools that you can find online
1. Paraphrasing Tool – Prepostseo
Paraphrasing Tool by Prepostseo is a valuable tool for writers that can help them rewrite content in a simple and easy way.
This free-to-use paraphrasing tool is great for academic writers as well as for people who write web content.
This tool offers four different paraphrasing modes.
You can paraphrase using any of these modes based on your paraphrasing requirement.
This paraphrasing tool comes with an easy-to-use interface and its paraphrasing performance is amazing.
This AI-based paraphrasing tool delivers remarkable paraphrasing performance that takes the content quality to the next level.
Prepostseo Paraphrasing Tool is available for users 24/7. You can use this tool on desktop as well as mobile web browsers.
2. Paraphraser.io
Paraphraser.io is one of the best paraphrasing tools for writers as well as students.
You can rewrite web content as well as academic papers and assignments with this tool.
Paraphraser.io works using the latest AI algorithms to rephrase the content in a way that feels completely natural to humans.
This tool improves the quality of the input content by using a rich vocabulary while maintaining the natural tone and fluency of the content.
There are 3 paraphrasing modes with this tool that you can go for.
You can rewrite using any of the 3 modes to find the one that works the best for you.
3. Paraphrasing Tool by Check-Plagiarism
The user interface and the navigation approach of this paraphrasing tool are simple and interactive.
With its mobile-friendly design, you can use this tool on mobile web browsers to rephrase any piece of writing with ease.
Paraphrasing Tool by Check-Plagiarism offers 2 different modes of paraphrasing for writers.
Simple Mode: This mode allows the user to paraphrase complete essays and articles of any length in one go.
AI Mode: This mode offers advanced, AI-Based paraphrasing. With this mode, you can only paraphrase about 500 words in one session.
This paraphrasing tool is quite great for writers because its rephrasing performance is quite fast and accurate as compared with most paraphrasing tools that you find out there.
AI-Based paraphrasing tools have revolutionized the way writers used to generate unique and fresh content.
These tools have evolved a lot over the years to add the latest AI and NLP features that make content rewriting simple and easy for writers.
Whether you write academic content or you are a web content writer, AI-Based paraphrasing tools can help you rewrite and paraphrase with ease while maintaining the main idea and the quality of the content.
Interesting Related Article: “Apps to Boost Productivity for Writers“
Did Eminem date Mariah Carey?
There is no confirmed answer to this question because there seem to be two different sides to the story. Eminem has been public and stated that he had a relationship with pop diva Mariah Carey, but Mariah denies these allegations. Eminem has referred to his 'sexual' relationship with Mariah in a number of his songs, including 'Superman' off of 'The Eminem Show'. There is no confirmed answer to this question PERIOD. And Mariah could've gone out with Eminem in 2001 because Eminem was married to Kim at the time.
Mariah claims no but Eminem claims he did. There truly is no confirmed answer to this question.
He's probably referring to the name Mariah Carey. Nick Cannon is Mariah Carey's husband and Eminem claims to have had a relationship with her before Nick, although she denies it. Eminem and Mariah Carey have been feuding recently, starting with Eminem's 'Bagpipes from Baghdad'. It is believed that Mariah's hit 'Obsessed' is about Eminem. Eminem believed this, and released a track titled 'The Warning'.
Eminem said that Mariah Carey and him dated, and she said that he didn't, which is bs. If it wasn't true, why did he say that? He has a lot of women chasing him, so why would he "claim" that he saw her when it wasn't true? And, Mariah Carey made a song called Obsessed, which was clearly about Eminem ( watch the music video ). Then Eminem made a rap called the warning saying…
They used to date. Mariah was denying it, and wrote the song called "Obsessed." Eminem got pissed and wrote a diss song to her called "The Warning." She made the songs "Obsessed" and "Clown". He made the songs "Superman", "Bagpipes from Baghdad", "Jimmy Crack Corn", and "The Warning" where she was mentioned.
Select a year range below to see the deadlines.
- All deadline information is on the CUW website. The links above will direct you to that site.
- Other graduate programs follow the undergraduate calendar
- Plan to apply 3-4 months prior to your intended start date
- Visa processing times may vary depending on the country
- Once your completed application has arrived it can take 2-3 weeks to process the documents. Feel free to check the status of your application here!
Edit: To Clarify this is regarding going as fast as possible, i.e. racing, off-road.
I can't see why "modulation" matters so long as you have enough to apply near maximum braking force without locking the wheel.
Surely the brakes should always be used to near maximum stopping power, while using the duration of application to give the desired deceleration in as little time as possible (as late as possible)?
Wouldn't any other approach be inefficient and result in riding slower?
I have held this (logically derived) belief for a while and been around similarly minded people, but recently I have come across a surprising amount of people who always go on about modulation when brakes are being discussed and how they need great modulation to get "just the braking power they want".
Is their preference a matter of what they want trumping what they actually need for maximum efficiency?
I was always led to believe (with sensible reasoning) that dabbing or feathering brakes or in any way applying them, to anything other than near maximum capacity, is simply incorrect brake use (when efficiency is at stake.)
Am I missing something?
3 07 pm
well here goes, october is over and november is here, will pass by faster than you think, somehow i feel that some heavy piece crap is about to hit me very hard and I have no idea when, i think it's already did, and I am slowly feel it. i just don't know what is causing it yet.
so whats the problem chien? can you even figure out what it is? or just to afraid to ask what is the cause of it? what heavy crap do you anticipate later and how do i feel is affecting me? more like my current situation now?
ouh wth, this kind of crap make me cannot think straight, anyway screw it, my worker invited me to his church this sunday and I'm thinking whether shud go or not, the previous times I had encounter with a friend who was catholic and I ady have problems like that, I'm not sure I want to go through this same problem again, everytime with a christian there is always a problem! @#$!
i just wonder why I got to think of this, it seems that everytime I met a christian bad things happen to me, cut of friendship obstacle that takes away something for me, it's like they are main reason for my curse! but then again it's just me, i just feel everytime i come across them something shitty always happens.
air cond is in the office is also kind of cold, makes me feel sleepy sometimes, it's alot to complain about but i have my reasons, anyway next checkpoint is CF 2013, i guess shud expect some reunion over there other than that nothing else to expect.
shud I stop complaining? i might sounds immature for me at my age but what can i do about it? I will think about it
Welcome to sktime#
A unified framework for machine learning with time series.
an easy-to-use, easy-to-extend, comprehensive python framework for ML and AI with time series
open source, permissive license, free to use
openly and transparently governed by the user and developer community, with a charitable core
a friendly, responsive, kind and inclusive community, with an active commitment to fairness and equal opportunity
an academically, commercially neutral space, with an ecosystem integration ambition and neutral point of view
an educational platform, providing mentoring and upskilling for all career stages, especially early career
unified API for ML/AI with time series, for model building, fitting, application, and validation
composite model building, including pipelines with transformations, ensembles, tuning, reduction
interactive user experience with scikit-learn like interface conventions
In-memory computation of a single machine, no distributed computing
Medium-sized data in pandas and NumPy based containers
Modular, principled and object-oriented API
Using interactive Python interpreter, no command-line interface or graphical user interface
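As a toy illustration of the scikit-learn-like interface convention mentioned above (not an actual sktime class), a forecaster exposes `fit`, which returns `self`, and `predict`, which consumes a forecasting horizon:

```python
# Hand-rolled "naive" forecaster, for illustration of the interface convention
# only: it simply repeats the last observed value for every future step.
class NaiveLastValueForecaster:
    def fit(self, y):
        self._last = y[-1]
        return self                      # fit returns self, scikit-learn style
    def predict(self, fh):
        # one prediction per step in the forecasting horizon fh
        return [self._last for _ in fh]

forecaster = NaiveLastValueForecaster().fit([3.0, 4.0, 5.0])
print(forecaster.predict([1, 2, 3]))  # -> [5.0, 5.0, 5.0]
```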
As a game developer I'm happy to see the iPad 4th gen come with 2x graphics performance from the new A6X chip. Now it's the only device that can deliver a gaming experience on a beautiful 10-inch retina display without compromise on performance. And it's the only device that could possibly deliver console-quality games to mobile devices. The iPad 3rd gen is fillrate-bound because of the retina display: although it's 2x faster than the iPad 2, the resolution is 4x larger. For this reason many 3D games choose to run at 1024 * 768 to avoid framerate drops. Here's a demo of our working title BEATS OF FIST. The game is running at 2048 * 1536 resolution at an average of 35 fps, under the condition that we limit it to 2xAA and reduce the realtime godray light sources from 6 to 1. But with the new iPad 4th gen I can predict it will run constantly over 50 fps with 2xAA and up to 6 light sources, which would match the results we have from testing with the iPad 2, iPhone 4S and iPhone 5.
Google’s popular Chrome browser rightly has a reputation for sucking up all your RAM and processor resources, and it seems Google has finally heard the screams of users, as the company is working on a new mode which would limit the damage the browser can do to your system.
The so-called ‘Never-Slow Mode’ was found on Chromium Gerrit, and would put a hard limit on the amount of resources a page can use.
Developer Alex Russell explains the function of the new mode as such:
“Adds `–enable-features=NeverSlowMode` to enforce per-interaction budgets designed to keep the main thread clean (design doc currently internal).
Currently blocks large scripts, sets budgets for certain resource types (script, font, css, images), turns off document.write(), clobbers sync XHR, enables client-hints pervasively, and buffers resources without `Content-Length` set. Budgets are re-set on interaction (click/tap/scroll). Long script tasks (> 200ms) pause all page execution until next interaction.”
While the feature will likely result in a much faster browser, completely blocking larger scripts will likely also break a lot of web pages.
Chromium will soon be powering Edge, and Microsoft’s developers have already started contributing to the rendering engine.
Hopefully, with Microsoft’s help, Google can work at making the Chrome browser less of a drag on our PCs without breaking the increasingly complex and prevalent web apps.
With preparations now in place for ETH Lisbon (come meet us!), we turned our focus this week towards improving user and developer experience with decent.land tooling. This update also has a few subtle hints of big things to come, if you know where to look. 🔮
The APIs behind ar.page and other applications that rely on ANS were overhauled this week to bring load times down to 1-3 seconds - from roughly 10 seconds previously - improving the load time of ar.page by at least 300%.
ans-testnet API has since been deprecated in favor of
Read more in the docs.
The design’s final, the POAP is ready to go -- all that’s left for you is to meet our team at ETH Lisbon and claim it. There’s just 100 editions to go around, each entitling a holder to claim an ANS domain as part of the private beta.
🎟️ Register for the Arweave events here.
Support for Evmos -- the EVM compatible app chain for the Cosmos ecosystem -- was added to the Ark backend this week in preparation for a full integration. This will mean Ark Protocol can link your Arweave master identity to your Evmos address and return data about token and NFT holdings on the chain.
The integration is our entry to the Evmos-Covalent #OneMillionWallets Hackathon, which starts on the 7th of November.
One of the biggest challenges for Arweave ecosystem developers is the time it takes for contract interactions to reflect back to the user. In other words, it was slower than you’d expect or tolerate a web3 app to be -- until EXM came along.
EXM is a lightning fast execution layer for Arweave contracts that offers instant finality for users and greater flexibility for the developers. From today, all identity links triggered through ark.decent.land use EXM on the backend, and Ark is just the first step. In the coming weeks, we’ll be rewriting all vanilla SmartWeave contracts to use EXM, so expect serious performance improvements very soon.
Improvements to Ark this week include:
EXM integration and state migration
API performance upgrades (gzipped endpoint, in-browser compression)
NEAR oracle gas optimization
The ANS API is now ready to be integrated into wallets (some big news on that soon maybe 😎)!
Instead of filtering the whole ANS state and searching back to tie a label to an address, developers can now resolve an address just by knowing the .ar label --
🛠️ Try it out
Watch out for a huge announcement around the decent.land social layer soon -- get to know first and be an early adopter in our Discord. 🛸
DCMI Bylaws Revisions
The DCMI Governing Board announces its GB2015-2 decision to revise the DCMI Bylaws. The major focus of the revisions is on the refactoring of roles of the Advisory Board and Directorate with regard to DCMI conferences, meetings, educational programming and Initiative outreach. The revisions are part of the ongoing fine-tuning of the Bylaws following the major restructuring of DCMI governance in 2014. The revised Bylaws can be found at http://dublincore.org/about/bylaws/.
Hey! I'm playing with Shortcuts and I'm finding that when I append text to a note via identifier instead of text, it doesn't paste the text. Is this a known bug?
Could you post the x-callback-url you’re trying to use? Also, how did you obtain the identifier?
I got the identifier from Reminders, but I have found that the reminder's ID is different from the note ID
That’s correct, the two are not related. Unless you have created the note through x-callback-url there is no way to programmatically obtain the ID at the moment, so you have to use the note and project name instead, making sure you keep them unique.
A bit of a workaround to obtain the ID would be to, in Agenda, copy a note as an Agenda link, it will look something like:
agenda://note/B012BFE8-1C5B-43EB-87A3-7345D2E4032B the last bit is the unique identifier of a note in this case.
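As a small sketch, that identifier can be pulled off the link by splitting on the last slash:

```python
# Extract the note identifier from an Agenda link like the one above.
link = "agenda://note/B012BFE8-1C5B-43EB-87A3-7345D2E4032B"
note_id = link.rsplit("/", 1)[-1]
print(note_id)  # -> B012BFE8-1C5B-43EB-87A3-7345D2E4032B
```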
Yes, I have done that. I have created a shortcut to append the URL to the note. Then with another shortcut I can create a principal reminder which has the URL of the note (not the reminder), and I can play around with it. So I have a shortcut that completes a reminder and attaches a new checkbox with a reminder to the note. I have been playing with them all weekend