url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://www.windowscentral.com/readit-updated-windows-phone-battery-fix
code
Readit updated to fix battery and heating issues, new features also added Readit has been out for Windows Phone since early November. The app quickly become one of the best Reddit apps in the Windows Phone and become our personal favorite. It’s a stunning app that has seen rapid success and updates since its launch. There’s even a Windows 8 version currently being developed. Version 1.4 of the popular app is now out, so let’s check out what’s new in this update for Readit. Readit for Windows Phone was last updated to version 1.3 in mid-December. That update was notable for including things like battery/heat improvements, low memory optimization, and various tweaks to the usability of the app. Today we’re looking at version 1.4 available in the Windows Phone Store with the following features: - The Battery Issue should now be solved, the bug that was eating up CPU time is now gone - Heat should no longer be an issue because CPU is not running 100% the whole time - App bar in all submit/edit dialogs - provides a submit button so you no longer have to exit the textbox before submitting - Gfycat integration = smoother/faster gifs - uses a ton less memory and much less power. The gifs are now converted to an h264 encoded mp4 and played as a looping video in HQ - what an awesome service. (We will soon include options to pause and reverse gifs too!) - Landscape now supported in subreddit view, post view & comment dialogs - Rotation Lock - only persisted for one session until you close the app completely - set when viewing a subreddit - You can now publish blank self-text posts - Album viewer performance/general improvements - Completely new markdown parsing engine - Comment images, albums and gifs now load inline - Images are now saved with the condensed post title as the name - Hide read posts - now respected in swipe view - Context menu doesn't open when you tap a hyperlink now in a comment - Post doesn't zoom/zoom out when you tap a hyperlink now - Improved performance with video player - Critical - Fixed bug with self-text posts only saving one character - Fixed bug with taking extra back buttons to exit a post - Fixed bug with swipe view getting stuck after releasing your swipe gesture - Fixed bug with swipe view not disabling correctly - Fixed bug with hierarchy indicator appearing when there are 0 comments - Fixed bug with multi-Reddit refresh causing a crash code - Swipe view now disabled correctly when viewing a linked post - You can now view comments on first launch without having to restart the app - A ton of other general enhancements/bug fixes/UI tweaks As usual, the developers behind Readit provide a highly detailed changelog for us to all parse through. That also means that the actual update to version 1.4 of Readit is pretty substantial. We normally joke that apps “feel faster” when we don’t see a changelog with a given update, but this version of 1.4 does feel faster. No joke. The other updates to the app are important and worth highlighting. For example, the biggest complaint against Readit was that the app would kill batteries and heat up devices. The team has found the bug that was eating up the CPU and killed it. For those who experienced any heat/battery problems that should be fixed in this update. Readit also picked up a lot of smaller updates that enhance the app overall. For example, you now have landscape support in the subreddit view. Comment images, albums and gifs now load inline, which should lead to a better reading experience. Speaking of gifs, the app includes Gfycat integration should should lead to faster and smoother gifs in the app. The app will use less memory and power by converting the gif into an h264 encoded mp4 and playing as a looped video. A future update will include the option to pause and rewind gifs. Readit 1.4 includes a ton of other changes and you should read the changelog above. It’s a big update and we’re happy to see the direction this already excellent app is heading. However, the Windows Phone 7.x version of Readit will no longer see features added as the code has forked from Windows Phone 8. The team will still fix any major bugs in Readit for Windows Phone 7.x. Want Readit for Windows Phone? It’s our favorite Reddit app on Windows Phone. Grab it in the Windows Phone Store for $1.99 (trial included). Windows Central Newsletter Get the best of Windows Central in your inbox, every day! The more updates the better Yes, when the app came out, I really enjoyed it aesthetically. However, it quickly became clear that it wasn't as practical as Baconit, as it had trouble rendering images with poor service and caused lots of battery drainage. Glad to see these fixes. Perhaps I can switch back to it now. It's still just a tad to laggy unfortunately...if compared with Baconit. another of those epic changelogs :) Best app for reddit/news out there Fast app resume is still bugged. If I leave the app from a post and come back to it later it comes up with a blank post and I have to back out. This was the biggest reason I went back to baconit as it makes pinned subreddits useless. I'll come back to if that bug is fixed because its fantastic otherwise. As far as I understood the developer, this is related to WP memory management and isn't easy to fix. Also, it happens almost exclusively on low memory devices (read: 512 MB). Gonna be fixed eventually, of course. Soooo....this update completely broke gifs for me. Like, they won't even display now. They just sit there and load. They were at least working before and they're definitely not "smoother", unless "not at all" is considered smoother.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00721.warc.gz
CC-MAIN-2023-14
5,650
42
https://bytescout.com/blog/4-javascript-shortcuts.html
code
This is actually an inefficient and time-consuming way of declaring variables. The quicker way to do it involves using the ‘var’ keyword just once, followed by the variables that need to be declared, separated by commas. The code would look something like this – var Michael, Rick, Bob This method is not only useful for declaring the variables, but can also be used to initialize the declared variables, like this – var Michael=“15yr”, Rick=“28yr”, Bob=“56yr” So, the next time you prep yourself to write code, try some (or maybe all) of these shortcuts and watch your work get infinitely smoother. You can check for SQL tips needed for your coding. About the Author ByteScout Team of Writers ByteScout has a team of professional writers specialized in different technical topics. We select the best writers to cover interesting and trending topics for our readers. We love developers and we hope our articles help you learn about programming and programmers.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738864.9/warc/CC-MAIN-20200812024530-20200812054530-00004.warc.gz
CC-MAIN-2020-34
980
8
http://forums.cgsociety.org/archive/index.php?t-703293.html
code
Well, I recently finished work on the new version of my pose store scripted helper plugin. This is a helper object designed to be used similar to the morpher modifier, but for bone based rigs. It allows creating, animating and editing poses, or any position / rotation of any objects in your scene (Excluding some max objects, such as target cameras) It was created so that one may have the benefits of a bone based rig (plenty of control) along with the speed of the morpher modifier. INSTALLATION: Copy the "posestore-plugin-v0.20.mse" file to your <<3ds Max>>\Scripts\Startup directory. This will load the script on startup. Download Pose Store v0.2 (http://www.jeremymassey.com/scripts/posestore-v0.2.zip) Please read the included readme file. I have also attached a small SWF video file with a short demo on using the pose store helper. This has been tested on Max 2008, but should work in earlier versions of Max. Feedback is appreciated, and if you have any questions or problems feel free to post here. Hope you all enjoy! EDIT: Screenshot of UI attached.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794865468.19/warc/CC-MAIN-20180523082914-20180523102914-00254.warc.gz
CC-MAIN-2018-22
1,063
7
https://www.daniweb.com/community-center/daniweb-community-feedback/threads/48803/which-way-in
code
My browser homepage used to always be the forum index, until you started blogging ;) Now it's the main portal page. However, regardless of what my browser homepage is set to, my first click is always Today's Posts. The tutorials I can add. The blogs, I was thinking of adding, but I figured it was sorta redundant because when you click on the a category, blog entries appear right there at the top. If anyone else agrees this would be a good idea, I'll reconsider. The code snippets, on the other hand, are another matter. Because there are virtually an unlimited number of languages that could appear in the code snippets, I can set the dropdown to list the top ten languages and then have a link to "all languages". I don't want the dropdown to be infinitely long, obviously. I just want to add that I bought a new computer which arrived today, so it's going to be a little bit before I can get started on this because this machine doesn't quite feel like home yet. I need to install all of my dev tools and customize my settings before I can get into programmer mode.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591455.76/warc/CC-MAIN-20180720002543-20180720022543-00119.warc.gz
CC-MAIN-2018-30
1,071
4
https://ashinyworld.blogspot.com/2011/11/social-media-is-for-boys.html
code
Yesterday I went to a conference for the voluntary sector on tech and social media. The audience was an even split, in fact possibly more weighted towards female attendees. The scheduled speakers for the first half of the day? Entirely male. The leaders of the sessions in the afternoon which were unconference style? An almost even mix, taking into account the workshop sessions. Now, there were what felt to be subtle intimations that I should not be complaining if I were not prepared to stand up at the front myself. There was equally, I think, some assumptions that I was in some way upset to not be asked. I'm nothing to do with the voluntary sector right now and I am not a social media expert. I am not famous for anything. I have nothing to speak about and I didn't see the call for speakers, more importantly. Had I had something to say, and seen the call for speakers, I absolutely would have offered. I'm not much scared of a room full of people any more. The fact still remains that the other women in the room weren't at the front either. And as I commented at the time, this was not an accusation, the conference yesterday was no more guilty than every single other conference or unconference I have been to, though unconferences tend to be slightly better, especially govcamps for some reason. I'm really bored of it. But most of all, very most of all, I'm still boggling at the irony of one of the female organisers asking an entirely male panel why they thought there weren't more female speakers. So, I'm going to ask you, because I know there are many women who read this. Why don't you offer to speak at conferences? What is it that holds you back from running unconference sessions? Why are we massively unrepresented at the front and yet have plenty to say online and across social media?
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817200.22/warc/CC-MAIN-20240418061950-20240418091950-00348.warc.gz
CC-MAIN-2024-18
1,811
7
https://www.tr.freelancer.com/job-search/vb6-active-directory/4/
code
I want Music App Promotion and Many Active Users for my app preciso de uma dll, para utilização em VB6 e PHP, que a partir de uma chave de acesso, faça o download do xml da nf-e, sem necessidade do certificado digital. Temos um sistema de gestão, onde nossos clientes dão entrada nas compras, atraves da leitura do xml. Alguns clientes usam o certificado A3 e na maioria das vezes este certificado é utilizado p... I am a Verizon Fleet customer. Lots of data is available online. VZ Fleet has an app using Postman, which I am not familiar with. I want to be able to get current odometer readings for ~40 vehicles into a spreadsheet that will constantly have updated (within a 15 minunte period) data. I need an existing website rebuilt. I need you to design and build a landing page. Hello, I am looking for an expert VB6,VB.NET Developer who are expert in migrating the vb6 project to vb.net and redesign the application. Hey All, I am needing my website which is run on Word Press to have a few minor edits. The chosen candidate will be an expert with Word Press. We expect you to know: -How to add products/change product pricing. This includes product variations. -How to add/remove lots of text from a Word Press page -Adding images to a Word Press page -Making widget that can be updated with new statistics -Making... Setup and configure Microsoft EMS Mobility Suite. Setup and configure Azure active directory premium. Enable Azure rights management. Setup and configure Microsoft in tune. Knowledge transfer and deployment support documentation Need a script which let chrome browser stay active in the background DESIGN PROJECT ON SOFTWARE: AUTOCAD :-Assembled View of ...of lever Safety Valve, Assembled view of Feed Check Valve, Plummer Block Assembly. CREO 2.0 :- Exhaust Manifold, Open jeep concept, bike concept, Active Loadmatic Bag Loading. CATIA V5 :- Radiator Tank Cover, Aircraft Design. UGNX 9.0 :- Milling fixture assembly. I am looking for one or two English writing - no native proficiency, but you need to be able to chat in English - freelancers. Your responsibility is going into facebook groups for amazon sellers and promote a specific re view service. The specific action depends on the group's rules and on individual postings in the group. I will explain different settings in detail once the right candidates... We are looking for a modern minded person with patience and ability to walk our clients through our onboarding process. Strong bandwidth video calling required. Meaning solid internet connection is required. Job is focused around leading and guiding our clients though the application process on our company website. Strong grasp of english required. Solid availability preferred. We heve an app with 4000~5000 daily active users in Taiwan. They have to watch a 30-second ad before using this app. How can we generate more app revenue through this 30-second ad? How can we define our target market? We are looking for someone who can advise on the above issues. I need a proficient person with Kajabi, Clickfunnels, Active Campaign and Zapier? A specialist and not a generalist. I’ve already integrated Clickfunnels to Zapier to ActiveCampaign. I’m simply trying to confirm if someone else knows it, perhaps better than I do so that it can be maintained and updated if needed. I haven't started on Kajabi at this I want a freelancer who can make a project for me in which he has to install active directory domain services in server 2016 in my computer through teamviewer. I want to build an iptv app linked with xtream codes panel and Stalker IPTV platform - compatible with a...xtreamcodes panel and Stalker IPTV platform - support of vod subcategory anyone who never done this before please dont contact me, if anyone doesnt have skill of iptv with active code and they will develop iptv app for first time will be ignored. I have developed VB6 desktop application for Attendance & Leave management system. Its working fine. Now I want to convert this application to Web Application in Asp.Net. I want same functionality and forms that are currently in VB6 Desktop application. I don't want from you to do more work on UI, just use default UI of latest version. I have attached
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221214713.45/warc/CC-MAIN-20180819070943-20180819090943-00394.warc.gz
CC-MAIN-2018-34
4,235
15
https://community.airtable.com/t/number-field-maximum/46490
code
I’m in the process of creating a form the interested clients can fill out and submit. One of the fields is a Number (Integer) and I wanted to know if there was a way to give a maximum value. We don’t want to accept the entry if the value in this field is over 100. As far as I know, there is no way to pop up a message or to prevent the entry from being submitted. We would not want the client to proceed with all of the actions driven by the form if their numeric value is greater than 100. I know I can use a formula to determine if the value is greater than 100, but I don’t know how I could use that formula in a form. Thanks in advance. Data validation isn’t available in Airtable, unfortunately. If you have a Google account (or other survey/form service), you can use an automation to pull the form data: Thanks for the info, Andy. Your best bet is to use a third party form tool for data validation. However, it is possible to configure an Airtable form so that it cannot be submitted if a number is too high. This involves the use of a required, conditional single select field that has no options. - Conditionally show the single-select only if the number field is over 100. - Make the single-select required, but give it no options so that the form cannot be submitted. - Display the single-select as a list to hide the drop-down. - Edit the label name and/or help text for the single select to provide feedback to the user. - If you also have a minimum value, adjust the conditions for the single-select accordingly. This system is not nearly as robust as a proper form tool with actual data validation. The error text that Airtable shows when the user tries to submit the form when the number is too high will be confusing to the user. However, this system does work if you cannot use a 3rd party tool. This topic was solved and automatically closed 3 days after the last reply. New replies are no longer allowed.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104215790.65/warc/CC-MAIN-20220703043548-20220703073548-00577.warc.gz
CC-MAIN-2022-27
1,934
14
https://rust-lang.github.io/wg-async/vision/roadmap/polish/lint_large_copies.html
code
- Identify when large types are being copied and issue a warning. This is particularly useful for large futures, but applies to other Rust types as well. |Lang team initiative proposal||💤| This is already implemented in experimental form. We would also need easy and effective ways to reduce the size of a future, though, such as deliv_boxable.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100603.33/warc/CC-MAIN-20231206194439-20231206224439-00469.warc.gz
CC-MAIN-2023-50
347
3
https://wiki.hyperion-project.org/threads/hyperion-with-chromebox-pmp.70/
code
I have a Chromebox with Plex Media Player installed (embedded x86_64). I'm also using Lightberry USB. Compiled Hyperion from source with frame buffer grabber ON. Problem is, it only works for boot effects as well as controlling via hyperion mobile app. It does not work for video playback and menu. Am I using the correct grabber? Do I need X11 or QT5? Here's the json config: Here's the log file. Doesn't contain much. I have hyperion working with RPi2 before moving to this setup, that was with PMP and spidev.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703505861.1/warc/CC-MAIN-20210116074510-20210116104510-00416.warc.gz
CC-MAIN-2021-04
512
1
https://lists.zeromq.org/pipermail/zeromq-dev/2010-April/002993.html
code
[zeromq-dev] Assertion failed sustrik at 250bpm.com Thu Apr 8 15:00:04 CEST 2010 > As one of my subscriber has received all my messages, I think the > network delivered all the messages. I have attached to this email, the > dump of Wireshark of the network transfer. What surprised me in this > graph, is the "holes" we can see in the network usage. If I follow the > sequence number in the epgm packets stream, > I notice "holes" in packet sequence number at the moment where according > to the graph there is no traffic. > Everything looks like some network packet has not been transferred to > wireshark. However, these packets has been transmitted on the network > otherwise none of my subscribers will have reported a correct transmission. > If some packets have not been transmitted to wireshark, I guess the same > things happened for my 4 subscribers which has lost messages. I have no idea how dispatching the packets to individual applications is implemented. Steven may have a better understanding of the problem (I'm cc'ing OpenPGM mailing list). > In one of your previous e-mail, you talked about > kernel implementation of multicast packet dispatching. > My kernel is linux 2.6.28-18. Do you think that this multicast packet > dispatching done by the kernel could be the reason of the behavior I am > If yes, I guess the solution is to decrease the RATE until the kernel is > able to reliably dispatch all the packets received to all the involved I don't think the rate is problem. By default it's 100kb/sec which is extremely low and the system should experience any problems with it. More information about the zeromq-dev
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817184.35/warc/CC-MAIN-20240417235906-20240418025906-00882.warc.gz
CC-MAIN-2024-18
1,637
27
http://www.yfztech.com/forum/index.php?s=4a35c15c2058c5a1626f42d5bb1b54c1&showtopic=8976&page=3
code
Cal.... why not bring your yfz to the dyno? Test the PC huh? Hmmm.... not sure, even though I would like to test it, since no one can really buy one, or the insuing replacement parts I like that pipe, just would like to keep the testing to pipes people can still get or get parts for. Think of it like this with the jetting and testing each pipe will require if I get 3 pipes done in 1 hr I am doing great. Still I counted 9 pipes for sure I will test but looks to be around 11. That is atleast 5hrs just in that, not to mention the time I will be charged to set up the dyno and bring the motor up to temp. Could be a 7-8hr day. Sounds expensive to me. My prediction atleast about the GYTR is that for the sound (which I can't test strapped down inside a trailer) will have the best power. I believe Nmotion 9" and the shorty big core Sparks will be the best for peak/ all around pipe. The pug and GP out of frames? Good question. A very short exhaust pipe (pug) against a long out of frame..... might have two very different curves to them. The GYTR best for power to sound ratio. Oh ya forgot another prediction: :jerry: :jerry: :jerry: Edited by tersejr, 10 December 2006 - 01:36 PM.
s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948515165.6/warc/CC-MAIN-20171212041010-20171212061010-00765.warc.gz
CC-MAIN-2017-51
1,186
9
https://ggirelli.info/tos
code
Behind the curtain We do not handle cookies, but in the future there might be exceptions where cookies are handled by third-party service providers. We will add more details here if and when this will be the case. Still, please, note that we always use only GDPR-compliant service providers. For more details on what “GDPR” is, please refer to this page. For clarity, here is the list of the cookies we use: Here is a list of remote services used by this website: - We use cdn.jsdeliver.netto retrieve the latest versions of bootstrapscripts and stylesheets. - We use GoogleFonts on this website, and import them directly from fonts.gstatic.com. On the topic of fonts, we also use FontAwesome for some icons (via - This website is generated via Jekyll with the jemojiplugin, which serves emojis by GitHub (via - For user interactions, we adopted the Webmention recommendation. See here for more details about it. Filopoetica by Gabriele Girelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License. Based on a work at https://ggirelli.info.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662587158.57/warc/CC-MAIN-20220525120449-20220525150449-00744.warc.gz
CC-MAIN-2022-21
1,091
13
https://mail.python.org/pipermail/python-dev/2004-November/050018.html
code
[Python-Dev] python 2.3.5 release? bac at OCF.Berkeley.EDU Fri Nov 19 05:09:31 CET 2004 Anthony Baxter wrote: > Tim Peters wrote: >> Anthony announced his intent to produce a 2.3.5 release, after 2.4 >> final. Brett announced his intent to send a "one-month warning" about >> 2.3.5 so people could lobby for patches. Since I haven't seen such a >> warning yet, 2.3.5 must be at least a month away <wink>. > At the current time, it looks likely that this will be in early January. I will probably make some announcement in early December to remind people to get stuff in, especially with people having Winter vacation and thus either having less time because they are off galavanting around or more because they get to relax in front of the fire coding. Anthony, can you keep me posted if you think it will slip much past early January? More information about the Python-Dev
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267860557.7/warc/CC-MAIN-20180618125242-20180618145242-00250.warc.gz
CC-MAIN-2018-26
873
16
http://askubuntu.com/questions/330493/how-to-set-up-persistence-in-ubuntu-usb-live-boot
code
I have Ubuntu 13.04 running off of a live USB which was created using PenDriveLinux. It works great, but I'm having a problem. When creating the USB, I selected the data persistence option, and set it to the highest value (~4GB), since I have a large flashdrive. However, whenever I boot up, and select the USB as the boot device, it takes me to a screen that gives me the option of installing. It makes sense to ask the first time, but this keeps happening everytime I boot into Ubuntu, and the data I saved to test, is gone. Is there a step I missed to configure the persistent disk inside Ubuntu? Thanks for any help!
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115869264.47/warc/CC-MAIN-20150124161109-00108-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
620
3
https://www.vn.freelancer.com/projects/php-perl/shipping-interface-tool/
code
Looking for a programmer who is familiar with the DHL developer tools. You can find out more information at: [url removed, login to view] I would like a site to be developed similar to ipsparcel (dot) com. This site interacts with DHL via their interface using the developer tools to provide quotes and to print shipping labels. DHL will provide a quote via their system and I would like to add a markup and display the quote to users of the site. Users can then pay for the shipping label via my merchant account and the shipping label will be presented for the user to print. Here is the flow of the system: 1-User enters shipping information 2-Site exchange information with DHL for the quote 3-Site adds markup and display quote to user 4-User pays for the shipping label 5-User then print out the shipping label generated by DHL system 6-User can track shipping No web design services needed. Scriptlance escrow required. Please let me know if you have any questions before placing a bid. Thank you.
s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794867041.69/warc/CC-MAIN-20180525043910-20180525063910-00068.warc.gz
CC-MAIN-2018-22
1,004
13
http://android.stackexchange.com/users/25407/kush?tab=summary
code
|visits||member for||2 years, 3 months| |seen||8 hours ago| a dreamer, over-thinker, short-tempered person, sometimes a weird writer, values emotions, respects intelligence more than anything, a Barney's follower but actually has Ted with in and, professionally a software engineer.. 0 Votes Cast This user has not cast any votes
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131298819.92/warc/CC-MAIN-20150323172138-00289-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
329
5
https://naturesmace.com/best-snake-control/
code
Best Snake Control Best Snake Control: What are the Best Ways to Get Rid of Them? For you to get rid of snakes within your premises there is a need for you to become knowledgeable on how you will handle the situation. Many ways on how you can control the snake in which you will not be bitten by it is accessible. Below is a list of best snake control that you may want to learn: If the snake will just pass through in your backyard and seems like it is harmless, then there is no need for you to worry. In this situation, you can either call a professional expert to catch it, use snake repellent, or keep snakes away by sealing up your home or buy snake trap. However, if you know how to remove snake physically, remove them in your area. However, this kind of action is quite risky and will need expertise, experience, and special tools. Some of the expert uses some tools like snake tongs or hook to catch snakes and put them in sack then relocate them away from your home. With this technique, you will be no longer in problem. However, if you have no courage to catch it by yourself and no one can help you around about your problem, then here are some other techniques you can use. One of the best things you can do is to identify the things that attract snakes in your area like debris, forage, and cover. If you have lots of these in your area, more snakes will get attracted to come in your home where they can live or hide. Therefore, you can eliminate all the weeds, debris, or clutters that make them feel free and comfortable to make it as their shelter. You can also perform rat control in your home to ensure that snake food will be lessened. It also helps to have perimeter fence. Snake trapping is another effective way to get rid snakes in your yard. Setting it in your area will give you assurance that snake will be caught even without your presence. It is one of the safest option you can put in your pool area, garage or inside your home. There are also good snake repellent that is available in the market. However, you need to ensure that you will choose the best product that will offer true effectiveness that will suit in your budget. These are just few of best snake control you can do once you spot some snakes in your yard or home. With these techniques, you will not harm the snake and at the same time, you will be safe in eliminating them. These options are commonly effective to use in your yard. Whatever option you will choose from the list, always take safety and security as the utmost priority. Now you can make your property safe and will no longer make your children get scared to play or do some activities inside or outside of your home.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710962.65/warc/CC-MAIN-20221204040114-20221204070114-00478.warc.gz
CC-MAIN-2022-49
2,681
9
https://developer.jboss.org/thread/43740
code
Normally JAAS requires the username and password to complete the authentication/authorization, what if the additional login information needed, for instance company name, ssn. the search in the forum found that the additional login information could be concatenated with either of the username/password. are there any other ideal approach to satisfy the requirements? thanks in advance for any recommendations Disable container authentication and use your own authentication.
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583516194.98/warc/CC-MAIN-20181023132213-20181023153713-00485.warc.gz
CC-MAIN-2018-43
475
2
https://bitfab.io/blog/prusa-slicer/
code
Although a few years ago the slicer landscape was focused almost exclusively on Cura by Ultimaker, there are now plenty of free slicers available. Among these, one of the best and most popular at the moment is Prusa Slicer. Even if it is by Prusa, it is suitable for almost any printer on the market. Features such as monotonic filling, ironing of the top layers, correction of the elephant foot, manual placement of supports, variable layer height … We will show you all this and much more today at Bitfab. What is Prusa Slicer? Prusa Slicer is a slicer for 3D printers. A slicer is a program that converts our 3D models into a G-code file, the language that 3D printers (and other machines such as CNC or milling machines) “speak”. This software, owned by Prusa, was originally based on an open source slicer called slic3r, although this last one has been discontinued for years and Prusa slicer seems to be its “heir”. Prusa is dedicated to actively develop this software, with an internal team of 7 developers, keeping it open source through its github. In this github we can find the most recent versions with the latest functionalities, even without a complete validation. What features does Prusa Slicer have? Prusa Slicer is one of the most developed software, which means that it is updated a lot and very often. It has assimilated the best features of other software and, in its latest version to date, 2.3 (available on its github), has many features and functionalities. We are going to tell you about the features that we like the most so that you can see all that Prusa Slicer has to offer, whether you are a pro maker, a novice or a professional in the sector. Modes for all: from experts to novices The developers of Prusa Slicer know that their software has many features, so many that it can be overwhelming. That is why it offers us the possibility to choose between 3 modes: simple, advanced and expert. Each of these modes unlocks more and more functions so you can choose the one that best suits your level of experience to avoid touching what you should not. This slicer is structured in several sections, each one having parameters that are unlocked depending on the level of complexity that we choose: Preconfigured profiles for printers and filaments One of Prusa Slicer’s greatest virtues is immediacy. If you have a Prusa printer (we have several) you can use its pre-configured profiles to print in a very refined way without having to touch practically anything. Luckily, each update includes new pre-configured profiles for other brands’ printers and filaments. In addition to pre-configured profiles for very popular printers such as Creality or Anycubic, there are also profiles for many of the most popular filaments on the market. All these profiles that we can take for granted represent a lot of work, since each printer and filament has had to be tested and tweaked to obtain parameters that give optimum printing quality. For us, as users, it offers a lot of value, since it allows us to focus our time on designing and printing our parts, without having to spend time designing from scratch a personalized profile for our printer. On a personal level, in Bitfab we have thousands and thousands of hours printed on our MK3S using only the pre-configured profiles available. In short, this is the closest thing to plug and play there is. In addition to these more general features of Prusa Slicer, we wanted to tell you about some of the features it includes to make our lives easier and to fully customize our prints. Variable layer height We can change the layer height manually or automatically to minimize printing time while maintaining an outstanding finish. As you can see, using variable layer height we have the best of both worlds: it takes almost as little as printing with large layer height and it looks almost as good as printing with small layer height. You do not need a special printer to print multi-colored models. If we design our pieces in the right way we can make, through skillfully planned filament changes, a multi-color piece, easily and quickly. Prusa slicer allows us to create these g-codes with pauses to change the filament at the height we want very easily. Being a little careful when designing our parts, we can achieve results like these with a normal single extruder/fuser printer. This feature, which was exclusive to Simplify3D years ago, has been incorporated into the latest version of Prusa slicer. By means of a simple menu we can add supports by “drawing” on the zones that we want of our part. This is especially useful in models with a particularly complex geometry where the automatic supports are not precise enough. We can customize how certain parts of our models are printed by using modifiers, which can easily be added and then used to modify the printing parameters of that area of the model. Here is a very interesting video where the guys from Prusa explain how powerful this feature can be. Think of all the things you can do. We, as professionals, love it, because it allows us to “fix” certain aspects of 3D models, such as: adding horizontal expansion in just one area of the model, increasing the number of perimeters in a particularly thin region, increasing the filling of areas that will have to bear more load… Prusa Slicer at Bitfab In Bitfab we use Prusa Slicer for many reasons. The ease of use, the ability to use modifiers, the possibility of using it through the command console… You could say that we are real experts using the software by the Prusa fellows. That is why, if you have any doubt or you want to learn more about this or any other slicer, do not hesitate to contact us. We are experts in digital manufacturing and 3D printing so we can certainly help you.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506559.11/warc/CC-MAIN-20230924023050-20230924053050-00263.warc.gz
CC-MAIN-2023-40
5,804
27
https://quantlabs.net/blog/tag/how-to/
code
I have compiled a summary of the highlights of this new ‘How to beat the Market’ course. This includes: these sections: How to buy Bitcoin on eToro with advanced charts How to install Qt Designer onto Ubuntu Linux for Python Note important update about QT Designer below!! This is the easiest way using an Ubuntu Linux Virtual Machine with Virtual Box https://apps.ubuntu.com/cat/applications/qt4-designer/ https://quantlabs.net/blog/2015/12/running-qt-designer-with-python-pyqtgraph/ Join my FREE newsletter to learn more how I plan to build PYTHON front ends with this tool for automated […] How to get started in quant finance, HFT, automated algorithmic trading Here is a set of tips that makes this posting really popular. It explains how to get a quantitative job or a career. Do realize it is quite old but things may have radically changed. As I hinted yesterday, I wonder if it is […] How to install ZeroMQ with C# and .NET binding and example on Microsoft Visual Studio 2012 or 2010 See towards the end which seems to be a decent tutorial from CodeProject.com Download Zero MQ from: http://www.zeromq.org/distro:microsoft-windows C# bindings: http://www.zeromq.org/bindings:clr or https://github.com/zeromq/clrzmq You could also install the C# bindings (clrzmq) with NuGet: http://visualstudiogallery.msdn.microsoft.com/27077b70-9dad-4c64-adcf-c7cf6bc9970c Just download and […] How to create Forex tick historical database in free MYSQL for Windows, Linux, or Apple Mac OSX I have successfully built a historical tick database for forex. It contains nearly 300 million records of millisecond ticks. I have done all this is using MYSQL since it can be easily ported over to Linux from Windows […] How to use Metatrader forex or stock data streaming for FREE! Export tick data to CSV text file to work with Matlab! How to use MQL 4 / Metatrader 4 forex or stock data streaming for FREE! Export tick data to CSV text file to work with Matlab! Here are the entire resources to get you started. This is very powerful for any cheap ass like me that wants quality data for nothing to be dumped into […]
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506420.84/warc/CC-MAIN-20230922134342-20230922164342-00668.warc.gz
CC-MAIN-2023-40
2,120
8
https://www.allinterview.com/company/451/cisco/interview-questions/366/instrumentation.html
code
Why we using 250ohm resistance during lab calibration and why not using in field in one plz define deeply???1 1360 What is windows server 2012 r2 used for? What is database abstraction layer ? Write a query to fetch duplicate records from a table using mysql? hi friends,im an instrumentation and contol engg student,can u hlp me of suggsting barc written question paper,my id is firstname.lastname@example.org, cel no.9563591747 What is the best way to do multi-row insert in oracle? if the source have total 15 records with 2 records are updated and 3 records are newly inserted at the target side we have to load the newly changed and inserted records What is the use of "stderr()"? What is import javax swing? discuss about myisam key cache. : Sql dba What are the topics in pl sql? hai friends, I need clarifications for some doubts in testing terminology. 1.What is thread testing. 2.What is bucket testing and which automated tool is used to do this test. 3. ERP testin automation testing. 4.What is Data Warehousing testing? 5.What is Implementation testing? 6.What is Shake out testing? please let me have the clarifications in detail Tell me what is htaccess? What is the provision? What happens to child records when a parent record is deleted? How to Get the longfilename from a file?
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703514121.8/warc/CC-MAIN-20210118030549-20210118060549-00369.warc.gz
CC-MAIN-2021-04
1,296
16
https://www.groundai.com/project/estimating-socioeconomic-status-via-temporal-spatial-mobility-analysis-a-case-study-of-smart-card-data/
code
Estimating Socioeconomic Status via Temporal-Spatial Mobility Analysis – A Case Study of Smart Card Data The notion of socioeconomic status (SES) of a person or family reflects the corresponding entity’s social and economic rank in society. Such information may help applications like bank loaning decisions and provide measurable inputs for related studies like social stratification, social welfare and business planning. Traditionally, estimating SES for a large population is performed by national statistical institutes through a large number of household interviews, which is highly expensive and time-consuming. Recently researchers try to estimate SES from data sources like mobile phone call records and online social network platforms, which is much cheaper and faster. Instead of relying on these data about users’ cyberspace behaviors, various alternative data sources on real-world users’ behavior such as mobility may offer new insights for SES estimation. In this paper, we leverage Smart Card Data (SCD) for public transport systems which records the temporal and spatial mobility behavior of a large population of users. More specifically, we develop S2S, a deep learning based approach for estimating people’s SES based on their SCD. Essentially, S2S models two types of SES-related features, namely the temporal-sequential feature and general statistical feature, and leverages deep learning for SES estimation. We evaluate our approach in an actual dataset, Shanghai SCD, which involves millions of users. The proposed model clearly outperforms several state-of-art methods in terms of various evaluation metrics. Socioeconomic Status (SES) is an economically and sociologically combined overall measure of an individual or family, typically based on income level, education level, and occupation [1, 2]. SES can be seen as one’s economic and social position in relation to others and typically divided into three levels (high, middle, and low). An individual with a higher SES means he/she earns more, has a better job or higher education than those with a lower SES. SES nowadays plays an important role in many areas like sociology, economics, public administration, and education. It can help governments to design and evaluate social policies, especially for welfare policy. Recently, companies become more and more interested in assessing people’s SES because it is a valuable demographic feature to many emerging applications, such as customized marketing, personalized recommendation, and precise advertisement [3, 4, 5, 6]. Especially, in personal credit rating, SES is an important factor that helps online banks (e.g., Lending Club111lendingclub.com, one of the largest peer-to-peer lending platform.) to decide the volume of loans they will lend to an individual . Given its importance, various approaches have been developed to measure SES, most of which need to collect at least one kind of the following information: individual income, education or occupation , typically through real-world contacts with the individuals under investigation. For a large-scale investigation covering millions of people, it is usually conducted through household interviews by National Statistical Institutes. Some researchers or professional investigation companies also try to collect SES information through methods like online questionnaires or telephone surveys. However, most of them can only cover a small group of people. Although traditional methods can get very detailed information, the investigators usually publish regional-level statistics instead of individual SES information (which is much more important to many companies). Also, the time gap between two successive large-scale surveys could be very long, which may even be several years. If companies decide to collect SES by themselves, they find that the cost is unbearable and many citizens are also quite reluctant to expose their real income or job information. Even governments of some developing countries are also facing the same problem . Due to the prohibitive costs and time required to collect large-scale individual-level SES information, researchers try to estimate individual-level SES using some easily accessible big data sources like mobile phones call records [8, 7, 9, 10] or online social networks [11, 12, 13], Although most existing big data-based methods can only get a rough income level (low, middle, high) of people. they are still valuable to many companies and researcher, owing to their substantially lower cost and time in estimating SES for a large user population. Further, to better support targeted applications it becomes necessary to improve the accuracy of big data-based SES estimation via better algorithms or different data sources with lower costs or privacy concerns. This paper attempts to answer the following question: Can SES be roughly estimated based on human mobility-related data alone? Data-based SES estimation methods are actually based on an observation that different SES levels of people may have different lifestyles. Lifestyle depicts typical routine lives of people. Large-scale human mobility data like smart card data (SCD) or online check-in data can act as an approximation for human lifestyle. Previous methods [8, 10, 7] based on cellphone discussed some general statistical mobility features. However, these features are simply complemented to specific cellphone features like the numbers of calls and telephone fares. These mobility features may not be enough for organizations (e.g., public transit agencies) which only have human mobility data. In this paper, we study whether we can get a satisfactory estimation of user-level SES when we only get users’ mobility data. As a case of mobility data source, we take SCD generated by smart card automated fare collection systems, which are now widely used by public transit agencies. Essentially, SCD is administrated by a city municipality and records a large number of individual-level, time-stamped and geo-tagged trip data of its citizens [14, 15]. Although a large and growing body of work has studied SCD in different contexts, little attention has been paid to estimate SES based on SCD. We develop S2S (Smartcard to SES), a method for estimating SES based on SCD and other related public information. The main challenges in designing S2S are: Designing effective features related to SES based on smart card data. Designing a model which can utilize different types of features to improve the performance of estimation. To the best of our knowledge, this paper is the first attempt to estimate user-level SES using SCD data. Our main contribution is summarized as follows. We propose a deep neural network (DNN)-based learning approach (S2S), which considers both temporal-sequential features and general statistical features of human mobility. Especially, the sequential aspects are considered in S2S, representing more salient nature of an individual’s behavior in socioeconomic context than traditional general statistical features. We evaluated our approach using actual large-scale SCD data of totally 7,919,137 cards of Shanghai City for 16 consecutive days. The results demonstrate our approach significantly outperforms several baselines. The rest of this paper is structured as follows: related work is reviewed in Section II. Section III introduces the datasets. Section IV discusses the features. The S2S model is proposed in Section V. Experimental results on Shanghai SCD are presented in Section VI. The paper is concluded in Section VII with a brief discussion of limitations and directions of future research. Ii Related Work SES is a widely studied concept in the field of social sciences, especially in health and education analysis . In recent years, companies and researchers pay increasing attention to SES estimation because of its potential in numerous high-value applications like the personalized recommendation and online banking. Though there has been a great improvement in estimating other demographic attributes like age, ethnicity and gender [16, 17], SES estimation still needs more effort. One of the main obstacles is that SES ground truth data (covering a large group of people) is much harder to get than attributes like age and gender. Normally users are more reluctant to disclose their education, occupation and income information. The organizations, which have such data, also seldom open it to the public for privacy reasons. Recently, researchers begin to use indirect SES indicators from some big data sources. These data sources may cover millions of people, recording different aspects of their lifestyles. Ii-a SES Estimation based on Mobile Phone One important data type is mobile phone data. shows that information derived from the aggregated use of cell phone records can be used to identify the socioeconomic levels of a population. provides an analytical model to formalize the relationship between cell phone usage (including mobile phone consumption, social information, and mobility patterns) and socioeconomic indicators (including income and education). estimates Rwandans’ SES based on their mobile phone usage. They design a composite wealth index for Rwandans based on whether they have refrigerator, electricity, television and other belongings. Then they extract features from the mobile phone data. The experiments show that the distribution of wealth estimated from mobile phone data has a strong correlation with the distribution of wealth measured by the Rwandan government. This paper considers multiple factors of phone usage including communication, the structure of contact network and mobility pattern. Different from them, we mainly rely on mobility features and use a different kind of data source (SCD). constructs a simple model to produce an accurate reconstruction of district-level unemployment from people’s mobile communication patterns alone. analyses the relationship between multiple mobility features and SES based on mobile phone datasets of two cities: Singapore and Boston. In Singapore, they take the housing price of living area as SES. In Boston, they use the census tracts as SES.They find that the relationship between mobility and SES could vary among cities, and such a relationship is quite complicated. It may be influenced by several different factors like spatial arrangement of housing, employment opportunities, and human activities. For example, phone user groups that are generally richer tend to travel shorter in Singapore but longer in Boston. Our work is different from in the following ways: 1) we examine the extent to which SES can be estimated from SCD, while they try to figure out the relationships between SES and mobile phone mobility data; 2) we mainly focus on SCD instead of mobile phone; 3) besides the living area, we also consider the work area while labeling people’s SES. Ii-B SES Estimation based on Social Network The social network is another important data source which Researchers pay a lot of attention to. [11, 12, 13] all explore how to estimate people’s SES based on their tweets. They use the job information from the users’ profile as ground-truth. use features like topics, emotions to estimate peoples’ income. Their predictions reach a correlation of 0.633 with actual user income, showing that tweets can be used to predict income. [12, 13] further improve the features and significantly increase the accuracy. analyzes the relationship between SES and people’s activity patterns extracted from Twitter. They find out that while SES is highly important, the urban spatial structure also plays a critical role in affecting the activity patterns of users in different communities. Ii-C Relationship Study between SES and SCD Although SCD records mobility characteristics of a great number of people, the work about the relationship between SES and SCD-based mobility is quite limited. represents each passenger through a sequence of activities (purely inferred from SCD records) and cluster them using k-means. They survey a part of users about their demographic attributes and then analyze the demographic attributes of each cluster. They find that the average income of some clusters is high than other clusters. So it indicates that income may be related to people’s smart card records. introduces an approach to cluster passengers living in Rennes (France) based on their temporal habits. They study how fare type proportions are distributed in different clusters. The Rennes SCD dataset includes fare types like Young subscribers, Regular subscribers, Elderly subscribers and etc. They find out there are some mobility differences between different fare type categories. For example, the clusters mainly consisting of students who tend to get back home early in Wednesday since course hours on Wednesdays end early in France, while other clusters do not have this pattern. This also indicates SCD records may be related to users’ age and occupation. These works show there is some possible relationship between SCD-based mobility and SES. In this paper, we aim to explore whether and how SCD can be used to estimate SES. Iii-a Data Collection We exploit three related datasets in this paper: smart card, POI and housing price. We describe them respectively below. Smart card: The smart card dataset is opened by the Shanghai Open Data Applications contest. The dataset contains all the subway records in Shanghai between April 1st and April 16th, 2015. The example format of a subway record is shown in table 1. One single subway trip consists of two successive records. The first one is created when the user gets into the boarding subway station and begin to travel in the subway system. The second record is created when the user gets out of alighting station. If the fare is 0.0, then the user is getting aboard a metro train, or they are getting off. There are 7,919,137 IDs which can be correctly recognized after data cleaning. When users apply for a smart card in Shanghai, they do not need to provide any personal information. So IDs do not have any relationship with real-world identification, avoiding possible privacy leakage. POI: POI dataset of Shanghai is crawled based on GaoDe Map API Service222lbs.amap.com, one of the major online map providers in China. The categories include Public Facility, Domestic services, Education, Business Residence, Hospital, Hotel, Car services, Sport&Leisure, Scenery, Restaurant, Public Transportation and Financial Services. Housing price: Housing price dataset is crawled from Lianjia.com 333sh.lianjia.com, one of the biggest real estate agency service providers in China. website, which records the house prices and location information of most apartments/houses for selling in Shanghai. We crawl the average housing prices of all communities (a community usually includes many similar houses in one area). There are 1,804 communities. The cheapest one is 10,453 CNY/m. The most expensive one is 99,941 CNY/m. Iii-B Ground Truth Construction There are two problems in Ground Truth Construction. First, some users may only use subway for very few times (1 time) during all 16 days. We need to filter users with too less records. Second, there is no SES information for millions of smart card holders. Automated fare collection (AFC) systems are just designed for billing purpose, so they do not collect socio-demographic information of the card holders in most cities. This Shanghai dataset (appr. 8 million smart cards) is also totally anonymous without any SES-related information, such as occupation, education and income. And we cannot manually relate smart card IDs with volunteer users because IDs have been hashed before opened for researchers. So it is hard to get actual SES label for each ID. We need to find a reasonable SES label for millions of users. Iii-B1 Selecting Frequent Users As shown in Fig 1, although there are millions of subway users, most of them take very few subway trips. the largest group of users (33.04%) only takes subway in 1 day. More than half of people takes subway in less than 2 days. Only 22.8% of users have subway trips more than 7 days. And we also checked the trip numbers, 36.9% of users only took 1 trip. These infrequent users just use the subway occasionally. Subway is not an important transportation method for them. Their mobility data in subway system may be just a random and unimportant action in their regular life. In this paper, we focus on users who have taken subways for at least 7 days. In this way, we selected about 700 thousands frequent users. Though the number of frequent users is much smaller than infrequent users, the total number of trips they take is much more than the others. As shown in Fig 2, more than 60.1% of trips are taken by frequent users, who take subway more than 7 days. Iii-B2 Labeling Frequent Users Getting SES label is a common problem when estimating SES for a large number of people[7, 21, 22]. Many works use the housing price of people’s living place as a proxy to represent people’s possible SES [15, 10, 23, 24, 25, 26, 27]. And finds out that the average housing price and the income level at the corresponding area are strongly correlated (0.88). As shown in Fig. 3, we also held an online survey 444http://wj.qq.com/s2/3598293/4053/, which collect 78 Shanghai inhabitants’ monthly income and housing price. To protect the privacy and get more successful responses, we use income levels (e.g, 5,000-10,000 CNY) instead of accurate numbers. So some answers may overlap in Fig. 3. We use the size of the bubble to show the overlapped number. Bigger bubble means more same answers. We can see, the income level generally increases along with the housing price. Pearson’s correlation is 0.68. The correlation is not so strong as in . This may be partially caused by the phenomenon in China that some low-income young people buy high-priced houses with the help of their families. However, high family income may still also be a “bonus” to people’s SES. So in general, we think housing price is a good indicator of the people’ SES. In this paper, we use people’s house price as an approximation of frequent users’ SES. First, we use the method in to find frequent users’ home station (the station nearest to their home). Then, we select the communities around the home stations (less than 2 km), to calculate the average housing price of the home station. SES is usually divided into 3 levels: high, middle and low. We divide frequent users into 3 levels based on the average housing price of their home station. There are 19.4% of users at high level (housing price 70000 CNY/m), 36.2% in middle level and 44.4% at low level. Iv Feature Engineering A user’s smart card records can be seen as a list of tuples . and denote the subway station and the time of the -th record. denotes whether the user is getting aboard and off at -th record. Given users’ smart card records, we aim to estimate users’ SES levels. The overall research design is shown in Fig. 4. One of the key challenges is feature engineering. We mainly utilize two types of features in this paper: general statistical features and temporal-sequential feature. General features (shorten form of general statistical features) usually consider the statistical features of a user’s whole mobility data. They have been discussed by previous works like [8, 29, 10]. However, previous papers largely neglect the temporal and function information related to each station, which will be discussed in following section. Iv-B General Feature Iv-B1 , Radius of Gyration is defined as follows: Here, â denotes the location (latitude and longitude coordinates) of . denotes the geographic center of all . is the geographic distance between two locations. A large value of indicates the user mobilize in a large area. Iv-B2 , K-Radius of Gyration Let be a counting function, which is equal to the number of in a user’s whole mobility record. A large value of means the user often visit the subway station . is a radius of gyration calculated using only top visited stations. proposed it to measure how a user’s top stations determine his/her radius of gyration. is defined as: The aim of is to find out returners and explorers. suggested that, k-returners are those whose and k-explorers are those for whom . We can simply think that k-returners are those who tend to spend most of the time between k the most important locations, while k-explorers are those whose activity space cannot be well described by only top locations. And in this paper, we set . In this way, 2-returners are likely to be a common commuter between home and working place. Iv-B3 , Number of Different Stations is defined as follows: measures the total number of different stations visited by a user during all 16 days. A larger value of means that the users tend to visit more different subway stations. Iv-B4 , Activity Entropy Given a vector , where and . denotes the proportion of visiting numbers of station , the activity entropy is calculated as: A large value of means that the spatial diversity of a user’s daily activities is high. Iv-B5 , Travel Diversity Travel diversity measures the regularity of a user’s movements among his/her subway stations. We define an origin-destination trips as a trip between two consecutive stations. Let E denote all the possible origin-destination pairs (without considering direction) extracted from , all stations a user visit. Then the travel diversity is defined as: where is the probability of observing a trip between the -th origin-destination pair. A large value of means that a user’s tend to travel between quite different origin stations and destination stations. Iv-C Sequence Feature People may tend to follow regular and stable patterns during their everyday lives. And people in different SES-level may visit different places and have different commute schedules. For example, cleaners usually need to go to company earlier while IT engineers may have to work at company until very late at night. Here we use sequence feature (shorten form of temporal-sequential feature) to describe these phenomenons. We divide all 16 days into 1536 (16x24x4) time bins by every 15 minutes. For each time bins, we need to find the location where a user stay, and calculate a feature vector based on the location. Given that a user’s sequence feature is , where and denote the feature vectors of location at the -th time bins. consists of three kinds of features: the ID of time bins (, from 0 to 1535), function of station for most citizens (,) and function of station for current user (,). To find the location where a user stay, first we take the stations as the location of the corresponding time bins. For example, if during the first time bins, a user get aboard on station A, then we take station A as the user’s location of the first time bins. Then for time bins which there is no corresponding station, we use following method to find their approximate locations: 1⃝ Among the time bins with a station location, find out those when the user is getting aboard and the others when the user is getting off, based on . The former time bins are denoted as . The latter time bins are denoted as . 2⃝ If a series of time bins are between two consecutive stations, and ( the first for getting off and the second for getting aboard), the locations of the first half time bins are the station of while the second half are the station of . 3⃝ If a series of time bins are between two consecutive stations, and ( the first for getting aboard and the second for getting off), we do not need to find their locations. The detail of how to calculate the feature vectors for these time bins will be discussed in following sections. 4⃝ For the time bins before , the locations are the station of . 5⃝ For the time bins after last getting off station (i.e, ), the locations are the station of . Iv-C1 , Function of station for Most citizens The step of urbanization leads to different functional regions in a city, e.g., residential areas, business districts, and entertainment areas . People show in the different functional areas may have different social attributes. For example, housewives may mainly stay inside residential areas while regular office worker may travel between the residential area and business districts during the weekday. And different kinds of people may spend different time in some special functional regions. For example, a rich family may spend more time in entertainment areas during the weekend than an ordinary family. Here we use two features called to describe this phenomenon. Here we explain how to determine the function for each subway station. There are different functional regions in one city, supporting different needs of people’s urban lives. And similarly, each subway station also has a different function. People tend to use the subway station which is nearest to their starting location and ending location. For example, if a subway station is inside a residential area, then most people using this subway should be the people who live near this station. During the weekday, most users of this subway station would get into the subway in the morning to go to work and get out of the station in the evening to go back home. On the other hand, if a subway station is inside a work area, surrounded by a lot of companies, then most people using this subway should be the people who work near this station. During the weekday, most users of this subway station would get out the subway in the morning to go to work and get into the station in the evening to go back home. So the function of one subway station is actually the function of the area near it. In this paper, we use the same method in to divide all Shanghai subway station into 3 kinds: residential, entertainment and work. This method needs to consider the human mobility and poi data of each station. The distribution of function stations is shown in Fig. 5. The blue points represent residential stations, the red points represent entertainment stations and the yellow points represent work stations. For most , is “residential”, “entertainment” or “working”. However, if is between two consecutive stations, and ( the first for getting aboard and the second for getting off), is “transfer”. It means the user is traveling from one function area to another function area. Iv-C2 , Function of station for current User For some users, the function of a specific station may be different from most users. For example, someone may work in a supermarket in a living area. Though for most people, the station is a “residential” station. However, for this person, the station is more like a “working” station. In this paper, we use the same method in to divide a user’s stations into 3 kinds: “home”, “work” and “others”. For most , is “home”, “work” or “others”. However, if is between two consecutive stations, and ( the first for getting aboard and the second for getting off), is “transfer”. V S2S Model The goal of the proposed model is to estimate a user’s SES level, denoted as , where is the id of a smart card user. Fig. 6 shows the architecture of the proposed model, which is comprised of two major components. The sequential component processes sequence features and outputs . The general component processes general feature and outputs . and are fused and fed into the softmax layer to estimate the SES level of input user. V-a Sequential Component People of different SES level may have different lifestyles, like visiting different places and having different commute schedules. We need to capture the temporal dependence of people’s mobility. The recurrent neural network (RNN) is an artificial neural network which is widely used for capturing the temporal dependency in sequential learning, such as the natural language processing and speech recognition . When processing the current time step in the sequence, it updates its memory (also called hidden state) according to the current input and the previous hidden state. The output of the recurrent neural network is the hidden state sequence at all the time steps in the sequence. The sequential feature we design considered the transition of different function stations, which can be effectively handled by RNN. Sequential component is composed of an embedding layer, a single RNN layer, and two fully-connected layers, as shown in Fig. 6. In this paper, we denote the feature at time bin as . In our experiments, RNN performs not so well in processing the long time bins due to vanishing gradient and exploding gradient problems. Therefore, instead of the RNN layer, we adopt the Long Short-Term Memory (LSTM) layers. In short, LSTM adds an input gate and a forget gate to alleviate the gradient vanishing/exploding problem. is fed into an embedding layer first. Because , and are both categorical values which can not feed to RNN layer directly . The embedding layer transform , and into three low-dimensional real vectors (, and ), respectively. The , and are concatenated to get . is fed into the LSTM layer, which output a hidden state . We concatenate all of the hidden state fragment as . Then is fed into the fully-connected layer as: is then fed through the fully-connected layer to output the , defined as: where and are the learnable parameter matrices used in the fully-connected layers. V-B The Structure of General Component Besides the sequence feature, the general mobility feature may also reflect a part of lifestyles. We discussed these features and their possible relationship with SES-level in Section IV. We stack two fully-connected layers to model the general factors that affect SES. . The first layer processes the feature vector and outputs a hidden state : Then is fed into the second layer and get the output of the general Component : where and are the learnable parameter matrices used in the fully-connected layers. V-C Fusion and Training We here combine the output of the two components as shown in Fig. 6. The fusion layer assigns the weights to two components. Finally, the softmax layer estimates the SES level of a user denoted by . is defined as: where o is element-wise multiplication, and are the learnable parameters that adjust the contribution of sequence and general features to . The model can be trained by minimizing the cross-entropy between the ground truth and the estimated SES level : where are all learnable parameters of S2S model and means the user number for training. We first construct the training dataset from a part of users’ actual SES level and corresponding features. Then, S2S model is trained via back-propagation and Adam . The details of datasets and ground truth are already introduced in Section III. Finally, We picked 729,859 users who take the subway for at least 7 days (during 16 days). These users are divided into 3 SES levels: high, middle and low. 80% of picked users are for training and 20% for testing. The results are mainly measured by classification precision, recall, and F1-score. To the best of our knowledge, there exists no model directly estimating SES from users’ SCD. We use the following baselines to test the effectiveness of our model: 1⃝ Random Guess just randomly classifies the user to an SES label. 2⃝ STL. This method predicts twitter users’ demographics based on their online check-ins . Online check-ins are another kind of mobility data. They are uploaded to online social networks by people to show where and when they are. STL organizes usersâ check-ins into a three-way tensor representing features based on spatial, temporal and location information (e.g, location category, keywords, and reviews of a POI). Then a support vector machine (SVM) is trained for estimate usersâ demographics (e.g., gender, blood type). We treat station records as users’ check-ins when using STL. However, we have to omit some location information like reviews. Because subway stations just do not these kinds of data. 3⃝ Gradient boosting decision tree (GBDT). The gradient boosting model is famous for its outstanding performance and efficiency for estimation. The LightGBM is an open source gradient boosting library . It has been widely adopted in many data mining competitions like Kaggle. We use sequence feature and general feature to train LightGBM model. Besides the above baselines, Sequence model (S2S-S model) and General model (S2S-G model) are also tested to find out the most effective feature categories. S2S-S model only uses sequential features with sequential component. S2S-G model only uses general features with general component. We refer our method which involves both sequence and general feature as S2S-SG. Parameter Setting. The main parameters of our experiment are as follows: In the embedding layer, we embed to , to and to In the general component, the neuron number of two fully-connected layers are both 24 neurons. In the sequential component, the size of the hidden vector is 64. In the fusion component, the size of the hidden vector is 24. The learning rate of Adam is 0.001 and the batch size during training is 12000. Our model is implemented with Keras. We train our model on a 64-bit server with 12 CPU cores, 64GB RAM and NVIDIA 1080Ti GPU with 12G VRAM. Vi-B Performance Comparison Table 2 shows the performance of baselines and S2S, and note the averages of 3 classes are used as the main comparison metric. From the result, we can see that all the metrics of S2S-SG performs better than all baselines, achieving 69% in precision, 67% in recall and 68% in F1-score. Table 3 shows the performance of S2S-SG in each SES class. As shown in Table 2, STL is clearly better than Random Guess while less accurate than LightGBM. The reason why STL does not perform well on smart card dataset might be caused by two reasons. First, STL did not design features or methods specifically for SES estimation. Also, the subway station does not have one of the important information which STL relies on, i.e., peopleâ reviews and keywords. Reviews and keywords of locations may also contain useful information about SES. However, unlike restaurants in STL, subway station did not have similar review information. LightGBM is better than STL, showing the proposed features are more suitable to estimate SES based on SCD. Lightgbm underperforms S2S-GS, likely due to the fact Lightgbm underperforms LSTM on understanding long sequential features. We can also see that S2S-SG outperforms the other S2S models. S2S-S is clearly better than S2S-G, demonstrating the value of sequential features. And the performance of S2S-S is even better than LightGBM with full features. There may be two reasons why general statistical features are not so useful as sequential features. First, the dataset covers only 16 days. The cellphone datasets which previous works studied usually last for months. So the general feature here may be not suitable for short time. Second, general features are not good at capturing some subtle differences in people’s lifestyles. For example, some high SES-level people like to go for entertainment instead of going back home after work, while some low SES-level people also visit such an area for part-time work. It is hard to distinguish them based on general features because they may all have a larger mobility area than others, like home-work commuters. However, sequential features can help in these scenarios, e.g., checking whether one goes to a station for work or for entertainment, or checking whether one is going to an entertainment area during usual working time (e.g, 9am-5pm every workday) or after work (e.g, after 8 pm). Also, people who go to entertainment areas during work time are more likely to be a service staff than a consumer. We also manually check some error estimations. We find out that many users in high SES-Level are mislabeled as middle SES-level. This may be because most frequent SCD users are not so “rich”. Actually, most subway-frequent users are middle and low-income levels among the city’s population, so their difference may not so clear. Besides, we just differ high SES-level or middle SES-level people based on their housing price (70,000 CNY/m). However, there is a large group of users who are around the 70,000 CNY/m. We checked their home stations. Many middle and high price-level home stations are quite near to each other. So the difference of mobility feature between them is also not so clear. It means we still need to improve the features in our future work. This paper examines whether people’s SES can be estimated only based on their smart card mobility data. We take the Shanghai smart card data as a case study. Because individual-level income information is hard to get for millions of people, we hypothesize that people’s income level is related to the house-price level of their home. In this way, we get the SES label of about 700 thousand users who frequently take subways. Mobility features and a DNN model, S2S, are proposed to estimate their SES-level. In the end, experiments show that these SCD-based features can be used to estimate the SES level (much better than random guess), wherein the sequential features are clearly better than traditional general features. This method can be used to quickly give a rough individual-level SES estimation for millions of people, when companies or researchers can only get people’s mobility data. This paper is the first try to estimate SES from SCD, validating the predictive power of SCD-based mobility data on SES. There are still problems we need to solve. For example, because we use the house price of people’s living area as the ground truth. We cannot leverage some important features (e.g., favorite locations and housing price level of their working area) in estimating SES. We plan to conduct a detailed SES survey of reasonable scale to build a more precise model between SES and mobility as future work. The research work was partly funded by the European Unionâs Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 824019, National Natural Science Foundation of China (No. 61802140) and Hubei Provincial Natural Science Foundation (No. 2018CFB200). Also, we would like to acknowledge Xu Wang for her support in the data access and discussions on mobility patterns, and Jar-Der Luo for his insights on socioeconomic status. - R. H. Bradley and R. F. Corwyn, “Socioeconomic status and child development,” Annual review of psychology, vol. 53, no. 1, pp. 371–399, 2002. - S. R. Sirin, “Socioeconomic status and academic achievement: A meta-analytic review of research,” Review of educational research, vol. 75, no. 3, pp. 417–453, 2005. - T. S. Szopiński, “Factors affecting the adoption of online banking in poland,” Journal of Business Research, vol. 69, no. 11, pp. 4763–4768, 2016. - D. Chen, D. Jin, T.-T. Goh, N. Li, and L. Wei, “Context-awareness based personalized recommendation of anti-hypertension drugs,” Journal of medical systems, vol. 40, no. 9, p. 202, 2016. - L.-p. Hung, “A personalized recommendation system based on product taxonomy for one-to-one marketing online,” Expert systems with applications, vol. 29, no. 2, pp. 383–392, 2005. - Y. Wu, N. Carnt, and F. Stapleton, “Contact lens user profile, attitudes and level of compliance to lens care,” Contact Lens and Anterior Eye, vol. 33, no. 4, pp. 183–188, 2010. - J. Blumenstock, G. Cadamuro, and R. On, “Predicting poverty and wealth from mobile phone metadata,” Science, vol. 350, no. 6264, pp. 1073–1076, 2015. - V. Soto, V. Frias-Martinez, J. Virseda, and E. Frias-Martinez, “Prediction of socioeconomic levels using cell phone records,” in International Conference on User Modeling, Adaptation, and Personalization. Springer, 2011, pp. 377–388. - A. Almaatouq, F. Prieto-Castrillo, and A. Pentland, “Mobile communication signatures of unemployment,” in International conference on social informatics. Springer, 2016, pp. 407–418. - Y. Xu, A. Belyi, I. Bojic, and C. Ratti, “Human mobility and socioeconomic status: Analysis of singapore and boston,” Computers, Environment and Urban Systems, 2018. - D. Preoţiuc-Pietro, V. Lampos, and N. Aletras, “An analysis of the user occupational class through twitter content,” in Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), vol. 1, 2015, pp. 1754–1764. - D. Preoţiuc-Pietro, S. Volkova, V. Lampos, Y. Bachrach, and N. Aletras, “Studying user income through language, behaviour and affect in social media,” PloS one, vol. 10, no. 9, p. e0138717, 2015. - V. Lampos, N. Aletras, J. K. Geyti, B. Zou, and I. J. Cox, “Inferring the socioeconomic status of social media users based on behaviour and language,” in European Conference on Information Retrieval. Springer, 2016, pp. 689–695. - M. Bagchi and P. R. White, “The potential of public transport smart card data,” Transport Policy, vol. 12, no. 5, pp. 464–474, 2005. - K. Mohamed, E. Côme, L. Oukhellou, and M. Verleysen, “Clustering smart card data for urban mobility analysis,” IEEE Transactions on Intelligent Transportation Systems, vol. 18, no. 3, pp. 712–728, 2017. - Y. Zhong, N. J. Yuan, W. Zhong, F. Zhang, and X. Xie, “You are where you go: Inferring demographic attributes from location check-ins,” in Proceedings of the eighth ACM international conference on web search and data mining. ACM, 2015, pp. 295–304. - G. Antipov, S.-A. Berrani, and J.-L. Dugelay, “Minimalistic cnn-based ensemble model for gender prediction from face images,” Pattern recognition letters, vol. 70, pp. 59–65, 2016. - V. Frias-Martinez and J. Virseda, “Cell phone analytics: Scaling human behavior studies into the millions,” Information Technologies & International Development, vol. 9, no. 2, pp. pp–35, 2013. - Q. Huang and D. W. Wong, “Activity patterns, socioeconomic status and urban spatial structure: what can social media data tell us?” International Journal of Geographical Information Science, vol. 30, no. 9, pp. 1873–1898, 2016. - G. Goulet-Langlois, H. N. Koutsopoulos, and J. Zhao, “Inferring patterns in the multi-week activity sequences of public transport users,” Transportation Research Part C: Emerging Technologies, vol. 64, pp. 1–16, 2016. - C. Smith-Clarke and L. Capra, “Beyond the baseline: Establishing the value in mobile phone based poverty estimates,” in Proceedings of the 25th international conference on world wide web, 2016, pp. 425–434. - D. Filmer and L. H. Pritchett, “Estimating wealth effects without expenditure data or tears: an application to educational enrollments in states of india,” Demography, vol. 38, no. 1, pp. 115–132, 2001. - H. Huang, B. Zhao, H. Zhao, Z. Zhuang, Z. Wang, X. Yao, X. Wang, H. Jin, and X. Fu, “A cross-platform consumer behavior analysis of large-scale mobile shopping data,” in Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 2018, pp. 1785–1794. - M. N. Harris, M. C. Lundien, D. M. Finnie, A. R. Williams, T. J. Beebe, J. A. Sloan, B. P. Yawn, and Y. J. Juhn, “Application of a novel socioeconomic measure using individual housing data in asthma research: an exploratory study,” NPJ primary care respiratory medicine, vol. 24, p. 14018, 2014. - Y. J. Juhn, T. J. Beebe, D. M. Finnie, J. Sloan, P. H. Wheeler, B. Yawn, and A. R. Williams, “Development and initial testing of a new socioeconomic status measure based on housing data,” Journal of Urban Health, vol. 88, no. 5, pp. 933–944, 2011. - H. Ghawi, C. S. Crowson, J. Rand-Weaver, E. Krusemark, S. E. Gabriel, and Y. J. Juhn, “A novel measure of socioeconomic status using individual housing data to assess the association of ses with rheumatoid arthritis and its mortality: a population-based case–control study,” BMJ open, vol. 5, no. 4, p. e006469, 2015. - N. T. Coffee, T. Lockwood, G. Hugo, C. Paquet, N. J. Howard, and M. Daniel, “Relative residential property value as a socio-economic status indicator for health research,” International journal of health geographics, vol. 12, no. 1, p. 22, 2013. - J. Zhou, E. Murphy, and Y. Long, “Commuting efficiency in the beijing metropolitan area: An exploration combining smartcard and travel survey data,” Journal of Transport Geography, vol. 41, pp. 175–183, 2014. - L. Pappalardo, D. Pedreschi, Z. Smoreda, and F. Giannotti, “Using big data to study the link between human mobility and socio-economic development,” in Big Data (Big Data), 2015 IEEE International Conference on. IEEE, 2015, pp. 871–878. - N. J. Yuan, Y. Zheng, X. Xie, Y. Wang, K. Zheng, and H. Xiong, “Discovering urban functional zones using latent activity trajectories,” IEEE Transactions on Knowledge and Data Engineering, vol. 27, no. 3, pp. 712–725, 2015. - K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in Proceedings of the IEEE conference on computer vision and pattern recognition, 2016, pp. 770–778. - F. A. Gers, J. Schmidhuber, and F. Cummins, “Learning to forget: Continual prediction with lstm,” 1999. - Y. Gal and Z. Ghahramani, “A theoretically grounded application of dropout in recurrent neural networks,” in Advances in neural information processing systems, 2016, pp. 1019–1027. - D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv preprint arXiv:1412.6980, 2014. - G. Ke, Q. Meng, T. Finley, T. Wang, W. Chen, W. Ma, Q. Ye, and T.-Y. Liu, “Lightgbm: A highly efficient gradient boosting decision tree,” in Advances in Neural Information Processing Systems, 2017, pp. 3146–3154.
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540565544.86/warc/CC-MAIN-20191216121204-20191216145204-00509.warc.gz
CC-MAIN-2019-51
46,167
145
https://discourse.libsdl.org/t/how-to-implement-a-ttf-font-file-in-c-source-code-or-binary-executable-file/9914
code
I would like to use also text in my current program, so I downloaded the TTF library. Unfortunality I must give a valid path for my font file, this is a problem, because there a good chances that the users will only copy the binary executable file, and not the specific path where is located the font file. I tried this: $ cat my_prog david.ttf > my_new_prog And then I toke the example program showfont: $ showfont -solid ./my_new_prog It didn’t worked, he doesn’t parse the file to find a right font. It’s not the best method, but if this worked, I could load my font by doing this: int main(int ac, char **av) But I think the best would be if it could be possible to load it directly from memory, and to convert the font ttf file to a C file, and to have a new interface as like: TTF_Font* TTF_OpenFont_raw( const unsigned char *raw_data, int ptsize ); Personnally I think programs are more portable if all files are together in one executable file, but I don’t know what the professionals would say about that. :-)– saf at trashmail.net Free disposable email addresses.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662515501.4/warc/CC-MAIN-20220517031843-20220517061843-00697.warc.gz
CC-MAIN-2022-21
1,083
19
https://chat.stackexchange.com/?tab=all&sort=active&page=1
code
a list of all rooms, recently active rooms first “The secret impresses no one. The trick you use it for is everything.” Discussions about VBA, the VBE, C#, COM interop and Rubberduck. Also, where Stack Overflow questions become Rubberduck inspection ideas. http://www.rubberduckvba.com | http://www.github.com/rubberduck-vba/Rubberduck Where diamonds are made, smoke is detected, and we break things by developing on production. 135k true positives and counting. For more information, see https://charcoal-se.org/about. Handy links: https://metasmoke.erwaysoftware.com, https://github.com/Charcoal-SE Associated with Math.SE; for both general discussion & math questions alike. Just ask; don't ask to ask. Rarely if ever expressible as a ratio of integers. Chat guidelines: http://tinyurl.com/hzl2955 | $\LaTeX$ in chat: http://tinyurl.com/cfqcvpc For CG&CC users to discuss and play video games and board games. Discord server: https://discord.gg/ghUBKBa General discussion about http://aviation.stackexchange.com/ The white zone is for immediate loading and unloading of passengers only. There is no stopping in the red zone. Discussions/answers to conjectures (may or may not be original), interesting maths problems in/out of MSE, and any random stuff Welcome to The Awkward Silence! Here, you can find the general discussion for https://interpersonal.stackexchange.com. Feel free to ask your questions about the site, or participate when people are practicing their smalltalk skills! HNQ Feed: https://chat.stackexchange.com/rooms/94066/hnq-feed The news discussion offshoot of the Bridge. Stars reserved for news items. Discussion should primarily be about news topics. Place to sail the open sea, and search out boats to take down. Trello board - https://trello.com/b/N4tGabC2/ask-ubuntu-abandoned-questions-cleanup Guide to handling Trello board - https://docs.google.com/document/d/1eG0qNJaXGR7krDllIITZNXUSXFdpFfFkwtdcAudvz1c/edit?usp=sharing Where English is occasionally spoken if you are a patient. Previously a testing chamber for @Smelly. Now HQ for @IPSCommentBot and @IPSMetaCommentBot (source code for both: https://github.com/thesecretmaster/ips-comment-bot). IPSBot Data Dumps: https://ipsbot.dvtk.me General on- and off-site discussion for http://dba.stackexchange.com. Jokes explained at great length (JEAGL) please. Chat de la comunidad de Stackoverflow en español. A veces charlamos sobre temas de programación, discutimos preguntas y/o respuestas, hablamos de tecnología y otras cosas. Habla, convive, conversa, responde, pregunta, pero siempre desde el respeto. https://es.stackoverflow.com General discussion for https://codegolf.stackexchange.com | Guidelines: https://ppcg.github.io/chatiquette/ Welcome to chat for http://literature.stackexchange.com/! -- Read any good books lately? General discussion for https://hinduism.stackexchange.com
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027321351.87/warc/CC-MAIN-20190824172818-20190824194818-00137.warc.gz
CC-MAIN-2019-35
2,878
18
http://www.androidbulletin.info/5-best-cel-shaded-android-games/
code
Android Cel Shaded games. This video shows best cel shaded android games. These 5 Android games have best cel shading graphics. Even these games are offline. Cel shaded games are look like an oil painting or papers like texture (sketch) games. The cel-shading process starts with a typical 3D model. Where the shadows and highlights appear more like blocks of color. If you Like this video, please Subscribe my channel for more updates.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000414.26/warc/CC-MAIN-20190626174622-20190626200622-00367.warc.gz
CC-MAIN-2019-26
436
3
https://learn.microsoft.com/en-us/answers/questions/593589/sql-read-only-user-for-specific-tables.html
code
I am not a Microsoft SQL expert, but I need to solve this trouble. I must configure a MS SQL access as read only user. I understood that I must set "db_datareader* permission for this new read only user (guide: https://www.itsupportguides.com/knowledge-base/server-side-tips/sql-management-studio-how-to-create-read-only-users/) After that I would limit read only user database view. I would that this limited user can only see specific tables inside specific database (databases). Is it possible? How can I do it?
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337287.87/warc/CC-MAIN-20221002052710-20221002082710-00223.warc.gz
CC-MAIN-2022-40
514
7
https://www.ologyonlinecourses.com/podcasts/better-brain-better-you/episodes/2147762620
code
#100: A Brain Health Diet Plan What is the best diet plan to keep your brain young? Forget fad diets that are popular for losing weight, but have no scientific support for their use. You should try the MIND diet, designed by scientists to focus on food groups that can boost your brainpower and protect you from age-related problems like Alzheimer’s disease. I breakdown the foods to eat and avoid on the MIND diet and give you a free seven-day MIND diet meal plan. RESOURCES & LINKS: Connect with us on social:
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882573760.75/warc/CC-MAIN-20220819191655-20220819221655-00373.warc.gz
CC-MAIN-2022-33
513
7
https://www.allsides.com/news/2021-07-06-1409/pentagon-cancels-disputed-jedi-cloud-contract-microsoft
code
Pentagon cancels disputed JEDI cloud contract with Microsoft The Pentagon said Tuesday it canceled a disputed cloud-computing contract with Microsoft that could eventually have been worth $10 billion. It will instead pursue a deal with both Microsoft and Amazon and possibly other cloud service providers. “With the shifting technology environment, it has become clear that the JEDI Cloud contract, which has long been delayed, no longer meets the requirements to fill the DoD’s capability gaps,” the Pentagon said in a statement. The statement did not directly mention that the Pentagon faced extended legal challenges by Amazon to the original $1 million contract awarded to Microsoft. Amazon argued that the Microsoft award was tainted by politics, particularly then-President Donald Trump’s antagonism toward Amazon’s chief executive officer, Jeff Bezos. Bezos owns The Washington Post, a newspaper often criticized by Trump.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00424.warc.gz
CC-MAIN-2022-05
939
4
https://speaking.jmeiss.me/kE9zVF/mentorship-or-how-to-rebuild-civilisation-from-scratch
code
A presentation at Connect Tech in in Atlanta, GA, USA by Jeremy Meiss How really important are mentorship and documentation to the survival of civilization? What would happen if we lost everything and had to start over, say, due to a global pandemic? In this talk, we’ll explore my fascination with dystopian TV shows and seek to answer these questions, and come to some action steps to do our part through mentorship, documentation, and more. Here’s what was said about this presentation on Twitter.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644571.22/warc/CC-MAIN-20230528214404-20230529004404-00421.warc.gz
CC-MAIN-2023-23
504
3
https://lists.debian.org/debian-kde/2002/04/msg00291.html
code
compiling KDE3 - missing dcopidl I am trying to compile KDE3 from source. I have compiled and installed arts and kdelibs and am running into the following snap during configure for "checking for dcopidl... not found configure: error: The important program dcopidl was not found! Please check whether you installed KDE correctly." Chacking on the Debian packages search page I see this is in kdelibs-dev. I can see no kdelibs-dev amoung the KDE3 sources on ftp.kde.org. kdelibs from KDE2 wants to remove: libpng-dev libqt3-dev libqt3-mt-dev qt3-tools so I assume this would be the wrong route as well. I compiled and installed KDE3 on Slackware from CVS sucesssfully a few months back and did not run into this (though there were other problems of course !) I'm sure I must be missing something obvious- any suggestions ? all the best, Running "Testing" (Sid) on this machine To UNSUBSCRIBE, email to firstname.lastname@example.org with a subject of "unsubscribe". Trouble? Contact email@example.com
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816138.91/warc/CC-MAIN-20180225051024-20180225071024-00724.warc.gz
CC-MAIN-2018-09
998
18
https://compostcarpool.com/contact-compost-carpool/
code
Thoughts? Questions? Ideas? We would love to hear from you! Phone: 469-403-5221 | Email: email@example.com [group group-294 clear_on_hide inline][/group] Select OneI am a school, restaurant, or business interested in pickupI am a school, restaurant or business interested in dropoffI am outside the current service area, but would like to start a carpoolI am interested in hosting a community dropoffI am interested in becoming a farm partnerI have other questions or just want to say hi Would you like to sign up for our newsletter?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474569.64/warc/CC-MAIN-20240224212113-20240225002113-00557.warc.gz
CC-MAIN-2024-10
533
5
https://community.kde.org/User:Jstaniek/Calligra_Sprint_2011.2_presentation
code
< User:Jstaniek Contents 1 My Plans 2 Other Plans 3 Outline 3.1 Calligra Branding 3.2 Why Kexi? - introduction for Calligra Developers 3.3 Eating our dog food 4 BoF Topics 4.1 Integration idea: Shared Calligra Themes 4.2 Integration idea: Sharing Kexi's CSV import/export engine within Calligra 4.3 Integration idea: Kexi Forms for Calligra apps 4.4 Integration idea: KoReports-based mail merge 4.5 Reusing Kexi's Modern GUI in other Calligra apps 5 Blogs My Plans Why Kexi? - introduction for Calligra Developers Sharing Kexi's CSV import/export engine within Calligra Eating our dog food: use Kexi, Tables, Plan, etc. in our work Optional topic: Better separation between engine and UI Other Plans (from https://sprints.kde.org/sprint/43) Shreya: Improving UI and features of Kexi Web Element,fixing bugs, Multimedia in Kexi Dimitrios: Need for Interoperability between Calligra apps UI perspective from a non developer Promoting Calligra Plug-ins K.I.S.S. proposal Calligra and DTP (ideas) Kexi Documentation / Making documentation roadmaps Radek: bug hunting in kexi, futher maps plugin expand Outline Calligra Branding Since the day one Calligra is THE new quality! Contributors want recognition for THEIR work Contributors take responsibility for THEIR work Contributors don't block derivative works but not at cost of their REPUTATION So we have Logo vs Icon Copyright vs Trademark Trademark registration: KDE e.V. as Calligra is informal group Copyright to be discussed (CC-BY-SA or LGPL) Idea: Dual logos (one free, one protected) as Debian Avoiding Logo misuse: Clear Guidelines Usage in context of Calligra Suite: without permission Usage as source for derivatives: MUST BE out of context of Calligra Suite It's up to Calligra contributors to react on logo misuses and suggest right solution Why Kexi? - introduction for Calligra Developers Why db apps: Databases vs Spreadsheets The Kexi Project Started in 2002 with KOffice/Calligra from the day one Had full-time contributor in 2003-2007 First nontrivial KDE app on Windows Driving force of the KDE on Windows project Maintained with one consistent vision since 2003 Not a MS Access clone less tied to the file db specifics than MS Access GUI does not mimick MS Access but acknowledges advantages of desktop databases aimed at casual users and power users almost no database knowledge needed user discovers features while using Kexi not much aimed at developers default GUI not cluttered with developer-oriented features lower priorities for these features Why Kexi instead of server db + php + apache? offline mode by default and for free utilizing lightweight industry-standard SQLite 3 file db engine empty database is 9KB! More popular/standard than LibreOffice base format (because there is no ODF for databases) point-and-click technology good for education good for prototyping good for preparing quick & dirty "office tools" really smooth transition from file db to server db Kexi specifics Constant time startup! (no matter how big database is) Cannot edit databases that it did not create this may be removed in 3.x to some extent Best known for its ease of use, (partial) MS Access support and good CSV support Kexi is not document-driven The GUI has always been specific compared to document-driven apps Even when file db is used, data is saved automatically at record (row) level, not whole database level Kexi GUI is consisted of largely separated views (aspects) much like in current KDevelop or Qt Creator Large emphasis on concept of data model, data and views Customizable GUI also serves as building block for fully-featured user applications Document-driven apps' user content is the central frame Kexi's user content is the whole application main window, with menus, toolbars [picture] Modern GUI initiated in 2011 (not 1980s) pushes the differences further, addresses specific needs not met by KDE3 GUIs TODO: show app, search Still, Qt Style awarness is kept Why Kexi and not Qt Designer + Qt/C++ development no need for tons of glue code no need for knowledge of database internals/specifics zero compilation: timesaver, architecture-independent still, extension APIs blends well with Qt/KDE/C++ development prepared for good built-in scripting features (still experimental due to unstable API, not technology problems) Competition Kexi is really competitive compared to FOSS alternatives LibreOffice Base is plagued by bad tech decissions of SUN (dependency on Java, tried to be MS Access clone, GUI based on poor framework => usability problems), lack of contributors GNOME's Glom is PostgreSQL-only Zero alternatives in FOSS Qt/KDE world KNoda no longer maintained Rekall is abandoned Kexi Mobile N900 loaned to Adam Pigg in October 2010 (Thanks Suresh!) After two months of development: usable proof of concept - mobile database viewer with forms and reports (KEXI_MOBILE build option) No development of Qt Quick-based UI for now Such UI would be lightweight and very similar to Harmattan Office There are plans to start with Kexi Mobile Forms in Qt Quick Experience in this technology: Kexi developer Adam Pigg What's new in 2011 Successfull two GSoC projects: web elements (QtWebKit-based) and map elements (Marble-based) Both are good example of code reuse Both students (Radek, Shreya) becoming regular contributors! This is second Sprint for Radek Finally: actual and maintained documentation! Done by Dimitrios, with huge attention to details and knowledge Dimitrios becoming Kexi developer too! Status: three new developers in 2011! willing to take part in Calligra Academy Kexi-Calligra integration Plan: develop mail merge within Kexi as a service for use in other apps (especially Calligra apps) Plan: the same for data entry/import/export support Idea: the same for forms support Idea: experiment with reusing Kexi GUI in other Calligra app(s) Eating our dog food Eat Why? Sends clear message: this software is useful Testing by fellow contributors is valuable Generates usage scenarios and then requirements Brings ideas for improvements in terms of integration with other apps Helps avoid feature duplication If right tool picked, development process improves Team building Easier to understand and acknowledge differences between apps Helps identify specific competences among contributors Use Where? 3 aspects Reusing our features of one app in other apps (instead of reinventing) Target: Calligra developers/designers Using our apps in the development process Target: Any Calligra contributors Using our apps elsewhere, in activities not related to Calligra Target: Any Calligra contributors and advocates Eat What? Use Tables for tabular data Status: used for some ods files Action point: identify problems like usability Use Plan for project management Status: some contributors use it Action point: get best practices from them Use Kexi for relational data Already good for storing and simple queries Not yet good for analyzing Only simple relational features Status: not used, let's start! Idea: Make Kexi useful as GUI for KDE Bugzilla. Use bugzilla's webservices for this. Probably separate plugin could be developed. Online/offline operations with synchronization. (discussed with Inge who likes the idea) Use Kexi to maintain data for automatic changelogs (with server db) Action point: provide usage scenarios Example: CSV import/export Action point: provide server infrastructure for shared databases some of that public, some of that for contributors only Eat How? Provide "Best practices for own dog food consumers": "Keep separate setup of stable version of used Calligra apps: How and why?" Separation between development (broken) version and stable (used) version Minimal compilations for development (e.g. Krita-only) can be still used while having access to all needed Calligra apps Already practiced by contributors anyway: they tend to keep multiple build directories with stable/broken versions; now it can be extended "App user: Provide feedback to app developers in context of your use cases" "App developer: Provide updates to users in context of your new possible use cases" BoF Topics Integration idea: Shared Calligra Themes Blog: http://blogs.kde.org/node/4515 Theme is a set of styles defining every visual aspect of document Goal: enabling users to work with styles via themes Default theme could be selectable globally in Calligra When working on multiple documents in various formats (ODF, Kexi...) having common theme could contribute to more professional effect In some aspects theme can define GUI elements that are not well defined by QStyle/KStyle spreadsheet grid and cell styles (Calligra Tables) and database tables grid (Kexi) can have common look defined by a theme; TODO: [mockup] Current state MSOOXML format has notion of themes (default and user defined) references them instead of embedding styles but ODF does not define themes, these can be built on top of ODF Global support for themes can be used: in documents (odt, ods, odp) in reports of any types, especially KoReports in Kexi views, currently forms and tables TODO: [mockup of Kexi report and Words document] Examples: Themes in MSO 2010 Theme fonts in fonts combo (MSW 2010) Theme creator - example for MSO 2010 Integration idea: Sharing Kexi's CSV import/export engine within Calligra History of CSV import/export features in Calligra Originally developed for in Tables Re-used by Kexi 0.x, planned for re-integration into Tables, never happened So two copies do exists, Kexi's one is far more advanced Excuse: Kexi's fork still in development, many TODOs Specifics: Import of tabular data so main target applications are Tabes and Kexi secondary use cases: Words and Stage CSV export allows to use CSV as exchange format between apps when there is no better option In Kexi also used for clipboard handling of tabular data Integration issues Size of codebase: relatively small but quite complicated Database-awarness vs Spreadsheet awarness needs smart abstraction some layers to keep maximal efficiency needs abstraction for different GUIs Common code can dramatically improve clipboard support for tabular data Type detection TODO: EXAMPLE (Semi-)autodetection detection of structure TODO: EXAMPLE Integration idea: Kexi Forms for Calligra apps Scripted functionality in LibreOffice tend to be added via forms with buttons embedded in document exampe invoice form this is example of bad design, escaping from document paradigm to application paradigm instead creating flake tool-compatible side-panes could be enabled with a few mouse clicks GUI for them may be delivered with help of Kexi Form designer The effect: so the document is not cluttered, stays just document the scriped functinality is injected in application(s) instead of injecting into document Rationale: it is out of scope for Calligra to support VB/StarBasic and forms any partial support defined by ODF is not worth effort: compatibility with LibreOffice would have to be high <piggz> would be very awesome to have a sidebar in words as a plugin that allows ad-hoc buttons to be placed that have access to qtscript + the document internals <piggz> qtscript/kross backends <piggz> would make words almost as good as reports :) <piggz> maybe implement in tables first, as spreadsheets have more structure :) <piggz> and probably more useful there <piggz> that is likely something i could get my teeth into :) Integration idea: KoReports-based mail merge Two types of reports: the currently used by Kexi and ODF templates/generation Reusing Kexi's Modern GUI in other Calligra apps On demand API can be provided and extended Only proof-of-concept for now The Design Maxim: "Start with a stripped-down visual design and slowly add elements back in." "La perfection est atteinte, non pas lorsqu'il n'y a plus rien à ajouter, mais lorsqu'il n'y a plus rien à retirer. Perfection is attained, not when no more can be added, but when no more can be removed." —Antoine de Saint-Exupéry Blogs [done] Branding [done] Introduction to Kexi [done] Eating our dog food [done] Integration idea: Shared Calligra Themes [done] Integration idea: Reusing Kexi's Forms in office apps Integration plan: Add flake based reports to KoReports for mail merge, reuse it in other apps Integration idea: Reusing Kexi's Modern GUI in other Calligra apps Integration plan: Sharing Kexi's CSV import/export engine within Calligra apps Integration idea: OwnCloud/Google Cloud integration for Kexi storage Plan: User Feedback Module Initiating tasks for (QtScript-based) user scripting API in Kexi (mission, object model, guidelines) Retrieved from "https://community.kde.org/index.php?title=User:Jstaniek/Calligra_Sprint_2011.2_presentation&oldid=78664" This page was last edited on 13 October 2017, at 08:39. Content is available under Creative Commons License SA 4.0 unless otherwise noted.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347390758.21/warc/CC-MAIN-20200526112939-20200526142939-00179.warc.gz
CC-MAIN-2020-24
12,852
1
http://www.cmstatistics.org/RegistrationsV2/CMStatistics2017/viewSubmission.php?in=1561&token=3n7o21nsr936nr23n4s50q9s6256321n
code
Title: Exploit market microstructure noise in volatility forecasting Authors: Ye Zeng - CREATES, School of Business and Social Science, Aarhus University, Denmark (Denmark) [presenting] Abstract: In volatility forecasting, realized volatility as an estimator for the latent quadratic variation of asset price unavoidably begets the measurement error in variable problem. In a similar vein to the HARQ model which tackles the problem by attenuating realized volatility according to its asymptotic variance, another simple extension of the heterogenous autoregressive (HAR) model, named HARN, is proposed which exploits size of market microstructure noise to gauge reliability of realized volatility. Empirical analysis on datasets covering 29 stocks listed in NYSE show that realized volatility is always attenuated in response to the noisiness of data. Improved forecasting accuracy is also documented, both in-sample and out-of-sample, in comparison with results of the standard HAR model. In addition, empirical results show that the HARN model utilizing a simple estimator for the size of noise computed with data sampled every 5 minutes outperforms, or at least is on par with, models incorporating more sophisticated estimators. Thus, by augmenting HAR model with an extra simple noisiness measure, we obtain a parsimonious extension which improves volatility forecasting.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00600.warc.gz
CC-MAIN-2021-04
1,377
3
https://www.producthunt.com/@janklimo
code
The safest, fastest way to buy and trade cryptocurrency. Simple audio recorder that lives in your macOS menu bar A pool party in your pocket A Shopify app to increase your store's conversion rates Create awesome timelapses of your Mac screen Super simple invoicing for freelancers Daily curated showcase of winning ad pages Collect feedback & make your customers happy. Browse over 500+ icebreaker questions for any occasion. Guides, principles, strategies and tactics for Jr to Sr devs Streamlined issue tracking for software teams Personal knowledge management and sharing on VSCode & GitHub Copy and Paste reality with AR + ML. The simplest productivity system Increase the resolution of any image, instantly. A fun diverse library of 3D avatars for your design mockups. Your one-stop shop for web scraping and web RPA solutions 31 days of free gifts for freelancers Your Shopify store on your Mac menubar. Banking on Crypto — Borrow. Earn. Pay. Invest.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400187390.18/warc/CC-MAIN-20200918092913-20200918122913-00498.warc.gz
CC-MAIN-2020-40
958
20
http://www.neeleshgokhale.com/mywiki/WordIndex
code
This is an index of all words occuring in page titles. TitleIndex -- a shorter index SiteNavigation -- other indexing schemes A | B | G | H | I | L | M | P | Q | S | T | W Include system pages WordIndex (last modified 2009-03-20 02:02:15) DeleteCache (cached 2019-02-20 10:16:40) Or try one of these actions: Attach File, Despam, Like Pages, Local Site Map, My Pages, Package Pages, Render As Docbook, Spell Check, Subscribe User
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247499009.48/warc/CC-MAIN-20190221031117-20190221053117-00496.warc.gz
CC-MAIN-2019-09
429
8
https://www.gigasoft.com/netchart/peonlref/GetExtraAxisTX_OCX_Method.htm
code
*Within client side VBScript within a web page, use GetExtraAxisTXEx Scope Scientific Graph Interfaces GetExtraAxisTX(pfMin As Single, pfMax As Single, pszLabel As String, pfManualLine As Single, pfManualTick As Single, pszFormat As String, pnShowAxis As Integer, pnShowTick As Integer, pbInverted As Integer, pbLog As Integer, pdwColor As OLE_COLOR) ...see SetExtraAxisTX for definitions for the above parameters. This method is used to manage extra top x axes. These axes are not directly related to data and are purely visual items. If you want to plot data with respect to these extra scales, then the data will need to be normalized with respect to the first top x axis scale. Before calling this method, set WorkingAxis to the axis index (0 through 4) for the appropriate axis if using more than one extra top x axis. See example 130 within demo for more information. See Also: SetExtraAxisTX, SetExtraAxisX, OCX Methods, PEP_structEXTRAAXISTX. Visual Basic 6 Example ©2022 Gigasoft, Inc. | All rights reserved. Gigasoft is a registered trademark, and ProEssentials a trademark of Gigasoft, Inc.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948817.15/warc/CC-MAIN-20230328073515-20230328103515-00602.warc.gz
CC-MAIN-2023-14
1,102
10
https://www.geekscorner.info/post/how-to-add-a-repo-in-kodi
code
I’m often asked about Kodi and how to add repo’s so I have decided to write a quick how to guide. For this tutorial I have decided use mettle kettle and UK Turk as my examples. Open Kodi / XBMC and Select SYSTEM File Manager Select add source and Select none Type in the repo EXACTLY as displayed in this case it is http://kodi.metalkettle.co Select enter a name for this media source and type in whichever name you want to use in this case Mettlekettle then Select Done then ok Go back to your home screen Go to SYSTEM > Settings Select Add on’s Select install from Zip File and select the repo you installed in this case Mettlekettle and select repository Mettlekettle. Wait for the add on enabled confirmation and Select Get Add-ons and then Select the repo in this case Mettlekettle repo add on Select video add on and click on the ones you want to add on (in this case We will use UK Turk) Select UK Turk Live Stream and Select Install Wait for Add-On Enabled Notification, UK Turk Live Stream Add-on is now installed. As usual the add-on now can be accessed via VIDEOS > Add-Ons Select UK Turk Live Stream from your home screen.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358180.42/warc/CC-MAIN-20211127103444-20211127133444-00484.warc.gz
CC-MAIN-2021-49
1,140
16
https://astronomy.stackexchange.com/questions/18110/in-seti-has-anyone-calculated-an-estimate-of-the-mean-time-between-observations
code
So the Drake Equation, proposed by Frank Drake, is often cited as a means to estimate the number of intelligent civilizations that exist in the observable universe. Initial calculations by Drake came up with a number of about 10,000 civilizations. But refinements in the equation on how to interpret, and updated data, have recently led to this evaluation published in Astrobiology Magazine: "There is a 75% chance we could find ET between 1,361 and 3,979 light-years away" But what I'm more interested in has to do with our technology and efforts in detecting the first civilization in terms of estimating a mean time between observations. Surely given the result of the Drake equation, our current efforts, and possibly other factors to consider such as sampling theory there must be a way to calculate this. Has anyone pursued this? If so what are the results? hours?, days? years? centuries? Such a calculation I believe would justify more concerted efforts and investment or otherwise abandonment of any hope. Or perhaps a different strategy to mitigate the assumptions on the calculation to reduce the mean time.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363520.30/warc/CC-MAIN-20211208175210-20211208205210-00217.warc.gz
CC-MAIN-2021-49
1,118
5
http://sap.sys-con.com/node/2441807
code
|By Marketwired .|| |November 12, 2012 04:30 PM EST|| MADRID, SPAIN -- (Marketwire) -- 11/12/12 -- HP (NYSE: HPQ) today announced the availability of its new SAP-qualified implementation services for SAP® Rapid Deployment solutions, which are designed to help organizations accelerate return on investment and lower the IT risks related to implementing and running SAP applications. HP's new implementation services for SAP Rapid Deployment solutions combine software and best practices with HP consulting services to help clients achieve benefits quickly and affordably while reducing risks. These preconfigured solutions identify the costs and scope of SAP application implementations up front, which leads to predictable results. Additionally, HP can help clients customize the SAP Rapid Deployment solutions to address future business requirements. When customers are ready for the production environment of SAP Rapid Deployment solutions, HP offers a hybrid delivery model to support the client's choice on where to run the applications. Clients can run the applications in a traditional IT environment, in a private cloud on the client's premises, or in one of HP's secure commercial data centers around the globe. "Enterprises need to address their most pressing business needs -- such as analytics, mobility, supply chain and customer relationship management -- while laying the foundation for future expansion," said Jules Beck, vice president, Global Enterprise Applications and Mobility Services, HP Enterprise Services. "With deep expertise in various delivery models and SAP applications, HP helps clients to stay competitive, improve customer retention, meet regulatory requirements and improve business processes." HP is a leading global service provider offering SAP-qualified implementation services for the following SAP Rapid Deployment solutions: - SAP HANA® Profitability Analysis rapid-deployment solution; - SAP Extended Warehouse Management rapid-deployment solution; - Rapid database migration of SAP NetWeaver® Business Warehouse (SAP NetWeaver BW) to SAP HANA; and - Rapid database migration to SAP Sybase® Adaptive Server® Enterprise (SAP Sybase ASE). The coupling of SAP Rapid Deployment solutions with HP fast start services for the SAP HANA platform offers clients a unique opportunity to deliver business value in weeks, rather than months. HP also can provide the preconfigured hardware for SAP HANA, which further accelerates time to value. "Faster time to value has become a critical factor in IT budget spending decisions," said Steven Birdsall, senior vice president and general manager, SAP Rapid Deployment Solutions. "Using HP's implementation services for SAP Rapid Deployment solutions will allow clients to quickly address their most urgent business processes utilizing SAP best practices, templates and tools." HP offers end-to-end consulting and system integration services to help clients further capture the full business advantage of their SAP applications. The HP implementation services for SAP Rapid Deployment solutions extend HP's capabilities, resources and expertise for SAP application development and delivery. HP is an SAP-certified global provider of cloud services, application management services, Run SAP implementations and hosting services, providing high-quality standards for delivering of SAP applications. HP's premier Europe, Middle East and Africa client event, HP Discover, takes place Dec. 4-6 in Frankfurt, Germany. HP creates new possibilities for technology to have a meaningful impact on people, businesses, governments and society. The world's largest technology company, HP brings together a portfolio that spans printing, personal computing, software, services and IT infrastructure to solve customer problems. More information about HP is available at http://www.hp.com. SAP, SAP HANA, SAP NetWeaver and all SAP logos are trademarks or registered trademarks of SAP AG in Germany and in several other countries. Sybase and Afaria are trademarks or registered trademarks of Sybase, Inc. or its subsidiaries/affiliates. ® indicates registration in the United States. Sybase is an SAP company. All other product and service names mentioned are the trademarks of their respective companies. Any statements contained in this document that are not historical facts are forward-looking statements as defined in the U.S. Private Securities Litigation Reform Act of 1995. Words such as "anticipate," "believe," "estimate," "expect," "forecast," "intend," "may," "plan," "project," "predict," "should" and "will" and similar expressions as they relate to SAP are intended to identify such forward-looking statements. SAP undertakes no obligation to publicly update or revise any forward-looking statements. All forward-looking statements are subject to various risks and uncertainties that could cause actual results to differ materially from expectations The factors that could affect SAP's future financial results are discussed more fully in SAP's filings with the U.S. Securities and Exchange Commission ("SEC"), including SAP's most recent Annual Report on Form 20-F filed with the SEC. Readers are cautioned not to place undue reliance on these forward-looking statements, which speak only as of their dates. This news release contains forward-looking statements that involve risks, uncertainties and assumptions. If such risks or uncertainties materialize or such assumptions prove incorrect, the results of HP and its consolidated subsidiaries could differ materially from those expressed or implied by such forward-looking statements and assumptions. All statements other than statements of historical fact are statements that could be deemed forward-looking statements, including but not limited to statements of the plans, strategies and objectives of management for future operations; any statements concerning expected development, performance, market share or competitive performance relating to products and services; any statements regarding anticipated operational and financial results; any statements of expectation or belief; and any statements of assumptions underlying any of the foregoing. Risks, uncertainties and assumptions include macroeconomic and geopolitical trends and events; the competitive pressures faced by HP's businesses; the development and transition of new products and services (and the enhancement of existing products and services) to meet customer needs and respond to emerging technological trends; the execution and performance of contracts by HP and its customers, suppliers and partners; the protection of HP's intellectual property assets, including intellectual property licensed from third parties; integration and other risks associated with business combination and investment transactions; the hiring and retention of key employees; assumptions related to pension and other post-retirement costs and retirement programs; the execution, timing and results of restructuring plans, including estimates and assumptions related to the cost and the anticipated benefits of implementing those plans; expectations and assumptions relating to the execution and timing of cost reduction programs and restructuring and integration plans; the resolution of pending investigations, claims and disputes; and other risks that are described in HP's Quarterly Report on Form 10-Q for the fiscal quarter ended July 31, 2012 and HP's other filings with the Securities and Exchange Commission, including HP's Annual Report on Form 10-K for the fiscal year ended October 31, 2011. HP assumes no obligation and does not intend to update these forward-looking statements. © 2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein. The true value of the Internet of Things (IoT) lies not just in the data, but through the services that protect the data, perform the analysis and present findings in a usable way. With many IoT elements rooted in traditional IT components, Big Data and IoT isn’t just a play for enterprise. In fact, the IoT presents SMBs with the prospect of launching entirely new activities and exploring innovative areas. CompTIA research identifies several areas where IoT is expected to have the greatest impact. May. 29, 2015 09:00 PM EDT Reads: 5,518 SYS-CON Events announced today that BMC will exhibit at SYS-CON's 16th International Cloud Expo®, which will take place on June 9-11, 2015, at the Javits Center in New York City, NY. BMC delivers software solutions that help IT transform digital enterprises for the ultimate competitive business advantage. BMC has worked with thousands of leading companies to create and deliver powerful IT management services. From mainframe to cloud to mobile, BMC pairs high-speed digital innovation with robust IT industrialization – allowing customers to provide amazing user experiences with optimized IT per... May. 29, 2015 06:15 PM EDT Reads: 1,704 2015 predictions circa 1970: houses anticipate our needs and adapt, city infrastructure is citizen and situation aware, office buildings identify and preprocess you. Today smart buildings have no such collective conscience, no shared set of fundamental services to identify, predict and synchronize around us. LiveSpace and M2Mi are changing that. LiveSpace Smart Environment devices deliver over the M2Mi IoT Platform real time presence, awareness and intent analytics as a service to local connected devices. In her session at @ThingsExpo, Sarah Cooper, VP Business of Development at M2Mi, will d... May. 29, 2015 04:27 PM EDT Reads: 737 The Industrial Internet revolution is now underway, enabled by connected machines and billions of devices that communicate and collaborate. The massive amounts of Big Data requiring real-time analysis is flooding legacy IT systems and giving way to cloud environments that can handle the unpredictable workloads. Yet many barriers remain until we can fully realize the opportunities and benefits from the convergence of machines and devices with Big Data and the cloud, including interoperability, data security and privacy. May. 29, 2015 03:45 PM EDT Reads: 5,103 Explosive growth in connected devices. Enormous amounts of data for collection and analysis. Critical use of data for split-second decision making and actionable information. All three are factors in making the Internet of Things a reality. Yet, any one factor would have an IT organization pondering its infrastructure strategy. How should your organization enhance its IT framework to enable an Internet of Things implementation? In this session, James Kirkland, Red Hat's Chief Architect for the Internet of Things and Intelligent Systems, will describe how to revolutionize your architecture and... May. 29, 2015 02:33 PM EDT Reads: 759 The Internet of Things is tied together with a thin strand that is known as time. Coincidentally, at the core of nearly all data analytics is a timestamp. When working with time series data there are a few core principles that everyone should consider, especially across datasets where time is the common boundary. In his session at Internet of @ThingsExpo, Jim Scott, Director of Enterprise Strategy & Architecture at MapR Technologies, discussed single-value, geo-spatial, and log time series data. By focusing on enterprise applications and the data center, he will use OpenTSDB as an example t... May. 29, 2015 02:00 PM EDT Reads: 6,879 We’re entering a new era of computing technology that many are calling the Internet of Things (IoT). Machine to machine, machine to infrastructure, machine to environment, the Internet of Everything, the Internet of Intelligent Things, intelligent systems – call it what you want, but it’s happening, and its potential is huge. IoT is comprised of smart machines interacting and communicating with other machines, objects, environments and infrastructures. As a result, huge volumes of data are being generated, and that data is being processed into useful actions that can “command and control” thi... May. 29, 2015 02:00 PM EDT Reads: 1,358 All major researchers estimate there will be tens of billions devices - computers, smartphones, tablets, and sensors - connected to the Internet by 2020. This number will continue to grow at a rapid pace for the next several decades. With major technology companies and startups seriously embracing IoT strategies, now is the perfect time to attend @ThingsExpo, June 9-11, 2015, at the Javits Center in New York City. Learn what is going on, contribute to the discussions, and ensure that your enterprise is as "IoT-Ready" as it can be May. 29, 2015 01:15 PM EDT Reads: 3,078 Scott Jenson leads a project called The Physical Web within the Chrome team at Google. Project members are working to take the scalability and openness of the web and use it to talk to the exponentially exploding range of smart devices. Nearly every company today working on the IoT comes up with the same basic solution: use my server and you'll be fine. But if we really believe there will be trillions of these devices, that just can't scale. We need a system that is open a scalable and by using the URL as a basic building block, we open this up and get the same resilience that the web enjoys. May. 29, 2015 01:00 PM EDT Reads: 7,528 We are reaching the end of the beginning with WebRTC, and real systems using this technology have begun to appear. One challenge that faces every WebRTC deployment (in some form or another) is identity management. For example, if you have an existing service – possibly built on a variety of different PaaS/SaaS offerings – and you want to add real-time communications you are faced with a challenge relating to user management, authentication, authorization, and validation. Service providers will want to use their existing identities, but these will have credentials already that are (hopefully) i... May. 29, 2015 01:00 PM EDT Reads: 4,789 SYS-CON Events announced today that MetraTech, now part of Ericsson, has been named “Silver Sponsor” of SYS-CON's 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York, NY. Ericsson is the driving force behind the Networked Society- a world leader in communications infrastructure, software and services. Some 40% of the world’s mobile traffic runs through networks Ericsson has supplied, serving more than 2.5 billion subscribers. May. 29, 2015 01:00 PM EDT Reads: 2,480 Thanks to widespread Internet adoption and more than 10 billion connected devices around the world, companies became more excited than ever about the Internet of Things in 2014. Add in the hype around Google Glass and the Nest Thermostat, and nearly every business, including those from traditionally low-tech industries, wanted in. But despite the buzz, some very real business questions emerged – mainly, not if a device can be connected, or even when, but why? Why does connecting to the cloud create greater value for the user? Why do connected features improve the overall experience? And why do... May. 29, 2015 12:42 PM EDT Reads: 899 SYS-CON Events announced today that O'Reilly Media has been named “Media Sponsor” of SYS-CON's 16th International Cloud Expo®, which will take place on June 9–11, 2015, at the Javits Center in New York City, NY. O'Reilly Media spreads the knowledge of innovators through its books, online services, magazines, and conferences. Since 1978, O'Reilly Media has been a chronicler and catalyst of cutting-edge development, homing in on the technology trends that really matter and spurring their adoption by amplifying "faint signals" from the alpha geeks who are creating the future. An active participa... May. 29, 2015 12:30 PM EDT Reads: 1,368 Imagine a world where targeting, attribution, and analytics are just as intrinsic to the physical world as they currently are to display advertising. Advances in technologies and changes in consumer behavior have opened the door to a whole new category of personalized marketing experience based on direct interactions with products. The products themselves now have a voice. What will they say? Who will control it? And what does it take for brands to win in this new world? In his session at @ThingsExpo, Zack Bennett, Vice President of Customer Success at EVRYTHNG, will answer these questions a... May. 29, 2015 12:13 PM EDT Reads: 840 The 4th International Internet of @ThingsExpo, co-located with the 17th International Cloud Expo - to be held November 3-5, 2015, at the Santa Clara Convention Center in Santa Clara, CA - announces that its Call for Papers is open. The Internet of Things (IoT) is the biggest idea since the creation of the Worldwide Web more than 20 years ago. May. 29, 2015 12:00 PM EDT Reads: 2,688 The Internet of Things is a misnomer. That implies that everything is on the Internet, and that simply should not be - especially for things that are blurring the line between medical devices that stimulate like a pacemaker and quantified self-sensors like a pedometer or pulse tracker. The mesh of things that we manage must be segmented into zones of trust for sensing data, transmitting data, receiving command and control administrative changes, and peer-to-peer mesh messaging. In his session at @ThingsExpo, Ryan Bagnulo, Solution Architect / Software Engineer at SOA Software, focused on desi... May. 29, 2015 11:00 AM EDT Reads: 4,491 An entirely new security model is needed for the Internet of Things, or is it? Can we save some old and tested controls for this new and different environment? In his session at @ThingsExpo, New York's at the Javits Center, Davi Ottenheimer, EMC Senior Director of Trust, reviewed hands-on lessons with IoT devices and reveal a new risk balance you might not expect. Davi Ottenheimer, EMC Senior Director of Trust, has more than nineteen years' experience managing global security operations and assessments, including a decade of leading incident response and digital forensics. He is co-author of t... May. 29, 2015 11:00 AM EDT Reads: 6,189 The multi-trillion economic opportunity around the "Internet of Things" (IoT) is emerging as the hottest topic for investors in 2015. As we connect the physical world with information technology, data from actions, processes and the environment can increase sales, improve efficiencies, automate daily activities and minimize risk. In his session at @ThingsExpo, Ed Maguire, Senior Analyst at CLSA Americas, will describe what is new and different about IoT, explore financial, technological and real-world impact across consumer and business use cases. Why now? Significant corporate and venture... May. 29, 2015 10:50 AM EDT Reads: 905 While great strides have been made relative to the video aspects of remote collaboration, audio technology has basically stagnated. Typically all audio is mixed to a single monaural stream and emanates from a single point, such as a speakerphone or a speaker associated with a video monitor. This leads to confusion and lack of understanding among participants especially regarding who is actually speaking. Spatial teleconferencing introduces the concept of acoustic spatial separation between conference participants in three dimensional space. This has been shown to significantly improve comprehe... May. 29, 2015 10:00 AM EDT Reads: 3,705 Today’s enterprise is being driven by disruptive competitive and human capital requirements to provide enterprise application access through not only desktops, but also mobile devices. To retrofit existing programs across all these devices using traditional programming methods is very costly and time consuming – often prohibitively so. In his session at @ThingsExpo, Jesse Shiah, CEO, President, and Co-Founder of AgilePoint Inc., discussed how you can create applications that run on all mobile devices as well as laptops and desktops using a visual drag-and-drop application – and eForms-buildi... May. 29, 2015 10:00 AM EDT Reads: 5,873
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207930866.66/warc/CC-MAIN-20150521113210-00100-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
20,365
61
https://community.hubitat.com/t/hubconnect-passing-global-variables/55683
code
I'm trying to pass a text global variable defined with variable connector type from one hub to another? I looked at the device type 'RM Global Variable Connector' without any success. Should I be able to do this and how would I do it? Never had really thought about that, but I wonder if the upcoming update would take care of that. Since you select devices you want to share between hubs I wonder if that includes connector variables. Boy do I hate typing these words: "Works for me" I have a Connector on one of my hubs, created in RM: And then I have that mirrored to my HubConnect Server Hub: The ONLY reason I have it is to test RM Connector I haven't used it in 6 months for sure. Just tried it, it works, both directions. (the screen cap with version: v2.0.0 is the mirrored one) dev:2774 2020-11-15 10:12:45.177 am info LocKode variable is just great app:2 2020-11-15 10:12:44.176 am info Received event from 192.168.7.66:2774--ZeeRadioLower/LocKode: [false variable, just great null LocKode variable is just great] Logs from two hubs above.. sender followed by receiver. I'm using package manager to manager the code. The HubConnect RM Global Variable Connector driver look very different. Here is my generated device Doesn't look anything like your generated device. Maybe I have the wrong driver? You're displaying the whole device info using the HubConnect RM Connect driver.. and that's correct and matches... but I displayed the lower portion: the Device Info portion, because the top portion is 'busy' -- the only portion that matters in the top is the part you didn't capture.. the Current States part there on the Right side. On the "real RM Connector" hub, there's a Set Variable button and string entry. Type something into the String field, click the Set Variable button above. Look to the right, the Current States should show your typed word(s). There's the same on the bottom left area of the mirrored device. The Current States should have the word(s) you typed and if you type a different word in the Set Variable string field and click Set Variable, again, the Current States on both hubs should follow along. The Commands portion of the HubConnect RM Global Variable Connector device will have a button for every type of Global Variable RM allows you to create. But recognize that the original Connector is ONE type. The mirrored device will be one type too but it can't know in advance which, so the HubConnect driver has "One Of Each" to cover all the bases. Thanks for explaining this. It's now working but i had to 'kick' it by typing something into the client side Set Variable then it sort of started working. I haven't had to do this 'kick' process on any of my other devices. I expect there is a technical reason for this. I didn't paste this area before because there wasn't anything there. When I ran the report of driver versions from the server.. the version level reported was blank. Actually it still is, Anyway thanks for responding so quickly. Keep up the excellent work. So I'm not quite where I want to be yet. I am currently running 3 hubs. One acts as the server and the other two are clients. I want each of them to send a global variable. One of them now works perfectly but I can't get the other one to work. On the server device under Current Status it never reports a variable. I can type in some text into Set Variable on the server and it appears on the client side. I have removed the device several times from the client and repeated the exact same process that works on the other client. Otherwise this client is working well and shares data from some 20 devices.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100781.60/warc/CC-MAIN-20231209004202-20231209034202-00018.warc.gz
CC-MAIN-2023-50
3,620
23
https://www.cmorissette.com/shop/loafers-drivers-mens-fendi-force-black/
code
Fendi Force loafers. Made of black leather. Toe box trimmed in fabric with gray and black FF motif. Lightweight rubber sole with embossed FF detail on the toe and heel. Made in Italy 100% calfskin, 65% polyester, 35% cotton, inside: 100% lamb leather
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494974.98/warc/CC-MAIN-20230127065356-20230127095356-00360.warc.gz
CC-MAIN-2023-06
250
3
http://fixunix.com/snmp/584116-mib-printer-prtconsoledisable-print.html
code
I am trying to lock/unlock the console from different copiers and printers. With the OID (iso.18.104.22.168.22.214.171.124.1.13.1) from the Printer-MIB I can make a getSNMP but when I try to setSNMP it doesn't work with any copier/printer. Normally this OID should be writeable but the SNMP-Agents doesn't support to write. Are there possibilities or is there any work around? Thanks for any tip,
s3://commoncrawl/crawl-data/CC-MAIN-2016-36/segments/1471983577646.93/warc/CC-MAIN-20160823201937-00055-ip-10-153-172-175.ec2.internal.warc.gz
CC-MAIN-2016-36
396
3
https://community.museomix.org/t/museomix-berlin-2017/589
code
The Museum Think Tank Berlin (community of museum + innovation experts) will gather on 10th January at the Jewish Museum Berlin to start working on an application to create Museomix Berlin 2017. Here is our organisation chat space firstname.lastname@example.org (from Berlin, attended Museomix Bale) email@example.com (from Museomix Bale) firstname.lastname@example.org (from Museomix Bale) email@example.com (from Museomix Bern) firstname.lastname@example.org (facilitator in Berlin)
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662595559.80/warc/CC-MAIN-20220526004200-20220526034200-00712.warc.gz
CC-MAIN-2022-21
484
7
https://pypi.org/project/multimethod/
code
Multiple argument dispatching. Multimethod provides a decorator for adding multiple argument dispatching to functions. The decorator finds the multimethod of the same name, creating it if necessary, and registers the function with its annotations. There are several multiple dispatch libraries on PyPI. This one aims for simplicity and speed. With caching of argument types, it should be the fastest pure Python implementation possible. from multimethod import multimethod @multimethod def func(x: int, y: float): ... func is now a multimethod which will delegate to the above function, when called with arguments of the specified types. Subsequent usage will register new types and functions to the existing multimethod of the same name. If an exact match isn't registered, the next closest method is called (and cached). Candidate methods are ranked based on their subclass relationships. If no matches are found, a custom TypeError is raised. strict flag can also be set on the in which case finding multiple matches also raises a Keyword arguments can be used when calling, but won't affect the dispatching. If no annotations are specified, it will inherently match any aruments. Multimethods are implemented as mappings from signatures to functions, and can be introspected as such. method[type, ...] # get registered function method[type, ...] = func # register function by explicit types method.register(func) # decorator to register annotated function (with any __name__) style syntax is also supported. This requires creating a multidispatch object explicitly, and consequently doesn't rely on the name matching. register method returns a decorator for given types, thereby supporting Python 2 and stacking of multiple signatures. from multimethod import multidispatch @multidispatch def func(*args): ... @func.register(object, int) @func.register(int, object) def _(*args): ... Overloads dispatch on annotated predicates. Each predicate is checked in the reverse order of registration. The implementation is separate from multimethod due to the different performance characteristics. Instead a simple isa predicate is provided for checking instance type. from multimethod import overload @overload def func(obj: isa(str)): ... @overload def func(obj: str.isalnum): ... @overload def func(obj: str.isdigit): ... $ pip install multimethod 100% branch coverage. $ pytest [--cov] - Missing annotations default to object - Removed deprecated dispatch stacking - Forward references allowed in type hints - Register method - Overloads with predicate dispatch - Multimethods can be defined inside a class - Optimized dispatching - Support for - Dispatch on Python 3 annotations Download the file for your platform. If you're not sure which to choose, learn more about installing packages. |Filename, size & hash SHA256 hash help||File type||Python version||Upload date| |multimethod-1.0-py2.py3-none-any.whl (5.2 kB) Copy SHA256 hash SHA256||Wheel||py2.py3| |multimethod-1.0.tar.gz (6.5 kB) Copy SHA256 hash SHA256||Source||None|
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249500704.80/warc/CC-MAIN-20190223102155-20190223124155-00370.warc.gz
CC-MAIN-2019-09
3,031
46
https://www.idr-inc.com/jobdetail/?id=21388
code
IDR is looking for a DevOps Engineer one of our Fortune 200 Healthcare clients in Denver, CO. This position will be a 12+ month contract with potential for extension. The ideal candidate will have hands on experience with Docker-Swarm, Kubernetes, Jenkins (or similar pipeline technologies), and ideally knowledge of google cloud. Responsibilities of the DevOps Engineer • Collaborate with development to design in-house monitoring tools/software for managing the SCM and Development Operations environments. • Adopt, customize and implement industry standard DevOps policies and DevOps procedures. • Provide Sr. Management with metrics and other reporting materials for the executive team. • Work closely with strategic planning groups to provide future technologies direction that fits executive vision. • Develop and define process and procedure to proactively manage all pre-production and production environments. • Work with multiple in-house external Software Configuration Management (SCM) teams enterprise-wide to assist in new architectural needs and optimize existing environments to improve workflow and productivity. Required skills of the DevOps Engineer NICE TO HAVE What's in it for you?
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519923.26/warc/CC-MAIN-20210120054203-20210120084203-00757.warc.gz
CC-MAIN-2021-04
1,215
11
https://discusstest.codechef.com/t/can-you-help-me-develop-my-style/711
code
I am sort of a newbie to all this.I am good with syntaxes and basic data types in C.C++. I saw a lot of problems in Codechef and when I think of a solution I dont know where to start From where do u start when u see a problem the first time?? Is it algorithm which u think of,or data types u intend to use or any other? By seeing all those awesome solutions I sometimes feel scared wether I can ever do things like that? Everyone is a newbie at some time, so no need to worry Start solving practice problems, the “easy” ones. They will mainly require array as data structure. But even the easy problems require knowledge of “algorithms”, “data-structures”, and most importantly “maths”. For algorithms I know 2 excellent books “Introduction to algorithms” by Cormen, and “Algorithms in C” by Sedgewick. Important topics - Dynamic Programming, Graphs, Heaps, Sorting, Searching You will gain knowledge gradually. And do give your best during live contests. With experience, you will most of the time, quickly recognize the algorithm and data structure required.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151866.98/warc/CC-MAIN-20210725205752-20210725235752-00244.warc.gz
CC-MAIN-2021-31
1,084
10
https://assignmentassistant.xyz/computational-genomics-21628
code
Computational Genomics Reviews & Tips The Unusual Secret of Computational Genomics The concepts could be somewhat difficult to grasp at first. A whole lot of theory is involved. The theory part can acquire tedious to study, but it’s easy. A good comprehension of genetics is necessary. You are going to have larger comprehension of the technologies being used in the biotechnology market. Scoring wise, the topics are simple, with a couple of formulae and derivations. The subject is chiefly voluminous theory, which might be problematic for some. It also includes editing tools. New Ideas Into Computational Genomics Never Before Revealed The plan will take place each summer for a single month. It consists of two parts. HIM programs at other universities may opt for a proper approach to fit in their own curriculum. Students will learn how to do the true work of computational genomics. The students were required to complete the project in fourteen days. In other words, they may not need to take multiple college-level biology courses to perform well in this class. Students coming into the course must already understand how to program in some computer language, but nevertheless, it need not be Python. In a research setting, our students learn to evaluate and adapt the greatest new instruments and methods that emerge annually. Each student had to work on a single genome annotation undertaking. Students are going to learn all facets of medical care which range from genomics to electronic health records. Doctoral students have to complete one minor subject of study and one breadth region of study OR two minor regions of study. Each doctoral student requires one big subject of study and two additional regions of study. In addition, should you do happen to consult with a different student, both of you have to cite this. DrPH students are highly advised to decide on a breadth in Leadership. Wearable technology will nonetheless be around, but in much more subtle forms. Computers are quite obviously required to manage the gigantic amount of information created by genome sequencing projects. The lab is designed to produce the app available freely so that those that are in cancer genomics community can use it. Computational Biology is an increasing field not just in academia, but also in industry. Molecular biology lets us comprehend the way the natural world works. Textbooks are a beneficial analogy. There’s no required textbook. The program is connected to Runge Kutta methods, Euler’s equation and other types of iteration procedures. It is quite interesting and the concepts learned prove to be useful in the future. It will be beneficial for computational biologists and experimental biologists who are doing data analysis. Creating such a prosperous course is no simple task and needs a ton more work than simply standing facing a camera and uploading videos in youtube. Because the work involves multiple collaborators, a fantastic balance between independence and team spirit is vital, and efficient communication skills are essential. Our present work focuses on two distinct classes of RBPs that are related to cancer. For people who are interested in solving real-life troubles and good at quick math, it’s pretty straightforward. Along with presentations, results and documentation should be shown on the class Wiki website. Digital Medicine’s major course outcome is to recognize the effect of technology on the health care world. The improvement of high-throughput DNA sequencing technologies will allow it to be feasible for all to acquire their private genome sequenced in the forseeable future. All normal pension benefits and occupational wellness care are supplied for university employees. When you’re comfortable modeling globins, consider using profile HMMs to locate Swissprot homologues of your favourite protein family. In addition, it shows how frequently a gene is mutated across the samples and other details. Genomics is reminiscent of first-year biology, and if you aren’t comfortable with it, genomics might be a struggle to comprehend. Therefore, it’s tough to ascertain what sequences are important simply by viewing the genome of an organism. Information on the division alignment will be supplied during the recruitment procedure. The descriptive information regarding the sequence was removed beforehand. The knowledge on offer is quite useful when you want to have in the area of mechanical design. No prior understanding of the subject is needed. Advanced programming skills aren’t required. The sound level in the job environment is usually moderate. Utilizing DNA microarrays, it’s possible to assess the expression level of each gene in the yeast genome in 1 experiment. The field itself deals with the usage of mathematical analytical methods to aid decision making. It is possible to then specialize in at least one of these areas of study. The extra focus areas may be one minor field of study and one breadth region of study OR two minor regions of study.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949035.66/warc/CC-MAIN-20230329213541-20230330003541-00729.warc.gz
CC-MAIN-2023-14
5,044
14
https://www.vice.com/en_us/topic/robot-dog
code
First page loaded. Page 1 of 1 displayed. Time to Get Nervous About Robot Dogs Again While everyone’s concerned about whether algorithms are going to take over the earth and turn the world into a bunch of computerized interactions taking place amidst some sort of grey goo, we may yet have other AI issues to deal with. Boston Dynamics...
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347388427.15/warc/CC-MAIN-20200525095005-20200525125005-00062.warc.gz
CC-MAIN-2020-24
340
3
https://tldr.tech/tech/newsletter/2019-07-31
code
Big Tech & Apple is hosting augmented reality art walking tours in major cities (1 minute read) Apple is hosting a project called [AR]T Walk, which aims to make AR more consumer-friendly while also portraying Apple stores as civic centers where communities can come together. [AR]T Walk will take people on a walking tour through various city centers around the globe, making digital art come alive in physical spaces. People in Hong Kong, London, New York, Paris, San Francisco, and Tokyo during mid-August can sign up for the tours online. The tour will last a couple of hours and involve a 1.5-mile walk. Facebook just published an update on its futuristic brain-typing project (3 minute read) Facebook's brain-reading computer interface is able to look for patterns in activity in the brain and match them with specific words and phrases in real-time. In experiments, participants answered multiple-choice questions with their thoughts, which were then translated into words by the brain-computer interface. The system was able to detect when the participant was being asked a question, understand the content of the question asked, and translate the thought answer into words. The patients in the study were using highly invasive implants, and the system was very limited. It was only able to understand nine questions and 24 total answer options with between 61 to 76 percent accuracy. Even a very basic level of thought control could make huge differences in how we interact with VR systems. The technology could help improve the lives of people who can't speak due to paralysis or other issues. NASA taps SpaceX, Blue Origin and 11 more companies for Moon and Mars space tech (2 minute read) NASA has partnered with 13 companies on 19 new projects that will help it reach the Moon and Mars. Blue Origin will develop a navigation system for safe landing on the Moon, a fuel cell-based power system, and engine nozzles for rockets with liquid propellant suited for lunar lander vehicles. SpaceX will be working on technology to help move rocket propellant safely around during orbit and on refining its vertical landing capabilities to adapt it to conditions on the Moon. Lockheed Martin will be creating metal powder-based materials using solid-state processing that can operate better in high-temperature environments, as well as autonomous methods for growing and harvesting plants in space. A link to more details on the other projects is available. Chinese vlogger who used filter to look younger caught in live-stream glitch (3 minute read) A technical glitch has revealed that a popular Chinese vlogger is actually a middle-aged woman, rather than the young glamorous girl that she portrayed. This has sparked discussions about standards of beauty across the country's social media platforms. With more than 100,000 followers, the vlogger solicited gifts from her fans, with some fans giving her more than US $14,533. After the revelation, many fans stopped following her and withdrew their transactions. The use of face filters during live-streaming is common in China. Live-streamers in China are discouraged from broadcasting publicly and are extremely restricted in what they can say. Programming, Design & Data Science Summer 2020 Internships (GitHub Repo) This repository contains a database of internships for Summer 2020 in tech, SWE, and related fields. Positions are open to anyone enrolled in a Bachelor's degree program. Additional information on other prerequisites or preferences are listed. Atomize Code (GitHub Repo) Atomize Code is a UI Design System for web apps featuring elegant and beautiful React components. It supports modern browsers and Internet Explorer 9+, server-side rendering, and Electron. New bill would ban autoplay videos and endless scrolling (2 minute read) A new bill called the Social Media Addiction Reduction Technology Act, or the SMART Act, will ban features that keep users on platforms longer, targeting the tech industry's addictive designs. Big tech has designed many products using psychological tricks that make it difficult to look away. The new bill will also make it unlawful for tech companies to use deceptive designs to manipulate users into opting into services. Companies may be required to implement tools for tracking how much time users are spending on different apps and websites if the bill becomes law.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710829.5/warc/CC-MAIN-20221201153700-20221201183700-00443.warc.gz
CC-MAIN-2022-49
4,378
16
http://freecode.com/tags/window-manager?page=1&sort=updated_at&with=417&without=
code
KaufKauf Slim Linux is a fully configured Linux-based operating system that is able to run from flash memory. It doesn't require more than 700 MBytes of disk space. It comes with full support for several touch controllers and is made to run on weak systems (400 MHz CPU and 256 MBytes RAM are enough). It is designed for embedded systems, doesn't need to be shut down cleanly and comes with Xfce, CUPS, and several other features. Wimpiggy is a library for writing EWMH-compliant, compositing window managers using Python and GTK+. The goal is to make writing a window manager as easy as writing a PyGTK application. This library can be used to build a trivial, working window manager in only about 40 lines of code. Musca is a simple window manager for X allowing both tiling and stacking modes. It is similar to ratpoison but more friendly to the mouse and using a simpler keyboard navigation. There are no built in status bars, panels, or window decorations save for thin window borders to indicate focus. Window navigation can be mouse click to focus or entirely keyboard driven. Window tiling is manual but simple, and there are no restrictions on how you divide up the screen. It uses dwm's dmenu utility for launching apps and running various built in commands not mapped to hot keys. libAfterImage is an image import, storage, manipulation, and output library for X. It features support for antialiased, TrueTypei, and X text, a 128-bit internal graphics engine, in-memory RLE image compression, high quality image scaling/flipping/blending, multipoint linear gradients, superior quality image output on X drawables, and much more. Perl OS is a program that provides an easy interface to run Perl/Tk programs. It was also created to be an easy working environment, complete with a text editor, paint program, and more. It comes with several programs, along with a utility to add many more which can be found on the Internet. From the outside, Perl OS looks like a simple operating system. But inside, it is a powerful system for working with Perl and Tk. Lobotomy involves many sub-projects oriented to experimentation about new design for human-computer interaction and, more generally, a new way for home computing. It involves a relational filesystem, a window manager, and many libraries, tools, and daemons to automatically extract and handle metadata.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368706635944/warc/CC-MAIN-20130516121715-00088-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,365
6
https://5280geek.wordpress.com/2018/11/05/walking-dead-3/
code
“What Comes After” #TWD Alright everyone let’s get out our Walking Dead Bingo cards and play: Rick on a horse, RV, RV breaks down, helicopter, apocalyptic writing on the wall, talking on a hand radio, Atlanta, excessive black blood head shot, dead pushing through a door, Rick talking to the dead and wanting to join them, and shots from the colt python in the free space. And we’ve got Bingo, what do we win? A clip show… Really? They found a way to give us a clip show in a drama show, in 2018 when we’ve evolved TV passed that. And before you argue that Rick is dying and going through his past, that doesn’t change that it’s still poorly veiled filler. Oh these are new conversations with dead characters in old sets so that’s not the same. If the actor wasn’t leaving the show would this still be the dead horse taking up all this screen time? Is this really a journey of the mind we needed to take with Rick? Or is this just making sure we all know Rick is going away and not coming back? The crappy metaphors beat the audience over the head with ‘hidden’ messages. Rick is at a crossroads, Maggie has a key in the lock, there are choices to be made, did you get that? Let’s do a conversation with Shane to explain it to make sure. Maggie can you explain why Negan crying and broken means that Negan is metaphorically dead? Good because we simple viewers couldn’t have gotten that on our own. Now Rick could you get up and cross that bridge so the show can be about characters and humanity being tested instead of big name actors and they’re weird farewell parties? I got to say Beowolf-I mean Rick-it’s a hero’s job to die fighting monsters and not grow old and die quiet in bed. In that way, you were too far gone for what comes after. #TWD #WalkingDead #AMC #RedsTake #5280Geek
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305266.34/warc/CC-MAIN-20220127133107-20220127163107-00516.warc.gz
CC-MAIN-2022-05
1,820
5
https://communities.bmc.com/message/249568?tstart=0
code
It seems the missing files are ok, javamail is trying to look for them in several locations: It seems that at least the POP3 part is ok, can you receive some mails?: DEBUG POP3: connecting to host "10.10.100.180", port 110, isSSL false S: +OK The Microsoft Exchange POP3 service is ready.
s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701146600.56/warc/CC-MAIN-20160205193906-00313-ip-10-236-182-209.ec2.internal.warc.gz
CC-MAIN-2016-07
288
4
http://gbpoy-bik.ru/dlya-zhenskoy-krasoti/shop-617545.html
code
Зеркало Magic Mirror в Добрянке 3 700 руб. На складе - 9 шт. Последний заказ был 12 мин. назад Зеркало Magic Mirror One version one focus, and make it great Introducing Magic Mirror 3 Defining a new perspective to Sketch plugins. "Introducing Magic Mirror 3" is published by James Tang in MagicSketch Blog Using Magic Mirror 3 Brief walkthrough on the features. "Using Magic Mirror 3" is published by James Tang in MagicSketch Blog Magic Mirror 3 Private Beta We've carefully identified and bring over every critical to the new version, along with from the group up revamp over our UI and UX: Magic Mirror 3 is currently in private beta, if you're interested. Supporting Corner Radius in Magic Mirror 3 Sketch Plugin Development Series Signup for Magic Mirror 3 Private Beta! One request from Magic Mirror 2 users are the feature to support shapes contains corner radius. Consider following example: Background: The basic requirement of making a perspective transform of an image is to have exactly 4 coordinates of the shape determined, so I know what shape it's for the image to transformed to.
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514577478.95/warc/CC-MAIN-20190923172009-20190923194009-00038.warc.gz
CC-MAIN-2019-39
1,157
14
https://onmsft.com/news/microsoft-joins-the-linux-foundation-as-a-platinum-member-google-joins-net-community/
code
Microsoft’s annual Connect() developer conference is officially underway in New York City. At the event, Microsoft announced it has joined the Linux Foundation as a Platinum member and that Google has joined the independent .NET community. The announcements mark another one of Microsoft’s significant steps towards the goal of empowering the ecosystem by giving developers greater choice in the tools they use. Since Microsoft is a Platinum Member of the Linux Foundation, Microsoft’s customers will now benefit from increased collaboration and innovation among a diverse ecosystem. Microsoft has already contributed to several Linux Foundation projects, including Node.js Foundation, OpenDaylight, Open Container Initiative, R Consortium and Open API Initiative. According to VentureBeat, the Platinum level membership is the highest level, and will cost the company $500,000 annually. John Gossman, an architect on the Microsoft Azure team, will sit on the Linux Foundations Board of Directors and help underwrite projects on behalf of Microsoft. In a statement, Microsoft Cloud and Enterprise Executive Vice President Scott Guthrie reflected on Microsoft joining the Linux Foundation: “The Linux Foundation is home not only to Linux, but many of the community’s most innovative open source projects. We are excited to join The Linux Foundation and partner with the community to help developers capitalize on the shift to intelligent cloud and mobile experiences… We want to help developers achieve more and capitalize on the industry’s shift toward cloud-first and mobile-first experiences using the tools and platforms of their choice…By collaborating with the community to provide open, flexible and intelligent tools and cloud services, we’re helping every developer deliver unprecedented levels of innovation.” Similarly, Jim Zemlin, the Executive Director of The Linux Foundation also reflected on Microsoft joining the Linux Foundation: Microsoft has grown and matured in its use of and contributions to open source technology… The company has become an enthusiastic supporter of Linux and of open source and a very active member of many important projects. Membership is an important step for Microsoft, but also for the open source community at large, which stands to benefit from the company’s expanding range of contributions In other Connect() news, Microsoft also announced that the addition of Google to the .NET Foundation’s Technical Steering Group. According to Microsoft, this “further reinforces the vibrancy of the .NET developer community as well as Google’s commitment to fostering an open platform that supports businesses and developers who have standardized on .NET.” Under CEO Satya Nadella, Microsoft has increased its interests in open source projects and communities. Microsoft currently has many open source projects on GitHub, and even released the open source .NET Core 1.0; partnered with Canonical to bring Ubuntu to Windows 10; and after acquiring Xamarin, open sourced its software development kit.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818452.78/warc/CC-MAIN-20240423002028-20240423032028-00092.warc.gz
CC-MAIN-2024-18
3,069
8
https://awfullibrarybooks.net/japan-expo-guide/
code
Submitter: Another horribly outdated travel book. The sad thing is this actually circulated 10 years after the Expo! Even sadder is that it’s been sitting on the shelf for 30 more years. Imagine a deluxe hotel today going for $18 a night. We love the suggestion of packing a fur stole. We are a state library. Holly: Cool book in 1970. By 1980 it was pretty much done-for. Now? Doorstop.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00541.warc.gz
CC-MAIN-2021-43
389
2
https://tech.forums.softwareag.com/t/how-to-make-an-upsert-of-records-to-a-database-using-adapter-service/235573
code
Great answers Simon/fml2. Typically the best way to implement DB logic is on the DB (but that does require the right level of access) A stored procedure to upsert masks all that DB logic from the integration, and contains this within the DB where it belongs. Of course, there are many scenarios where you don’t have the right access, or doing this could affect a support policy of a particular application, so in those scenarios you can try to build a SQL query to do this, or as a very last resort, combine individual adapter calls to retrieve/insert/update, creating an upsert service from these, and using this versus the adapters direct. It’s typically good practice to create a service façade layer in front of the adapters and use these from your integrations versus calling the adapters directly as this can help to shield your integrations from adapter changes/etc which can happen at times, and help to alleviate the ripple effect of such changes. It also means that for example, should you move away from direct DB integration to REST APIs, the façade can just be changed to invoke the REST APIs and the integration logic would continue to wokr.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046151638.93/warc/CC-MAIN-20210725045638-20210725075638-00693.warc.gz
CC-MAIN-2021-31
1,161
6
https://dev.to/manuelfs12/hugo-website-cms-4dna
code
In my previous post, I successfully deployed a Hugo static site through Netlify, you can read all about it here. I managed to create the post with ease using Markdown, but, despite the relative simplicity, it got me thinking. Is there some kind of Content Management System (CMS) for static sites? To my surprise, YES there is, but before we get all excited and dive in into this, lets put up some context of what a CMS is, and why it could help you with your own blogs. In this case, I’m looking for a different way of managing my content, don’t get me wrong, I like how I set my blog site up, and writing in Markdown is really cool, but if I can find a way to enhance it, I’m all in for it. The Hugo website has some recommendations for CMS, you can check out in the documentation. After glancing over the different suggestions, the one that caught my eye was Netlify CMS, it is funny because I am starting to feel like a Netlify fanboy, but hey, they make interesting stuff. Netlify CMS is an open source content management system for your Git workflow that enables you to provide editors with a friendly UI and intuitive workflows. You can use it with any static site generator to create faster, more flexible web projects. Sounds fun, for this experiment I’ll be implementing it on an existing Hugo site. If you want to learn more about Netlify CMS you can check it out here. For the purpose of this post, I’ll use the Hugo quickstart site, you should have something like this. Now, let’s deploy our site through Netlify, I went with it due to its ease of use and the fact that I deployed my blog through it. You can use Netlify CMS with other hosting solutions, but I’ll be using Netlify since I already know the very basics of it. If you want to know how I did it for my blog, check out this post, or you can always check the documentation here. With the site deployed, It’s time to start implementing Netlify CMS. I followed the instructions from the documentation, and I have to say, it was really easy to implement, the documentation was easy to follow, and I had the CMS on production very quickly. Keep in mind that I did host the site on Netlify which made it easier, so you millage may vary depending on the host you choose. Here is the admin panel working on the website Create Post page I also tested the Editorial Workflow which is available on GitHub hosted repositories and it is on beta for GitLab and Bitbucket. This lets me save drafts, review changes and publish when the article is ready. Now, I will hit publish, this will create a new commit on our master branch, that Netlify will take and deploy. A great success!! I enjoyed this experiment a lot, a CMS can certainly help you if you need some visual aid while editing content, or you simply don’t want to deal with Markdown directly. Netlify CMS was a surprise for me, because I thought it would be way harder to implement, but thankfully it wasn’t, and I will have on the back of my head if I ever need to make another website with a static site generator and need a CMS. Maybe in the future, I’ll try another CMS, or another static site generator, but for now, I’m really happy with my current setup, Hugo, Markdown, VS Code and Netlify. Hope to see you around for my next post.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474526.76/warc/CC-MAIN-20240224080616-20240224110616-00149.warc.gz
CC-MAIN-2024-10
3,283
17
http://yueyouu.xyz/archives/4851
code
Novel–Cultivation Online–Cultivation Online Chapter 432 Supreme Heaven’S Legacy spoil rat Even Elderly Bai shook his head having a complex concept on his deal with. starfist – kingdom’s swords and sandals “You gave it to an individual?! How can you be so having faith in? I highly doubt this ‘friend’ of yours won’t take advantage of your goodness and steal it from you.” “What actually transpired to him?” It turned out a horrific view which would terrify even the most chilly-hearted cultivators across the world. Having said that, for some reason, Yuan didn’t actually feel everything as he noticed this b.l.o.o.d.y arena. “You will find a reduce.” Xu Jiaqi stated, and she continued, “The Paradise Refining Shape merely helps you consume stuff like monster cores and demon cores, but if you take in far too much faith based vitality at once, yourself will explode as with every other cultivator. It’s that your ‘limit’ is greater than some others, so don’t overestimate yourself and eat only whatever you can go through.” “The Supreme Heavens?” Yuan then requested, “Have you considered its limits? We have a number of treasures with me that might help me attain Nature Emperor or above should i enjoyed them, however i was advised that my body would be unable to deal with it, as I’d be attaining excessive faith based power simultaneously.” “Temper my body… Ok, I will keep that in mind.” Nonetheless, right before Yuan could enquire about it, he out of the blue experienced a very sharp ache as part of his chest muscles. “Temper my body… Ok, I am going to keep that in mind.” Yuan then inquired, “Have you thought about its boundaries? I actually have some treasures with me that can help me achieve Mindset Queen or higher generally if i taken them, although i was informed that my body system would be unable to handle it, as I’d be gaining too much psychic power simultaneously.” “You brought it to an individual?! How would you be so trusting? I highly question this ‘friend’ of yours won’t take advantage of your goodness and rob it by you.” “I do… although i don’t have it on me. Do you want it lower back?” Yuan said to her. “Temper my body… Fine, I am going to bear that in mind.” “Will be there any other thing I can use up besides monster cores and demon cores?” Yuan then inquired. “Whats up! What went down?!” Xu Jiaqi questioned him, but Yuan was not anymore concerned. richest wealth is wisdom quotes And without contemplating, nearly as nevertheless his physique reacted instinctively, Yuan swung his left arm, slicing the man cleanly by 50 percent. All of a sudden, a excessive roar that caused the location to tremble resounded behind him, causing Yuan to change around. “It’s true. She’s a Spirit California king.” “You gave it to a person?! How can you be so relying? I highly suspect this ‘friend’ of yours won’t take advantage of your goodness and steal it by you.” “I-I see…” Yuan mumbled, experience happy that he’d heard Feng Yuxiang’s suggestions and didn’t eat the Dragon Ancestor’s our blood heart and soul. “You don’t already have it for you? Although I mentioned to maintain it near you? Imagine if you drop it? That issue isn’t cheap, you realize. Actually, it’s truly worth over all things in the bottom Heavens combined.” Xu Jiaqi quickly began lecturing him. Yuan out of the blue introduced an unpleasant weep, shocking Senior citizen Bai and Xu Jiaqi. “It’s correct. She’s a Character Master.” At some time after, Xu Jiaqi requested him, “Anyway, do you still have my Historical Character Jade?” “Eh? Why?” Yuan was perplexed by her notice. Observing Yuan’s system plunging to the ground, Xu Jiaqi subconsciously reacted and traveled to get his human body. Yuan didn’t recognise this individual, nevertheless it was crystal clear to him that guy was wanting to hurt him. The frown on Xu Jiaqi’s experience grew greater, and she spoke from a minute of silence, “The Legacy your close friend has… Would it be the Supreme Heaven’s Legacy?” “What the… Why am I positioning this sword?” Yuan was speechless as he realized that he possessed a sword that had been still leaking with blood vessels in their comprehension. “Oh definitely? Who’s this pal of yours and what’s her history?” Xu Jiaqi out of the blue inquired about Xiao Hua. The departed man’s entire body decreased in the floor and immediately blended in with the surroundings which had been littered with a great number of corpses. “Mainly because she’s an ‘Exile’.” Xu Jiaqi said inside of a frosty voice. “Haaa…” Xu Jiaqi all of a sudden sighed ahead of rubbing her eye. Novel–Cultivation Online–Cultivation Online
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500904.44/warc/CC-MAIN-20230208191211-20230208221211-00504.warc.gz
CC-MAIN-2023-06
4,856
39
http://webapps.stackexchange.com/questions/tagged/browser+google-maps
code
Web Applications Meta to customize your list. more stack exchange communities Start here for a quick overview of the site Detailed answers to any questions you might have Discuss the workings and policies of this site Web browser usage census map? Does anyone know of a web application that will display the types/versions of browsers people use overlaid over google maps? Dec 15 '10 at 22:04 newest browser google-maps questions feed Putting the Community back in Wiki Hot Network Questions Is this a valid method of finding magnitude of complex fraction How to copy a function by value? How can I roleplay a character more manipulative than myself? How do you align equations parts vertically? Why are laptop screens sized the way they are? Is my phone battery better / more capacitative than my laptop battery? Is there a single word that means “hitting the target but missing the point”? Can we call someone X太太 or not? If you confirm others' transactions, what do you do to earn 25 bitcoins per block? Functions of Independent Random Variables Can 只有 be used in this context? What kind of steps can I take to avoid overwhelming a new, support-heavy party? Search vs. Look Up Hide "Mark as Complete Check Box" in Sharepoint 2013 Task List Interacting with high school teachers (US) How do you choose treasure for encounters with creatures that drop treasure? Help!! How to do square root! Apply color function to matrix plot Why use "reds and oranges" not "red and orange"? NTP - How are NTP servers so accurate Simply put, what are the similarities between integers and polynomials? Can you get sucked tight on a planes toilet? Why are there many guitarists, but only one drummer in a band? Can random events destroy your ship? more hot questions Life / Arts Culture / Recreation TeX - LaTeX Unix & Linux Ask Different (Apple) Geographic Information Systems Science Fiction & Fantasy Seasoned Advice (cooking) Personal Finance & Money English Language & Usage Mi Yodeya (Judaism) Cross Validated (stats) Theoretical Computer Science Meta Stack Exchange Stack Overflow Careers site design / logo © 2014 stack exchange inc; user contributions licensed under cc by-sa 3.0
s3://commoncrawl/crawl-data/CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00233-ip-10-147-4-33.ec2.internal.warc.gz
CC-MAIN-2014-15
2,186
54
https://www.codespeedy.com/products/youtube-channel-gallery-wordpress-plugin/
code
If you want to create a channel gallery in your WordPress posts or pages, this plugin is the right choice for you. This plugin is able to create a video gallery from any YouTube channel in your posts and pages. YouTube Channel Gallery WordPress Plugin uses shortcode to ad YouTube gallery. This plugin will retrieve videos from your channel and show it as the gallery. Below is the feature list of the YouTube Channel Gallery WordPress Plugin: To use YouTube Channel Gallery WordPress Plugin, you need to get YouTube data API. You can get it easily from https://developers.google.com/youtube/v3/. After you get the data API, all you need is just to put it in the plugin settings. Note that this plugin will skip playlists and take only channel videos. So, if you want to create a gallery with your channel videos, then this plugin is a perfect choice for you.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00388.warc.gz
CC-MAIN-2022-21
859
6
https://www.datapopulator.com/sketch/
code
# Data Populator for Sketch v3.6.5 # Installation & Updates - Download the latest version (requires Sketch 69+), unzip it and double-click the data-populator.sketchpluginfile to install it in Sketch - Sketch will notify you if there are updates and you can install them via Plugins → Manage Plugins… # Demo Document Download our Demo Document and play around out of the box. You'll find some helpful hints in the Sketch Document. # Install with Sketch Runner With Sketch Runner, just go to the install command and search for Data Populator. Runner allows you to manage plugins and do much more to speed up your workflow in Sketch.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506029.42/warc/CC-MAIN-20230921174008-20230921204008-00044.warc.gz
CC-MAIN-2023-40
634
11
http://technologynotes.net/technology-notes/2012/12/28/omnifocus-and-gtd-for-technology-team-managers
code
In contemplating how to frame an article around how I use OmniFocus for work it started becoming obvious to me that the type of work I do makes a huge difference in how I use the tool to begin with. As a technology team manager, I oversee several teams of software developers who write and QA software that runs the gamut from iOS application development to database application programming. At any given time there could be two to three major projects running across several large teams or a more fragmented approach which might encompass a dozen simultaneous projects in various stages of completion involving just a single resource. Despite this abstraction, my daily work still involves working through discrete action steps. Work projects rarely involve the type of hands-on things that I might track in OmniFocus for home projects, however. The things I generally need OmniFocus to help me with for work involve reminders for sending critical emails, scheduling meetings, following up on requests, preparing for meetings etc. One of the general things that is thrown around a lot in blog posts about productivity is something like "if it was truly high priority you wouldn't need a reminder", but I think that's bullshit. When you have several projects going on at once, all needing at least some portion of your attention, it is very easy to lose critical threads as you context-switch between all of the things that go on in a typical day's chaos. How Do Technology Managers use Task Manager Applications? In the past few years, I haven't had a day when I didn't meet on at least two ongoing projects. Usually it is many more than that. All of the projects are in varied states of completion. Some of those states might be: - "idea" stage - business concept - business requirements - technical requirements - QA planning - risk assessment and management - resource planning That's just to name a few. In each one of those little meetings -- it might be a simple hallway conversation or a full-blown project kick-off meeting lasting hours -- the goal is to come out of them with actionable, trackable responsibilities. While the project manager for each project will be tracking individual tasks, I am responsible for seeing my tasks to completion and my method of keeping track of those things needs to be bulletproof. As I've written about extensively, OmniFocus is the key to all of these things for me but I've found that I use OmniFocus a bit differently than other people and I believe that owes simply to my role as a manager. Rather than write something about how everyone should use OmniFocus the same way, irrespective of their role, I thought it best to go in depth with how I use it given what I do. So, for some of you, this won't make any sense and using OmniFocus or GTD this way will likely lead to some confusion. I doubt the way I have outlined below will work for you if you're a teacher or a project manager, for instance. It might lead to some interesting ways to think about how you use the tool, however, so at least you'll get some new perspectives if you happen to wade through what will likely be a very long post. The Importance of Contexts One thing that was a bit of a revelation to me, after the initial three week period of hammering out a viable GTD workflow to help me with work, was how major a role Contexts played compared to Projects. To review the basic concepts on these and how most people use them, refer back to my earlier Beginning OmniFocus posts on them. - Beginner's OmniFocus Series: (#1) Setting up Projects and Single-Action Lists - Beginner's OmniFocus Series: (#2) Setting up Contexts As a technology-based manager, however, my need for Contexts runs a bit counter to the ideal. I use Contexts throughout the day to focus myself depending on where I am, what I'm doing or who I am talking to. I touched on this point briefly in my Context post but today I'll go into a lot more depth. I have the following Context types: - Team/Person Context - These are arranged hierarchically with team names defined as a folder and then team members rolling up to each team type. I have "Dev", "QA", "IT", "Management" and "Peers" with people defined as Contexts below each. If I have an item to cover in a QA team meeting, I'll put it in the general QA team context, but if I need to speak to a specific team member, I'll assign it to their individual Context. These are helpful when I'm in a hallway conversation with someone about a specific project and I have a few other things that they need to know about. Each item that I need to address with them is under their individual Context. Obviously this wouldn't work with huge teams but I work at a small company and this works out really well for me. - Stakeholder Context - These are also based on specific people but I keep them in a Stakeholder folder given the disparate teams they may belong to across the organization. These are tasks that I need to complete when speaking with or interacting with them directly (fact finding, spec resolutions, etc.) - Location-based Contexts - Work, Home, and Phone are the three that pertain here although things like Home and Phone can pertain to Home-based projects too. Here's where we start seeing the strength of the GTD Contexts. If I am at home and have some spare minutes to get my phone calls out of the way, it is useful to see them all listed in one Phone context. - Standing Meeting Contexts - I have created Contexts for weekly/monthly standing meetings. I add items to these when I know there are topics I need to cover in those meetings and don't want to lose track of them. These items become my agenda prior to the meeting. - Email/Computer - Any items that I need to research, emails that need replying to, websites that I need to read get filed here. So when I'm replying to email, irrespective of project, it happens when I'm taking care of my "Email/Computer" Context. - Management - This is another person-based Context which is a catch-all for anyone I need to deal with politically, cross-departmentally or otherwise. My boss is in this group as well as other SVPs across the organization. It also contains peers and people I might require to resolve organizational needs like HR and Finance. - IT Context - A person-based Context containing IT personnel who are pivotal to completion of projects or resolving issues that members of my teams might be having. These types of items can take time and a lot of back-and-forth to resolve so having them in a specific Context keeps me following up on them often enough to get taken care of. - Thinking - As I wrote a while back, this is a very untraditional Context in the GTD methodology but I use it to file away things that I need to lock the office door to think, whiteboard or brainstorm about. It doesn't get used that often but having this catch-all keeps it from getting lost in a sea of other tasks in more targetted Contexts. As you learn after doing GTD for a while, Contexts are the single-most important construct in the GTD methodology. After you spend months looking at things in Project view, as long as you haven't ignored Contexts completely, something will click and you'll understand how incredibly useful they are and why, once you've used them, using a bog-standard list of "Things To Do" in the Reminders app just doesn't cut it anymore. Well... Then What About Projects? Projects obviously have their place but because GTD isn't a project management methodology and because I'm not a project manager in the strictest sense they take a backseat to Contexts when it comes to actually working on things. Projects still are necessary for me, however, so don't think that I threw the baby out with the bathwater. It is just that I think about them differently than in the typical GTD sense. When a new project is announced, I create a Project for it in OmniFocus. It is basically a dropzone for new tasks before they get Contexts assigned to them. It also helps to set something up like this early on in the project lifecycle because you can do a full GTD "capture" about the project, cataloguing all of the things that you need to do to get the project off of the ground (things like assigning resources, checking project timelines for conflicting due dates, vacations etc.) It is fine to add a project, give it a descriptive name (or a codename) and leave it empty until you can think of things to add to it. The typical project in GTD is one where you can define the goal and break it down into a series of actions but, at this point in a project lifecycle, that may be a waste of time. Given how little you know about what the completion state is, it's not really feasible yet. What will generally happen as the lifecycle moves forward is that sub-projects will present themselves -- ones that require discrete actions steps to complete -- and they will take up residence under each Project. I suppose you could say these capital-P "Projects" serve as overarching buckets into which many sub-projects will live, each with a singular, defined conclusion and all playing a part in marching the project towards completion. The Project is also key when doing Reviews. They are a good place to take a focused look at all of the things you're on the hook for (the "what") related to a given project, regardless of Context (the "how" or "where"). Reviews Are Your Friend Reviews are essential to the care-and-feeding of a typical GTD workflow. As I laid out in my previous article focusing specifically on Reviews, doing a full review can have the feel of a reset when things are getting a bit ragged around the edges. As you scan through your tasks, deleting and adding new items as needed, you get a more well-rounded view of your project and also give yourself space to think outside-the-box on items that may have been escaping your purview when wallowing in the trenches, day to day. I generally cycle through a variety of review types. They all have their place and some are more important than others at different times in a project's lifecycle. - Project - Project-level reviews often happen in the other review cycles as well but, more specifically, they happen during project status meetings. These meetings present the ideal time to cross tasks off of the list and interact with individuals you have created Contexts for (developers, QA folks, stakeholders and project managers). You can even use the tasks outlined in your project as an agenda if called upon to do so, checking in with each contact and crossing things off of your list when you verify they're complete. It is a good time to go about adding tasks you will be responsible for as the project reveals itself over the intervening weeks or months. The worst thing that can happen is that you don't capture a critical item and the project ends up getting delayed because you didn't meet a responsibility. This can present an opportunity for a real life case of "GTD to the rescue". Between your trusted capture system and an approach to the review process that keeps things front-and-center, you can virtually eliminate this type of thing from taking you by surprise. - Daily - Every morning I check my "High Priority" perspective to see what is critical for a given day, but I also spin through the project view and get a sense of things that might be upcoming. If I get some time amidst the higher priority items, it's sometimes useful to cross some of these off the list as well before they become an issue. - Weekly - Punctuating the week, usually on Friday morning, I do a full weekly review of everything. All home and work Projects get examined and tweaked as necessary. It sounds like a lot but, if you've been doing your daily reviews, it usually only takes a few minutes. As I mentioned in my Review article, deleting tasks is just as powerful as adding them. It becomes even more pivotal as you function in a managerial role. It also grows in importance simply because you're aiming at a moving target with development projects -- often the state is not completely known at the project's inception and, adding to the complication, it is also changing on a daily basis. Like the saying goes, project management is like trying to shoot a bullet with a bullet. The best defense against being out of sync is to delete things that don't matter anymore and add things the second they reveal themselves. The Review is the single best time to do that. In the Trenches As you can see, all of these OmniFocus articles I write build on each other. They are all describing a comprehensive methodology that took a few years to arrive at and was adapted to serve my unique needs. As this website evolved, the feedback I was getting from readers led me to believe that perhaps my needs weren't all that unique. Many of the GTD books and articles out there point you towards this ideal set up where we have example projects like "Mend the roof" and such. These types of projects have fairly well-defined action steps and the end result is clear -- a mended roof. Software development is a messy business. Development projects don't appear, pre-formed, with a clear view from the first step to completion. Often, when new opportunities arise, the completion state isn't even known. In that case, having a flexible and adaptable system that allows us to manage the disparate pieces of the project for which we are responsible can mean the difference between "shipping" and an over-budget, delayed nightmare. It also keeps our finger on the pulse of the teams and stakeholders involved which is a key job of the manager. That said, where software is concerned, there are only so many parts of the process which we can hope to control. Stakeholders often interact with clients who rarely feel the pressures of time and expense and for whom it is easy to make demands that exceed the bounds of both. Specifications can take longer than anticipated to complete which will, in turn, hinder your team's ability to generate technical requirements and test plans. Each step is a link in the chain and, as a technical manager, your job is often to simply put your team in the best possible position to succeed despite all of these issues. Putting a workflow and methodology in place like the one described above is not the only way to do it, but it is the one I've arrived at that works the best for me. Hopefully you can find something in that heap of words that can help you too.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705956263/warc/CC-MAIN-20130516120556-00063-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
14,462
50
https://www.ieltsanswers.com/ielts-cambridge-past-test-books.html
code
IELTS Cambridge Past test books The main reason why I am so dogmatic (insistent) about using actual past test papers and not tests that have been created by others is because of the following reasons: - Past test papers more accurately reflect what you’re likely to get in the real test. You want to get used to what you well experience in the real test and also build up skills and strategies for the real test. - Past test papers have certain “tricks” that are used repeatedly to test candidates. You will only experience these tricks if you do the past test papers. For instance, in part one of the listening test you often need to write down a name and it is difficult to catch the spelling of the name when it involves an “M” and “N” because they sound almost the same. Note that “M” is a longer sound than “N”. In the reading test with true false not given question certain words are often used to “trick you.” For example, “many” and “most” are not synonyms (most = over 50%, but many = could be less than 50% or could be more), so if the question says most and the article says many it is not given! - You will only be able to get a reasonable idea of your level from past test papers. Tests that are not created by the IELTS organisation will not give you an accurate assessment. This is because the IELTS organisation trials tests before actually using them to make sure that they are of similar difficulty to other tests.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057973.90/warc/CC-MAIN-20210926205414-20210926235414-00657.warc.gz
CC-MAIN-2021-39
1,466
5
https://meta.discourse.org/t/post-views-counter/143807
code
I created this theme about a year ago for client and I thought I’d like to share this to the community. This theme will show the views (reads) count for posts, that’s it! I hope someone found this useful Note: only tested on tests-passed branch Hi! The administrators of the site where I am registered have installed you theme component, but the counter always stays at 0 Is there any way to fix it? The site is: It worked perfect! Thank you! Can you provide the option to hide the number of views in the replies? I think is broken since last discourse upgrade If anybody is still using this component and encounter problems after upgrading Discourse to 2.9.0.beta3, have a try with my fix: GitHub - freemdict/discourse-post-views-counter-theme. Remember to disable or remove the previous version of component. I basically changed only one line of code: Fix bug brought by Discourse 2.9.0.beta3 · freemdict/discourse-post-views-counter-theme@3f337c0 · GitHub. Thanks so much your version worked @mdict_free (oct 2022)
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711013.11/warc/CC-MAIN-20221205064509-20221205094509-00794.warc.gz
CC-MAIN-2022-49
1,023
12
https://security.stackexchange.com/questions/160130/how-should-i-securely-generate-random-passwords-when-importing-new-users
code
I need to generate random passwords when importing new users from an external source. I'm currently doing it by taking a random assortment of 8 lowercase/uppercase letters, and numbers. I am also using Ruby's rand as a PRNG. Is this cryptographically secure? First of all, you need to use a secure random number generator, like Ruby's Second, you should choose a target security level for the generated passwords, i.e., how hard should they be to crack? Security levels are often given in bits, i.e., the base 2 logarithm of the number of distinct equiprobable passwords. In simpler words: - A 64-bit security level means that there are 2^64 distinct passwords, and your system is just as likely to be choose one as any of the others; - A "random assortment of 8 lowercase/uppercase letters and numbers" is 62 distinct characters, which means that choosing one such character at random provides log2(62) ≈ 6 bits of security. Choosing 8 such characters independently at random provides log2(62) * 8 ≈ 48 bits of security. 48 bits is crackable in practice for a really dedicated attacker; I would recommend about a 64-bit security level as the bare minimum for passwords, and I prefer about 80 bits for my own. With lowercase/uppercase letters + digits, you need 11 random characters for a 64-bit security level, and 13 characters gives you a 78-bit security level. Before an edit your question asked about the effect of adding symbols to your passwords as well bumps the alphabet size to about 95 characters, and log2(95) ≈ 6.6, which brings those password sizes down by one character: 10 characters for a 64-bit password and 12 for a 79-bit password. If you're generating passwords at random, you can therefore see that it's not a huge effect. In Ruby, you can simply use SecureRandom.hex for random passwords that the user needs to change. pry(main)> require 'securerandom' => true pry(main)> SecureRandom.hex => "c3c4fe04dcc0d388fa37fc5991423a5d" pry(main)> It uses the OS-provided secure random number generator, while the rand() function is a Mersenne Twister (easy to predict once you see the output). The default length is perfectly fine as well. 8 lowercase/uppercase letters, and numbers This is the bare minimum of a weak password policy. Especially 8 characters are not enough. As you are generating your passwords, why restrict yourself to passwords which are quite weak and easily bruteforced? I would suggest at least 64bits, but there is really no good reason not to go with 128/160, which would be more secure. You will also have the problem of distributing the passwords, which will likely be insecure (probably email?). Because of this, I would strongly suggest to make these one-time only passwords which the user must change after the next successful login (so basically it becomes a password reset token). The way you do it is generally fine. Make sure you use a secure random number generator. Some PRNGs create predictable output, making it possible to guess another user's password from my own. The more possible outcomes your algorithm has, the more secure the password will be. This means that the longer the password, the more secure it is. However, a longer password will also be harder for the user to type. You use eight characters, which is a bit low. I recommend at least 10 characters. In the same way, including punctuation characters increases the security a bit. To make passwords easier for your users, you may want to remove characters that look alike, such as 0, o and O. There are also algorithms that create somewhat rememberable passwords, such as "NaughtyChopstick21@". However, it is often better if users use a password manager or create a memorable and secure password themselves. Your password requirements could be better but will probably work as a temporary password. I'd be more concerned about how you are going to distribute those passwords. Probably by email? If you are that is very insecure. If you do go this route you should allow the passwords to be used only one time. rand is not cryptographically secure. I would look into using sysrandom instead. For randomly generated passwords there are two important measures: - Make the entropy of the password high enough - Make it as user friendly as possible. You need to decide how many bits of security is necessary. 8 alphanumeric characters does not provide enough entropy, there are two ways you can improve that: - Use a larger set of possible characters - Use a longer password Making the password longer is the more efficient of the two at increasing the entropy of your passwords. It is also the most user friendly approach because the more different characters you use the harder they will be to type. And even if they are not going to be typed by the user, the more different characters will also make the password harder to copy-paste and more likely to trigger interoperability problems. I would say you have already gone too far by using all the alphanumeric characters. To make it more user friendly I would recommend sticking with only lower case letters. And to make it secure I would recommend to make the password long enough. Assume your security goal is 64 bits, you could either go with: - 14 random lower case characters - 10 random characters from all printable ASCII characters The inconvenience of having to deal with all the special characters is much worse for the user than the inconvenience of having the generated password be 4 characters longer. With any password system and it's security you really have to consider more than just the length/randomness of the password, it's really more important to think about how they'll be attacked. There's two ways passwords get attacked Online brute-force. Here the attacker is attacking the user's password via your system. Unless your user has chosen a password that can be guessing before the account lockout kicks in, the attacker won't succeed in getting in. 8 generally random characters are very very unlikely to be guessed in this scenario. Important things for you to do here is ensure your system blocks attackers and try to ensure users don't choose very common passwords like 123456. Offline brute-force Here the attacker has a copy of your password database (presumbably via a flaw such as SQL Injection, if it's a web based system) and is trying to crack the passwords. Likely the most important control here is what algorithm you're using to store the passwords. Here's I'll defer to Thomas Pornin to explain the options but it's worth noting that ruby has decent library support for bcrypt (Rails uses it for SecurePassword). Password length is also important here as is randomness, but cracking 8 char generally random passwords hashed with bcrypt and a decent work factor, is a non-trivial thing for most attackers. Without knowing your risk model I can't say it'll be fine, but I don't see it as dreadfully bad either. On your point about cryptographically secure, as others have said ruby SecureRandom is better than Use five-word, randomly generated passphrases instead. You could adapt the wordlist found here: Make them all lower case - easy to type, easy to memorize if needed.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100602.36/warc/CC-MAIN-20231206162528-20231206192528-00154.warc.gz
CC-MAIN-2023-50
7,187
46
http://boardgames.stackexchange.com/questions/tagged/starcraft+variants
code
Board & Card Games Board & Card Games Meta to customize your list. more stack exchange communities Start here for a quick overview of the site Detailed answers to any questions you might have Discuss the workings and policies of this site Board & Card Games Where can I find more Brood War scenarios? The Brood War expansion of Starcraft introduces the idea of the Scenario Game Mode, which basically comprises a pregenerated galaxy setup with special victory conditions. Often, playing scenarios ... Mar 29 '12 at 2:49 newest starcraft variants questions feed Hot Network Questions How to roleplay languages that none of the player characters know? Save site as template in SharePoint 2013 How to avoid being forked into oblivion by a more powerful contributor? Better way to code these deeply nested / multiple if/else statements Can humans survive without consuming life? Is it professional for interviewers to end a job interview midway if not happy with candidate? Why doesn't Mutex get released when disposed? How did the Tamarians of ST:TNG manage technical language? Is it common to provide digital notes (slides or handwritten) for students? Check existence of limit Uncertainty Principle Intuition pythontex - How to choose the version of Python used? How can I deal with senior colleagues constantly getting my name wrong? Why does void in C mean not void? Output the legendary Yggdrasil Is there a word which means whatever you want it to mean? Or has no meaning? If a baby is born on an international flight over international waters, what nationality are they? Why does research cost so much money? The longest gap between a movie and its sequel Interview question: "Tell me about when you offended somebody." Was the location of the 180° line of longitude coincidence or deliberately chosen? How can I only draw outer common tangent to two circles? How to achieve this cover design using LaTeX? How many even numbers are the sum of at most one pair of prime numbers? more hot questions Life / Arts Culture / Recreation TeX - LaTeX Unix & Linux Ask Different (Apple) Geographic Information Systems Science Fiction & Fantasy Seasoned Advice (cooking) Personal Finance & Money English Language & Usage Mi Yodeya (Judaism) Cross Validated (stats) Theoretical Computer Science Meta Stack Exchange Stack Overflow Careers site design / logo © 2014 stack exchange inc; user contributions licensed under cc by-sa 3.0
s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500826025.8/warc/CC-MAIN-20140820021346-00013-ip-10-180-136-8.ec2.internal.warc.gz
CC-MAIN-2014-35
2,424
55
https://oer.galileo.usg.edu/mathematics-textbooks/16/
code
Topics covered in this text include: Files can also be downloaded on the Dalton State College GitHub: Accessible files with optical character recognition (OCR) and auto-tagging provided by the Center for Inclusive Design and Innovation. Analytic Geometry & Calculus I, II, & III MATH 2253, MATH 2254, MATH 2255 Creative Commons License This work is licensed under a Creative Commons Attribution-NonCommercial-Share Alike 4.0 International License. University System of Georgia Algebraic Geometry | Geometry and Topology Gonzalez, Thomas; Hilgemann, Michael; and Schmurr, Jason, "Dalton State College APEX Calculus" (2018). Mathematics Open Textbooks. 16.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100781.60/warc/CC-MAIN-20231209004202-20231209034202-00814.warc.gz
CC-MAIN-2023-50
654
10
https://french.meta.stackexchange.com/users/2317/matthieu-rouget
code
Top network posts - 82 How to get error message when ifstream open fails - 20 Go to Matching Brace in Visual Studio? - 16 C++ cout gives undeclared identifier - 13 DLL written in C vs the same written in C++ - 12 Is it possible to get the time (of the day) and date at time of compilation? - 11 gcc: error: unrecognized command line option '-fforce-mem' - 7 Correct way of initializing a struct in a class constructor - View more network posts → Keeping a low profile. This user hasn't posted yet.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154466.61/warc/CC-MAIN-20210803155731-20210803185731-00107.warc.gz
CC-MAIN-2021-31
499
11
http://ccit.college.columbia.edu/jobs
code
Columbia College Information Technology (CCIT) is a small group of professionals who provide software, hardware, infrastructure, Web publishing, and custom application development solutions for a variety of clients at Columbia University. We foster a lively, open, and productive work environment. For more on working at Columbia University, see the main jobs site. Our current open opportunities are listed below.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218194601.22/warc/CC-MAIN-20170322212954-00172-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
414
5
http://mail-index.netbsd.org/tech-kern/2004/07/23/0010.html
code
Subject: Re: Valid use of bus_dma(9)? To: Jochen Kunz <email@example.com> From: Manuel Bouyer <firstname.lastname@example.org> Date: 07/23/2004 17:19:23 On Thu, Jul 22, 2004 at 03:13:10PM +0200, Jochen Kunz wrote: > Is this a valid use of bus_dma(9)? > I.e. the usual way to get some DMA memory and later... > y_segs = x_segs > y_segs.ds_addr += sizeof(some_thing); > y_segs.ds_len = sizeof(some_other_thing); > Thus resulting in a double mapping of the previously allocated and maped > "x" DMA memory. The physical "y" memory is subregion of the physical "x" > memory mapped at a different KVA. > To me this looks wrong. The "y" memory should be handled as an offset of > the "x" memory and a driver should never modify or "self create" a DMA > segment / map. (At least that is the way I understand bus_dma(9).) The mapping of this memory is machine-dependent (or "opaque"); machine-independent code is not to assume that the addresses returned are valid in kernel virtual address space, or that the addresses returned are system physical addresses. The address To me, this means that we can't make any assumptions about the values returned by bus_dmamem_alloc(), and especially it's wrong to assume we can do memory-related operations on ds_addr/ds_len. If this is really needed, we may need a bus_dmamem_submap(), or something But I have the feeling that this could be handled in some other ways, maybe with device-dependant macros or functions, like siop/esiop does. > Background: fpa(4) does the above. It fails on hp700 for this reason. > I now need to decide if the hp700 bus_dma(9) code needs fixing (my > problem) or if fpa(4) needs fixing (not my problem ;-) ). My guess is that fpa(4) needs fixing. I suspect it may have problems on platforms like sparc64 for the same reason. Manuel Bouyer <email@example.com> NetBSD: 26 ans d'experience feront toujours la difference
s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370493684.2/warc/CC-MAIN-20200329015008-20200329045008-00379.warc.gz
CC-MAIN-2020-16
1,879
33
https://blockchain-society.science/?p=1193
code
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?Blockchain enthusiasts point to more traditional forms of trust—bank processing fees, for example—as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Etherium. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility—that’s when the people in charge of a blockchain step outside the system to change it—people will need to be in charge.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476374.40/warc/CC-MAIN-20240303111005-20240303141005-00800.warc.gz
CC-MAIN-2024-10
1,854
1
https://superuser.com/questions/792258/jenkins-docker-client-wasnt-initialized
code
I am trying to run the docker plugin in Jenkins but I get the error: [Docker] ERROR: docker client is not initialized, command 'Pull image' was aborted. Check Jenkins server log which Docker client wasn't initialized What does that mean? I have installed docker on the machine that runs jenkins.
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389798.91/warc/CC-MAIN-20210309092230-20210309122230-00616.warc.gz
CC-MAIN-2021-10
295
3
https://saker.build/saker.java.compiler/doc/javacompile/bundleclasspath.html
code
The package provides facility for creating a classpath that contains saker.nest bundles. The saker.java.classpath.bundle() task allows creating classpath for a given set of bundles. If the build is running inside an IDE, then the source attachments will be automatically downloaded byte the task. saker.java.compile( SourceDirectories: src, ClassPath: saker.java.classpath.bundle(example.bundle-v1.0) ) The above compilation will have the example.bundle-v1.0 on its classpath. Note that the dependencies of the bundle are not resolved. To resolve them, use the saker.java.compile( SourceDirectories: src, ClassPath: saker.java.classpath.bundle( nest.dependency.resolve(example.bundle) ) ) The above will include the example.bundle with an appropriate version as well as its dependencies (including transitive ones).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00464.warc.gz
CC-MAIN-2022-21
815
8
https://phabricator.kairohm.dev/T55
code
A commercially available G2 Bus Connector doesn't seem to exist, so another solution must be found. I made a crude model in FreeCAD to test my ideas on. It's only the mating side of the modem connector so far, but the side that mounts to the PCB doesn't really matter since we don't need an exact replacement. 3D printing the connector may be a viable option. I'll create a model to see if it's even possible to meet the design rules.
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500126.0/warc/CC-MAIN-20230204110651-20230204140651-00556.warc.gz
CC-MAIN-2023-06
434
3
https://brmlab.cz/event/gnome_python_hackfest
code
GNOME Python Hackfest |GNOME Python Hackfest |17.1.2011 - 21.1.2011 On the third week of January we'll hold a GNOME Python Hackfest at our hackerspace. Nine top-hackers involved in GNOME and Python communities will hack for 5 days in order to adapt Python bindings to the new GNOME 3.0 API. See GNOME wiki for more information about the agenda. They will also hold two talks during the hackfest: - Wed 19.1.2011 19:00 OLPC and Sugar (Tomeu Vizoso & Simon Schampijer) - The One Laptop Per Child initiative has been working on improving the educational opportunities of children from all around the world since 2005. The organization designs and produces machines and software specifically aimed to this goal, while also supporting the institutions that deliver and maintain the project on the field. As of January 2011, more than 1.5 million machines have been delivered to more than 15 countries. - Sugar is an immersive user experience that is used in educational projects such as OLPC. The main design goal is to maximize the chances of learning and does so through clarity, providing good affordances for collaboration and estimulating the production and critique of content. It is a Free Software project with a diverse community with members from all around the world. - This talk will introduce both projects and explain their current states and perspectives of future. - recording: http://nat.brmlab.cz/talks/OLPCandSuggar.mkv - Fri 21.1.2011 19:00 Getting things done in Open Source - The legacy of PyGObject and it's benefactor the GNOME Foundation - This talk will go through the history of the Python bindings for GTK+ along with how the Foundation helped keep the project going even after the main developers had left the project. We will talk about the decisions made to keep the project a continued success and how organizations such as Brmlab help us in our ongoing mission. - GNOME Python Hackfest (gnome.org) - Python ♥ GNOME Hackfest 2011 (tomeuvizoso.net) - PyGI in Prague (j5live.com) - Na zdravi PyGI (piware.de) - Bridging future gaps – bringing Sugar to Gtk-3 (erikos.sweettimez.de) - Wrap-up: Python ⊕ GNOME Hackfest 2011 (tomeuvizoso.net) - PyGObject Hackfest Report (laszlopandy.com)
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817036.4/warc/CC-MAIN-20240416000407-20240416030407-00210.warc.gz
CC-MAIN-2024-18
2,215
23
https://signsofmyrab.wordpress.com/2015/02/08/gravity-and-its-direction/
code
Assalamu alaikum wa rehmatullahi wa barakatuh Every body on the Earth is attracted towards it by a force known as gravitational force. The direction of the gravitational force is towards the Earth. Weight is nothing except the gravitational pull on a body. Now see this verse. O ye who believe! what is the matter with you, that, when ye are asked to go forth in the cause of Allah, ye cling heavily (weigh) to the earth? Do ye prefer the life of this world to the Hereafter? But little is the comfort of this life, as compared with the Hereafter.(Surah Taubah 38) The word used is thaqaltum,and its root word is thaqal which means weight. See the verse again, …. the reason that you weigh (the phenomena) towards(this phenomena has a direction) the Earth (the correct direction)…!!! The truth is from your Lord, so never be among the doubters.(Surah baqarah 147)
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676590794.69/warc/CC-MAIN-20180719090301-20180719110301-00266.warc.gz
CC-MAIN-2018-30
867
10
https://godotengine.org/qa/5524/how-to-clear-text-on-key-input
code
LineEdit has a text_entered signal that will fire when it has focus and Enter is pressed. Connect that signal to a function and it should be as easy as calling clear() on the LineEdit node. You should be able to write your other logic in there as well. Just tested and it works: label_node = self.get_parent().get_node("RichTextLabel") func _on_LineEdit_text_entered( text ): if text.length() > 0: # append the text to RichTextLabel # option: label_node.newline() instead of "\n" label_node.add_text(text + "\n") # clear LineEdit
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400274441.60/warc/CC-MAIN-20200927085848-20200927115848-00749.warc.gz
CC-MAIN-2020-40
529
11
https://www.freelancer.sg/projects/java/java-tourgds-xml-plugin-for/
code
We need to connect our APIs (XML) to 3 different calls for a tour/events vendor plugin. We need just the JAVA part, as we supply every aspect of the connectionAPI for: - retrieving availabilty - sending a reservation/amend/cancellation The plugin is well commented on a github repository. Slack account to communicate with HQ is required. toursgds XML Schemas for Tours GDS is used. 9 freelancers are bidding on average €171 for this job Hello, I have an experience in java over more than 7 years. I worked for multiple clients and delivered the projects successfully on time. Let me know if you found myself a good fit for this job. Thanks Hamza
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655880243.25/warc/CC-MAIN-20200702205206-20200702235206-00071.warc.gz
CC-MAIN-2020-29
648
8
http://moorec.people.cofc.edu/group_assignment140_18.htm
code
Group Design Concepts Presentation Assignment Instructions: Organize into groups of 2-3. Then choose a pair of design principles and elements from the list (see OAKS groups). I have the principles and elements paired up in a random fashion. They have no relationships with each other. Pairing them is simply a way to get all of them covered. After researching and studying them, you will create a compelling presentation to share your findings and designs with the class. The Presentation: If you use slides, the presentation must not be text-heavy. Since this portion of the course is heavily imbued with graphics, we want to take a departure from the text-heavy slides and display more vibrancy with use of imagery. Therefore, do not use the traditional bullet-point style. They are visually boring and the presenter might make the mistake of reading from the screen. Peruse this article to discover some ways to avoid killing the audience by bullet points. Here are some additional guidelines and rules for preparing and delivering your presentation: As for the content, your goal is three-fold: define, show examples, and inspire. Thus, More on Delivery & Submissions: Delivery: The time limit is 7 minutes for the presentation and about 5 minutes Q&A. In your quick introduction of team members, include an interesting or fun fact about each person. (Make it blend into your blurb; don’t say “A fun fact about John is…”) You must moderate the 3-5-minute question and answer period in an interesting way. Some suggestions are: A) ask the audience questions, B) revisit a slide for more explanation C) suggest a question for the exam. Please don't neglect the very important part of moderating a discussion, as it will count into your grade. Final Submission: Do not submit the slides. Instead, write a 1-2 page summary, including two possible test questions. Your summary can and should contain a few images, but should not be overwhelmingly images. You may submit it in the OAKS group dropbox called "Summary". About the Grade: This will count as a regular assignment for each group member.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703557462.87/warc/CC-MAIN-20210124204052-20210124234052-00562.warc.gz
CC-MAIN-2021-04
2,103
12
https://www.ciscopress.com/articles/article.asp?p=28688&amp;seqNum=6
code
Quality of service (QoS) and MPLS are, at a political level, similar. They're both technologies that have been gaining popularity in recent years. They both seem to be technologies that you either love or hatesome people are huge QoS fans, and others can't stand it. The same is true of MPLSsome people like it, and others don't. At a technical level, though, QoS and MPLS are very different. QoS is an umbrella term that covers network performance characteristics. As discussed in Chapter 1, "Understanding Traffic Engineering with MPLS," QoS has two parts: Finding a path through your network that can provide the service you offer Enforcing that service The acronym QoS in respect to IP first showed up in RFC 1006, "ISO Transport Service on Top of the TCP: Version 3," published in 1987. The term QoS has been around for even longer, because it is a general term used to describe performance characteristics in networks. In the IP and MPLS worlds, the term QoS is most often used to describe a set of techniques to manage packet loss, latency, and jitter. QoS has been rather appropriately described as "managed unfairness": If you have contention for system resources, who are you unfair to, and why? Two QoS architectures are in use today: Integrated Services (IntServ) Differentiated Services (DiffServ) For various reasons, IntServ never scaled to the level it needed to get to for Internet-size networks. IntServ is fine for small- to medium-sized networks, but its need to make end-to-end, host-to-host, per-application microflows across a network means it can't grow to the level that large service provider networks need. DiffServ, on the other hand, has proven quite scalable. Its use of classification on the edge and per-hop queuing and discard behaviors in the core means that most of the work is done at the edge, and you don't need to keep any microflow state in the core. This chapter assumes that you understand QoS on an IP network. It concentrates on the integration of MPLS into the IP QoS spectrum of services. This means that you should be comfortable with acronyms such as CAR, LLQ, MDRR, MQC, SLA, and WRED in order to get the most out of this chapter. This chapter briefly reviews both the DiffServ architecture and the Modular QoS CLI (MQC), but see Appendix B, "CCO and Other References," if you want to learn more about the portfolio of Cisco QoS tools. QoS, as used in casual conversation and in the context of IP and MPLS networks, is a method of packet treatment: How do you decide which packets get what service? MPLS, on the other hand, is a switching method used to get packets from one place to another by going through a series of hops. Which hops a packet goes through can be determined by your IGP routing or by MPLS TE. So there you have itMPLS is about getting packets from one hop to another, and QoS (as the term is commonly used) is what happens to packets at each hop. As you can imagine, between two complex devices such as QoS and MPLS, a lot can be done. This chapter covers five topics: The DiffServ architecture DiffServ's interaction with IP Precedence and MPLS EXP bits The treatment of EXP values in a label stack as packets are forwarded throughout a network A quick review of the Modular QoS CLI (MQC), which is how most QoS features on most platforms are configured Where DiffServ and MPLS TE intersectthe emerging DiffServ-Aware Traffic Engineering (DS-TE) devices and how they can be used to further optimize your network performance DiffServ and MPLS TE It is important to understand that the DiffServ architecture and the sections of this chapter that cover DiffServ and MPLS have nothing to do with MPLS TE. DiffServ is purely a method of treating packets differently at each hop. The DiffServ architecture doesn't care what control plane protocol a given label assignment comes from. Whether it's RSVP or LDP, or BGP, or something else entirely, the forwarding plane doesn't care. Why does this chapter exist then, if it's not about TE? Partly because MPLS TE and DiffServ treatment of MPLS packets go hand in hand in many network designs, and partly because of the existence of something called DS-TE, discussed later in this chapter. The DiffServ Architecture RFC 2475 defines an architecture for Differentiated Serviceshow to use DiffServ Code Point (DSCP) bits and various QoS mechanisms to provide different qualities of service in your network. DiffServ has two major components: Traffic conditioningIncludes things such as policing, coloring, and shaping. Is done only at the edge of the network. Per-hop behaviorsEssentially consist of queuing, scheduling, and dropping mechanisms. As the name implies, they are done at every hop in the network. Cisco IOS Software provides all sorts of different tools to apply these architecture pieces. You can configure most services in two waysa host of older, disconnected, per-platform methods, and a newer, unified configuration set called the MQC. Only MQC is covered in this chapter. For information on the older configuration mechanisms, see Appendix B or the documentation on CCO. Not all platforms support MQC, so there might be times when you need to configure a service using a non-MQC configuration method; however, MQC is where all QoS configuration services are heading, so it's definitely worth understanding. Traffic conditioning generally involves classification, policing, and marking, and per-hop behaviors deal with queuing, scheduling, and dropping. Each of these topics are discussed briefly. The first step in applying the DiffServ architecture is to have the capability to classify packets. Classification is the act of examining a packet to decide what sort of rules it should be run through, and subsequently what DSCP or EXP value should be set on the packet. Classifying IP Packets Classifying IP packets is straightforward. You can match on just about anything in the IP header. Specific match capabilities vary by platform, but generally, destination IP address, source IP address, and DSCP values can be matched against. The idea behind DSCP is discussed in the section "DiffServ and IP Packets." Classifying MPLS Packets The big thing to keep in mind when classifying MPLS packets is that you can't match on anything other than the outermost EXP value in the label stack. There's no way to look past the MPLS header at the underlying IP packet and do any matching on or modification of that packet. You can't match on the label value in the top of the stack, and you can't match on TTL (just as you can't match on IP TTL). Finally, you can't do any matching of EXP values on any label other than the topmost label on the stack. Policing involves metering traffic against a specified service contract and dealing with in-rate and out-of-rate traffic differently. One of the fundamental pieces of the DiffServ architecture is that you don't allow more traffic on your network than you have designed for, to make sure that you don't overtax the queues you've provisioned. This is generally done with policing, although it can also be done with shaping. Policing is done on the edge of the network. As such, the packets coming into the network are very often IP packets. However, under some scenarios it is possible to receive MPLS-labeled packets on the edge of the network. For example, the Carrier Supporting Carrier architecture (see Appendix B) means that a provider receives MPLS-labeled packets from a customer. The marking configuration is usually very tightly tied to the policing configuration. You can mark traffic as in-rate and out-of-rate as a result of policing traffic. You don't need to police in order to mark. For example, you can simply define a mapping between the IP packet's DSCP value and the MPLS EXP bits to be used when a label is imposed on these packets. Another possibility is to simply mark all traffic coming in on an interface, regardless of traffic rate. This is handy if you have some customers who are paying extra for better QoS and some who are not. For those who are not, simply set the EXP to 0 on all packets from that customer. Being able to set the EXP on a packet, rather than having to set the IP Precedence, is one of the advantages of MPLS. This is discussed in more detail in the sections "Label Stack Treatment" and "Tunnel Modes." Queuing is accomplished in different ways on different platforms. However, the good news is that you can treat MPLS EXP just like IP Precedence. Multiple queuing techniques can be applied to MPLS, depending on your platform and code version: First In First Out (FIFO) Modified Deficit Round Robin (MDRR) (GSR platforms only) Class-Based Weighted Fair Queuing (CBWFQ) (most non-GSR platforms) Low-Latency Queuing (LLQ) FIFO exists on every platform and every interface. It is the default on almost all of those interfaces. MDRR, CBWFQ, and LLQ are configured using the MQC, just like most other QoS mechanisms on most platforms. Just match the desired MPLS EXP values in a class map and then configure a bandwidth or latency guarantee via the bandwidth or priority commands. The underlying scheduling algorithm (MDRR, CBWFQ/LLQ) brings the guarantee to life. Queuing is one of two parts of what the DiffServ architecture calls per-hop behaviors (PHBs). A per-hop behavior is, surprisingly, a behavior that is implemented at each hop. PHBs have two fundamental piecesqueuing and dropping. Dropping is the other half of DiffServ's PHB. Dropping is important not only to manage queue depth per traffic class, but also to signal transport-level backoff to TCP-based applications. TCP responds to occasional packet drops by slowing down the rate at which it sends. TCP responds better to occasional drops than to tail drop after a queue is completely filled up. See Appendix B for more information. Weighted Random Early Detection (WRED) is the DiffServ drop mechanism implemented on most Cisco platforms. WRED works on MPLS EXP just like it does on IP Precedence. See the next section for WRED configuration details. As you can see, implementing DiffServ behavior with MPLS packets is no more and no less than implementing the same behavior with IP.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296820065.92/warc/CC-MAIN-20240425000826-20240425030826-00594.warc.gz
CC-MAIN-2024-18
10,162
52
https://man.archlinux.org/man/diff3.1.en
code
diff3 - compare three files line by line diff3 [OPTION]... MYFILE OLDFILE YOURFILE Compare three files line by line. Mandatory arguments to long options are mandatory for short options too. - -A, --show-all - output all changes, bracketing conflicts - -e, --ed - output ed script incorporating changes from OLDFILE to YOURFILE into MYFILE - -E, --show-overlap - like -e, but bracket conflicts - -3, --easy-only - like -e, but incorporate only nonoverlapping changes - -x, --overlap-only - like -e, but incorporate only overlapping changes - like -x, but bracket conflicts - append 'w' and 'q' commands to ed scripts - -m, --merge - output actual merged file, according to -A if no other options are given - -a, --text - treat all files as text - strip trailing carriage return on input - -T, --initial-tab - make tabs line up by prepending a tab - use PROGRAM to compare files - -L, --label=LABEL - use LABEL instead of file name (can be repeated up to three times) - display this help and exit - -v, --version - output version information and exit The default output format is a somewhat human-readable representation of the changes. The -e, -E, -x, -X (and corresponding long) options cause an ed script to be output instead of the default. Finally, the -m (--merge) option causes diff3 to do the merge internally and output the actual merged file. For unusual input, this is more robust than using ed. If a FILE is '-', read standard input. Exit status is 0 if successful, 1 if conflicts, 2 if trouble. Written by Randy Smith. Copyright © 2023 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. The full documentation for diff3 is maintained as a Texinfo manual. If the info and diff3 programs are properly installed at your site, the command - info diff3 should give you access to the complete manual. |May 2023||diffutils 3.10|
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100287.49/warc/CC-MAIN-20231201120231-20231201150231-00302.warc.gz
CC-MAIN-2023-50
1,977
41
https://remotive.io/jobs/software-dev/frontend-software-engineer-12064
code
Frontend Software Engineer 2 months ago Job type: Full-time Hiring from: US / Canada First appeared on Github Category: Software Dev CircleCI is looking for a front end development focused software engineer to help us build the rich web experiences that power our platform. You will work closely with product, design, and your engineering teammates to help engage with CircleCI’s users across our variety of web applications and marketing sites. To thrive in this role, you are someone who works well with distributed teams and sees collaboration as the key to success. You have a passion for learning and working with a variety of front end web technologies across web applications and marketing efforts. Here are a few things you’ll get to do in this role: Work closely with product and design to brainstorm effective ways to engage with CircleCI’s users. Help implement the direction of the UI and UX of our web applications and marketing efforts. Collaborate, grow with, and learn from your engineering teammates through planning, pairing, testing, and delivery of the features you build. Work within your team to foster a culture of priority-setting and urgency in alignment with organizational strategy. We’re looking for someone who enjoys collaboration, is curious and interested in learning, brings strong communication and teamwork skills, and helps others grow by sharing their expertise and encouraging best practices. If this sounds like you, here are some additional qualities we’re looking for: Practical experience working with modern front-end web technologies. Articulate UI and UX opinions. An eye for detail when implementing complex UI designs. A deep appreciation and understanding of the value of testing. A desire to learn how the work you do provides value to our users. The ability to break down tasks to ensure they’re appropriately sized, and the ability to estimate the effort required to complete. Working remotely at CircleCI We’re a distributed company with teammates across the world. For this role, we can support you working remotely anywhere in the United States or Canada. CircleCI Engineering Competency Matrix This role equals level E2 on our Engineering Competency Matrix, our internal career growth system for engineers. CircleCI is the best platform for software teams looking to rapidly build quality projects, at scale. Our intelligent continuous integration and delivery tools are simple yet powerful. Our aim is to provide the wisdom of a connected development ecosystem to every team member making technology decisions. We run 12M+ builds a month on our platform for companies like Spotify, Kickstarter, Sony, and Coinbase. Please mention that you come from Remotive when applying for this job. Help us maintain Remotive! If this link is broken, please just click to report dead link!
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258849.89/warc/CC-MAIN-20190526045109-20190526071109-00437.warc.gz
CC-MAIN-2019-22
2,845
15
http://lists-archives.com/debian-devel/224164-naming-of-network-devices-how-to-improve-it-in-buster.html
code
Re: Naming of network devices - how to improve it in buster - Date: Fri, 14 Jul 2017 11:20:00 -0300 - From: Henrique de Moraes Holschuh <hmh@xxxxxxxxxx> - Subject: Re: Naming of network devices - how to improve it in buster On Fri, 14 Jul 2017, Tom H wrote: > > I've never seen the kernel vary the order it enumerates a PCI bus. It doesn't, the last time it changed was on 2.4->2.6. OTOH, *driver probe* ordering can and does change, especially when device probes are being done in parallel. It is best to not get bus enumeration (and even device endpoint enumeration) confused with device *registration* by kernel drivers. > For PCI Express; for all I know, other technologies might enumerate > differently or change the enumeration method with different driver Most buses are stable as far as enumeration goes: we don't have HBAs and endpoints being renumbered across boots at all for PCI and PCIe and even But bus device enumeration is separate from device register ordering. > Per driver. There's no guarantee that the kernel will load the drivers > in the same order at boot. There was even a (specific) note in one of Indeed. Unless you add modules for which you care about the load order to /etc/modules. Those are staticaly loaded first (obeying dependencies by depmod, though) even by the initramfs. It has been supported for a decade or more. That *still* won't help for some drivers, where parallel *device* probes are done AND device answer speed mandates which one will register first. This does *not* apply to PCI/PCIe NICs handled by the same kernel driver, but it very likely applies to USB for example. > The classic naming scheme for network interfaces applied by the kernel > is to simply assign names beginning with "eth0", "eth1", ... to all > interfaces as they are probed by the drivers. As the driver probing is Unfortunately, this is incorrect. MOST PCI/PCIe NICs indeed use "ethX", etc. But the naming scheme really is device driver-specific, and the "default" name used by a driver is considered part of the kernel stable ABI, and cannot be changed on the kernel side unless it is done opt-in at kernel config time (kconfig) or at boot time (kernel command line, device tree, etc). That said, most consumer devices nowadays are handled by drivers that will use either ethX or wlanX by default. > generally not predictable for modern technology this means that as soon > as multiple network interfaces are available the assignment of the names > "eth0", "eth1" and so on is generally not fixed anymore and it might > very well happen that "eth0" on one boot ends up being "eth1" on the Correct, in the general case.
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891814393.4/warc/CC-MAIN-20180223035527-20180223055527-00625.warc.gz
CC-MAIN-2018-09
2,641
42
https://python.tutorialink.com/creating-overlapping-square-patches-for-rectangular-images/
code
Given be a rectangular image img and patch s. Now I would like to cover the whole image with square patches of side length s, so that every pixel in img is in at least one patch using the minimal number of patches. Furthermore I want neighboured patches to have as little overlap as possible. Thus far: I have included my code below and worked out an example. However it does not work yet perfectly. Hopefully, someone finds the error. Example: Given is img of shape: (4616, 3016) and s = 224 That means I will 21 patches on the longer side, 14 patches on the smaller the width, 21*14 = 294 patches in total. Now I try to figure out patches how to distribute the overlap between the patches. My patches can cover an image of size: (4704, 3136), thus my patches in the height have to cover 88 overlapping pixels missing_h = ht * s - h, width is analogous. Now I try to figure out, how to distribute 88 pixels on 21 patches. 88 = 4* 21 + 4 Thus I will have hso = 17 patches with overlap shso = 4 and hbo = 4 patches with overlap 5, width is analogous. Now I simply loop over the whole image and keep track on my current position (cur_h, cur_w). After each loop I adjust, cur_h, cur_w. I have s, my current patch number i, j, that indicates if the patch has a small or big overlap. import numpy as np def part2(img, s): h = len(img) w = len(img) ht = int(np.ceil(h / s)) wt = int(np.ceil(w / s)) missing_h = ht * s - h missing_w = wt * s - w hbo = missing_h % ht wbo = missing_w % wt hso = ht - hbo wso = wt - wbo shso = int(missing_h / ht) swso = int(missing_w / wt) patches = list() cur_h = 0 for i in range(ht): cur_w = 0 for j in range(wt): patches.append(img[cur_h:cur_h + s, cur_w: cur_w + s]) cur_w = cur_w + s if j < wbo: cur_w = cur_w - swso - 1 else: cur_w = cur_w - swso cur_h = cur_h + s if i < hbo: cur_h = cur_h - shso - 1 else: cur_h = cur_h - shso if cur_h != h or cur_w != w: print("expected (height, width)" + str((h, w)) + ", but got: " + str((cur_h, cur_w))) if wt*ht != len(patches): print("Expected number patches: " + str(wt*ht) + "but got: " + str(len(patches)) ) for patch in patches: if patch.shape != patch.shape or patch.shape != s: print("expected shape " + str((s, s)) + ", but got: " + str(patch.shape)) return patches def test1(): img = np.arange(0, 34 * 7).reshape((34, 7)) p = part2(img, 3) print("Test1 successful") def test2(): img = np.arange(0, 4616 * 3016).reshape((4616, 3016)) p = part2(img, 224) print("Test2 successful") test1() test2() Above problem can be fixed, making the following edits: hbo = missing_h % (ht-1) wbo = missing_w % (wt-1) shso = int(missing_h / (ht-1)) swso = int(missing_w / (wt-1))
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00671.warc.gz
CC-MAIN-2023-50
2,644
28
https://osec.pl/courses/ad184/
code
Helping Java SE developers write Java EE applications Red Hat Application Development I: Programming in Java EE with exam (AD184) exposes experienced Java Standard Edition (Java SE) developers to the world of Java Enterprise Edition (Java EE). This course is based on Red Hat® Enterprise Application Platform 7.0. This course is a combination of AD183 Red Hat Application Development I: Programming in Java EE and EX183 Red Hat Certified Enterprise Application Developer Exam. In this course, you will learn about the various specifications that make up Java EE. Through hands-on labs, you will transform a simple Java SE command line application into a multi-tiered enterprise application using various Java EE specifications, including Enterprise Java Beans, Java Persistence API, Java Messaging Service, JAX-RS for REST services, Contexts and Dependency Injection (CDI), and JAAS for securing the application. This course is intended to develop the skills needed to make the transition from Java SE programming to Java EE programming. This course introduces core concepts of multi-tiered Java Enterprise applications and gives you experience writing, deploying, and testing Java EE applications. You will use various tools from the Red Hat JBoss middleware portfolio, including JBoss Developer Studio, Maven, and the JBoss Enterprise Application Platform application server. As a result of attending this course, you should be able to describe most of the specifications in Java EE 7 and create a component with each specification. You will be able to convert a Java SE program into a multi-tiered Java EE application. You should be able to demonstrate these skills: This course is designed for Java developers who want to learn more about the specifications that comprise the world of Java Enterprise Edition (Java EE). Duration: 5 days Oferujemy szkolenia w naszych ośrodkach w Warszawie, Wrocławiu i Krakowie, jak i w lokalizacjach wskazanych przez klienta. W celu ustalenia szczegółów prosimy o kontakt na email@example.com For more details, please contact us at firstname.lastname@example.org
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224644867.89/warc/CC-MAIN-20230529141542-20230529171542-00755.warc.gz
CC-MAIN-2023-23
2,106
8
http://simpleminds.org.uk/spreeecommerce.com/storefront
code
Our software has been used by thousands of developers to build more than 45,000 storefronts worldwide. Spree is one of the largest open source software projects in the world -- not just ecommerce software, but out of all categories. It’s one of the most robust, refined technologies that you could ever hope to use. The Spree storefront offers a full feature set and is built on common standards, so you don't have to compromise speed to market, efficiency or innovation. The modular platform allows you to easily configure, supplement or replace any functionality you need, so that you can build the exact storefront that you want. Because of the modular architecture you can use features that come pre-integrated with the core, as well as best-in-class technologies for the optimal delivery of your business objectives. Flexible and Responsive Site Design Spree offers a responsive design out of the box for an optimal user experience across all devices. Use animation, live video or other innovative techniques to take your shoppers on an engaging tour of your product line. With the Spree Commerce storefront you have complete flexibility to create a unique and creative storefront that allows the user to interact with your products, not just view them. Spree Commerce offers a complete API for nearly every aspect of the system. It’s easy to create just about any experience that you want, in record time. When you use the Spree platform you own the code and can modify the software as you see fit. Because the platform is built on modern standards, there are no proprietary programming skills needed. The code is yours for as long as you need it. No strings attached. You have access to the complete source code to use where and how you like. Finding developers to work on the platform is easier because it's standards based and not proprietary. Spree Commerce is one of the most popular ecommerce platforms in the world. With more than 45,000 stores and growing, and an active community, you can trust our reliable technology for your ecommerce storefront. Our active community contributes features that are driven by the real world experience of the more than 45,000 stores using the platform around the world. Spree has been translated into more than 30 languages. "Spree is an incredibly bold and ambitious project. I love what these guys have done so far."
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474523.8/warc/CC-MAIN-20240224044749-20240224074749-00326.warc.gz
CC-MAIN-2024-10
2,372
19
https://reifer.com/agile-test-automation-may-2017/
code
On Agile Test Automation (May 2017) While everyone agrees that test automation is essential when using agile methods, not everyone agrees on the “why’s,” “what’s” and “how’s.” Although many license tools and/or develop a dashboard to perform the software testing job, many do not spend the time necessary to identify how they will use the results to improve their testing discipline. This represents what many in the industry call the metrics dilemma. Instead of using metrics to answer the important questions raised during testing, users take whatever information they are given and use it as best they can. This is where the “Goal-Question-Metric paradigm” comes into play. This standard is used for defining the test questions that will be answered by the metrics generated by the tool. For example, “Have all planned tests been run successfully?” To answer the question, the tool would develop a metric based on the score of the number of tests passed versus failed along with the total number of tests that were and were not run. In order to define more metrics like this with test automation in mind, we are developing a typical list of questions that agile users want answered relative to testing. As part of this list, which we plan to release later this year, we plan to develop and provide a set of metrics that can be used to assess whether the results of software development, regression, and acceptance testing have been satisfactorily performed. Of course, users of the list will need to fine-tune it based on their individual situations. For those interested, inputs are encouraged and solicited. You can send your inputs to us at email@example.com.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057564.48/warc/CC-MAIN-20210924171348-20210924201348-00680.warc.gz
CC-MAIN-2021-39
1,691
2
https://github.community/t/one-image-doesnt-show-up-in-github-page/201812
code
I’m trying to create my own website. For now, I’m working on the 404 page. (But I made the same thing in index.md and there is no problem…) However, i’d like to add on it an image with the following path Here is the repository if you’re interested. All topics (having the same kind of issue) in several website explain that this is a path issue. But I tried every single advice: - I thought it was a case sensitive issue but it seems that it’s not the case (the path seems to be correct/ I even tried to convert a .png into .png XD ) - I also tried any possible paths - the permalink ./assets/kurzgesagt.pngbut I don’t understand its meaning… - multiple paths using - My final attempt to patch the problem was to replace the problematic code with http markers… Can someone help me ? I’m thankful for any recommendation ! (modified message in order to be precise)
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585045.2/warc/CC-MAIN-20211016231019-20211017021019-00105.warc.gz
CC-MAIN-2021-43
882
17
https://docs.licensespring.com/license-entitlements/floating-licenses/cloud-concurrent-licensing
code
Cloud Concurrent Licensing With LicenseSpring, application developers can allocate licenses for simultaneous usage, wherein LicenseSpring functions as the floating license server. Configuring a cloud-based floating license involves adjusting several parameters: Within the process of setting up a new product, you'll encounter a parameter named "floating timeout." This value, measured in minutes, dictates how frequently the license application must communicate with the LicenseSpring server to maintain the license's active state. Should the application fail to establish contact with the server before the designated timeout duration elapses, the license will be automatically freed from that particular application. If you wish to modify the Floating timeout duration for an already-configured product, follow these steps: Access the product > Choose "Edit Product" > Navigate to "License Configuration." Note: The floating timeout period is defined at the product level, but you can overide it per license policy or on the license itself. During the setup of a new product license policy, if the chosen license type is a floating cloud license, you will specify the quantity of devices that can simultaneously utilize the license. Additionally, it's possible to designate the number of concurrent users at the time of license issuance, and this allocation can be modified subsequently, even after the license has already been granted. Note: Max simultaneous users is not the same as max activations. Simultaneous users are the total number of machines that can concurrently use a license, while max activations are the number of machines that are node locked to the license. A license must have been activated on a device before the device can check it out. Simultaneous users should therefore be less than or equal to max activations, if max activations are not set to unlimited. The usage of these licenses is similar to that of normal licenses. You utilize activate_license, check_license and other endpoints as usual. The SDK handles the other aspects such as checking for concurrent usage. After a license is activated, it needs to be checked to occupy a floating slot. A call should be made to /api/v4/floating/release to gracefully release the license floating slot. This call will avoid waiting for the timeout to release the license. See Releasing a Floating License for more information on how to release a floating license via our API.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510358.68/warc/CC-MAIN-20230928031105-20230928061105-00866.warc.gz
CC-MAIN-2023-40
2,451
17
https://github.com/angular/angular/commit/d83599d
code
Please sign in to comment. fix(ivy): inheriting injectable definition from undecorated class not… … working on IE10 in JIT mode (#34305) The way definitions are added in JIT mode is through `Object.defineProperty`, but the problem is that in IE10 properties defined through `defineProperty` won't be inherited which means that inheriting injectable definitions no longer works. These changes add a workaround only for JIT mode where we define a fallback method for retrieving the definition. This isn't ideal, but it should only be required until v10 where we'll no longer support inheriting injectable definitions from undecorated classes. PR Close #34305 - Loading branch information Showing with 26 additions and 2 deletions.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250606226.29/warc/CC-MAIN-20200121222429-20200122011429-00201.warc.gz
CC-MAIN-2020-05
732
5
https://forums.fast.ai/t/error-in-earlystoppingcallback-code-and-documentation/53549
code
Not entirely sure what’s going on with this but there appears to be a bug in the EarlyStoppingCallback code as shown here: https://docs.fast.ai/callbacks.tracker.html#EarlyStoppingCallback The error output in the documentation is the same error you get running the code in a notebook so I’m assuming the error has been included in the documentation when it was generated automatically? The error seems to disappear when the metric is switched from ‘accuracy’ to ‘mean_squared_error’, though I have not tested for other metrics. Also after changing the metric, the callback does not trigger after the patience period even when the condition has been met (i.e. doing 10 more update steps with no improvement when patience is set to 5) Something worth noting is that I am using Windows which I realize is unsupported. However as this error appears in the official documentation I expect the error runs slightly deeper than Windows compatibility. I can try to put together a minimum reproducible example of the behavior error (not stopping) that does not use confidential data when I’m free later. I will of course try to reproduce this on Linux as I’m sure this is more helpful to the dev team. I don’t have time right now to dig into the code for the root cause, but I am happy to do this if it hasn’t been addressed in a day or so.
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711151.22/warc/CC-MAIN-20221207085208-20221207115208-00409.warc.gz
CC-MAIN-2022-49
1,350
8
https://community.yellowfinbi.com/topic/27817-extend-bigquery-data-source-parameters
code
Extend BigQuery Data Source Parameters In order to get our BigQuery data source to work, we needed to use the Generic JDBC connection, since the parameter options in the BigQuery data source are very limited. We needed to add up two: - EnableSession → EnableSession=1 - Location → Location=europe-west1 At least the location is a must to support non-US based BigQuery data sets. It would be great to get at least those two parameters into the BigQuery Data Source UI dialog. Maybe it would be good to add an additional URL-parameter field to all Data Sources where one can add any JDBC URL parameter in a free text format?
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100399.81/warc/CC-MAIN-20231202105028-20231202135028-00198.warc.gz
CC-MAIN-2023-50
626
7
https://www.rapid7.com/blog/post/2015/04/03/dataops-how-pluralsight-uses-tableau-logentries-for-better-analytics/
code
Last updated at Sun, 05 Nov 2017 16:57:48 GMT *Mike Roberts is a Logentries customer and Director of Data Analytics at Pluralsight. * A truly stable and robust Analytics platform is able to support both the analysis of external data *as well as *internal data, or data about its own state. Tableau Software is one such analytics platform and Logentries is the ‘listener’ that makes analyzing Tableau’s system data easier. As a result, our Pluralsight team is better able to understand the finer points of an analytics infrastructure, or what we call DataOps, and release beautiful and functional dashboards and reports for our customers. There’s enormous value in collecting logs for Tableau Server (and its Desktop product). Much of which would go unnoticed if not for tools like Logentries. For example, we’re able to analyze, in real time, Apache logs to see which views are not loading and, most important, which view (or views) are being downloaded the most via our custom ‘csv’ tag (one of many that we use to parse the logs). This functionality so far has been instrumental in assessing the usefulness of many reports and dashboards. On the other hand, we’re alerted when we receive an http status code that is not 200 or when a query takes longer than a set number of seconds. The above line chart demonstrates the tags trending over time. The real feedback loop comes from when we leverage the Logentries open API to grab the filtered data, export to csv and then import into Tableau for deeper analysis and dashboarding. One potential long-term benefit of this historical analysis is the ability to be flexible in how we scope our Tableau infrastructure or, simply, to scale linearly with additional processes and resources. Again, we stress the importance of creating material that is both functional and beautiful; our Pluralsight customers, who enjoy the reports, also want to understand what can be improved with the report’s design. One common request is often how fast a dashboard loads on Tableau Server. And with Logentries ability to parse a JSON payload, and the Open API, we can (with a simple log search function) find out how long each dashboard might take to load and what view in the dashboard might be taking the longest amount of time. The above bar graph was generated with Logentries and shows what views on Tableau are taking the longest to load. This, again, is invaluable information and is generated live. We’re usually able to reach out to our customers about this before they reach out to us (if they do). The last thing we want our customers to expect is a slow system and, dreadfully, not have any reason to explain it. For Tableau Server users, understanding the product and its many bits is truly a DataOps-specific task. As data volume continues to grow and become interdependent, operationalizing and analyzing this is paramount to the success of our team (or any company). And the log data, while typically ignored, provides an enormous benefit to us as we continue to analyze the mounds of data. In the end, we’re just connecting the dots from end to end.
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943484.34/warc/CC-MAIN-20230320144934-20230320174934-00741.warc.gz
CC-MAIN-2023-14
3,120
8
https://www.askqna360.com/blog/what-is-pandas-in-python
code
Pandas is a high-level data manipulation tool. In other words pandas is a software library written for the Python language for data manipulation and data analysis. It is built on the Numpy package. And the key Data Structure is DataFrame. In Python there are many tools for fast data processing, such as Numpy, Scipy, Cython, and Pandas. But we recommend Pandas because working with Pandas is fast, simple and more expressive than other tools.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057857.27/warc/CC-MAIN-20210926083818-20210926113818-00582.warc.gz
CC-MAIN-2021-39
443
2
http://friendsinbusiness.blogspot.com/2005/02/bloggers-got-feedburner.html
code
Bloggers... Got FeedBurner? FeedBurner handles all that for you, and they even give you this spiffy button... Oh, and while you're thinking along these lines... I'm sure you already know about Pingomatic, yes? OH WOW!! Will miracles never cease? I was feeling testy because Pingomatic is called PingoMATIC and it isn't automatic at all... you have to go there yourself and ping. But guess what? It had remembered all my information, and all I had to do was click "SUBMIT" (it's pretty hard to complain about that) -- PLUS, one of the places it pings is FeedBurner, so unless I'm missing something, you don't have to go to FeedBurner every time you make a change (I could be wrong about that). Eee gads... SO much to learn!! P.S. Just dropping by a couple of hours later to tell you I pinged through Pingomatic and it did not update my FeedBurner; I had to go there and update manually, but again, only about three clicks and no big deal. (Just wanted to let you know so you don't think you're pinged when you aren't. Best not to take my word for anything technical!) P.S. Again: Well, here it is a week later, and I'm dropping by to say, "I told you not to listen to me about anything technical!!" I have since learned that you do not need to PING at all! Pingomatic says, "Hey, you! There's been a change! Spider me now!" Each place does it on their own schedule, and Pingomatic alerts them, but if you do nothing, you will still get updated. (In fact, all that pinging apparently uses up bandwidth, which I gather is bad for us all.) And that's all I know.
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188553.49/warc/CC-MAIN-20170322212948-00127-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,558
9
https://kmkstaralliance.com/measuring-the-effectiveness-of-field-reimbursement-teams/
code
A large biotech company had invested significant resources in reimbursement support, including Field Reimbursement Managers (FRMs). The organization was frustrated by the lack of quantitative metrics to evaluate and assess the effectiveness of the FRMs and asked BCA to develop an approach to address this gap. BCA designed and developed a composite measure, the Reimbursement Confidence Index (RCI) to meet our client’s need. The core elements of the measure are as: - Bases on physician/practice perceptions of reimbursement and how these influence their choice/use of products - RCI was designed, tested, and validated via two customer research waves to first establish the baseline and second to assess change over time - The RCI is tracked over time (six-month intervals) for each sales territory and each therapeutic area - In addition to capturing the RCI score, the customer assessment provides insights that assist the FRMs in how to better approach and serve their customer tragets
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943809.22/warc/CC-MAIN-20230322082826-20230322112826-00064.warc.gz
CC-MAIN-2023-14
993
6
https://www.experts-exchange.com/questions/24138584/WDS-not-working-with-boot-wim-for-Proliant-DL380-G5-servers-WinPE-2-0.html
code
I'm trying to PXE boot a HP Proliant DL380 G5 with HP Broadcom NICs to a newly built Windows 2008 WDS server. I've added the images (boot and OS) into the WDS server but when I PXE boot the target Proliant, I get a message telling me the WinPE images I'm using (the boot.wim from the Windows 2008 OS DVD) doesn't containg the correct NIC drivers. I've viewed http://apcmag.com/how_to_inject_drivers_into_microsofts_free_os_windows_pe_20.htm but have read many posts where others have done this and have still had no postive results. Could anyone advise a detailed guide as to how to remove the MS NIC drivers from the boot.wim (many others mention this but withhold the method of how they did it) and to add the Q57 Broadcom NIC drivers required for the Proliant servers?
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347398233.32/warc/CC-MAIN-20200528061845-20200528091845-00019.warc.gz
CC-MAIN-2020-24
771
4
https://community.f5.com/t5/technical-articles/data-center-feng-shui-normalizing-phased-deployment-with/ta-p/281963
code
Normalizing deployment environments from dev through production can eliminate issues earlier in the application lifecycle, speed time to market, and gives devops the means by which their emerging discipline can mature with less risk. One of the big “trends” in cloud computing is to use a public cloud as an alternative environment for development and test. On the surface, this makes sense and is certainly a cost effective means of managing the highly variable environment that is development. But unless you can actually duplicate the production environment in a public cloud, the benefits might be offset by the challenges of moving through the rest of the application lifecycle. One of the reasons developers don’t have an exact duplicate of the production environment is cost. Configuration aside, the cost of the hardware and software duplication across a phased deployment environment is simply too high for most organizations. Thus, developers are essentially creating applications in a vacuum. This means as they move through the application deployment phases they are constantly barraged with new and, shall we say, interesting situations caused or exposed by differences in the network and application delivery network. Example: One of the most common problems that occurs when moving an application into a scalable production environment revolves around persistence (stickiness). Developers, not having the benefit of testing their creation in a load balanced environment, may not be aware of the impact of a Load balancer on maintaining the session state of their application. A load balancer, unless specifically instructed to do so, does not care about session state. This is also true, in case you were thinking of avoiding this by going “public” cloud, in a public cloud. It’s strictly a configuration thing, but it’s a thing that is often overlooked. This causes problems when developers or customers start testing the application and discover it’s acting “wonky”. Depending on the configuration of the load balancer, this wonkiness (yes, that is a technical term, thank you very much) can manifest in myriad ways and it can take precious time to pinpoint the problem and implement the proper solution. The solution should be trivial (persistence/sticky sessions based on a session id that should be automatically generated and inserted into the HTTP headers by the application server platform) but may not be. In the event of the latter it may take time to find the right unique key upon which to persist sessions and in some few cases may require a return to development to modify the application appropriately. This is all lost time and, because of the way in which IT works, lost money. It’s also possibly lost opportunity and mindshare if the application is part of an organization’s competitive advantage. Now, assume that the developer had a mirror image of the production environment. S/He could be developing in the target environment from the start. These little production deployment “gotchas” that can creep up will be discovered early on as the application is being tested for accuracy of execution, and thus time lost to troubleshooting and testing in production is effectively offset by what is a more agile methodology. Additionally developers can begin to experiment with other infrastructure services that may be available but were heretofore unknown (and therefore untrusted). If a developer can interact with infrastructure services in development, testing and playing with the services to determine which ones are beneficial and which ones may not, they can develop a more holistic approach to application delivery and control the way in which the network interacts with their application. That’s a boon for the operations and network teams, too, as they are usually unfamiliar with the application and must take time to learn its nuances and quirks and adjust/fine-tune the network and application delivery network to meet the needs of the application. If the developer has already performed these tasks, the only thing left for the ops and network teams is to implement and verify the configuration. If the two networks – production and virtual production – are in synch this should eliminate the additional time necessary and make the deployment phase of the application lifecycle less painful. If not developers, ops, or network teams, then devops can certainly benefit from a “dev” environment themselves in which they can hone their skills and develop the emerging discipline that is devops. Devops requires integration and development of automation systems that include infrastructure which means devops will need the means to develop those systems, scripts, and applications used to integrate infrastructure into the operational management in production environments. Like developers, this is an iterative and ongoing process that probably shouldn’t use production as an experimental environment. Thus, devops, too, will increasingly find a phased and normalized (commoditized) deployment approach a benefit to developing their libraries and skills. This assumes the use of virtual network appliances (VNA) in the development environment. Unfortunately the vast majority of hardware-only solutions are not available as VNAs today which makes a perfect mirrored copy of production at this time unrealistic. But for those pieces of the infrastructure that are available as a VNA, it should be an option to deploy them as a copy of production as the means to enable developers to better understand the relationship between their application and the infrastructure required to deliver and secure it. Infrastructure services that most directly impact the application – load balancers, caches, application acceleration, and web application firewall – should be mirrored into development for use by developers as often as possible because it is most likely that they will be the cause of some production-level error or behavioral quirk that needs to be addressed. The bad news is that if there are few VNAs with which to mirror the production environment there are even fewer that can be/are available in a public cloud environment. That means that the cost-savings associated with developing “in the cloud” may be offset by the continuation of a decades old practice which results in little more than a game of “throw the application over the network wall.” from tag hardware
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663016853.88/warc/CC-MAIN-20220528123744-20220528153744-00696.warc.gz
CC-MAIN-2022-21
6,469
12
https://bbpress.org/forums/profile/txmom/replies/
code
Forum Replies Created Working now. As this was a new data base with no content yet, I dropped the database, made a new one, installed wordpress, then bbpress. All seems to be working correctly. I must have put wp_ in step 1 the first time instead of leaving it at bb_. No problems with logging in like I had when I tried the upgrade. Thanks. Posts work now. No longer have the admin option, I went to clicked upgrade, posts in forum now work I no longer have admin privileges Listed as member Thanks, getting closer. So when integrating with wordpress we use bb, not wp. I also lost admin priv. in wordpress I get: You do not have sufficient permissions to access this page. Doesn’t even give me choice to log in. I had no problem logging in as admin either place before changing the tables, etc I did use the wordpress secret key phrase. Used 0.9.0.1 downloaded fresh copy yesterday I need to find a way to fix the errors, not so minor, getting data base errors for posts. bbPress database error: [Unknown column ‘post_id’ in ‘where clause’] SELECT * FROM wp_posts WHERE post_id = 0 Thanks, seems I had a bad copy, downloaded and installed a fresh copy. Seems to be working with wordpress, I was already logged into wordpress and I’m the forums shows me that I’m logged in. Now to test and customize. It did show minor errors: Incorrect table definition; there can be only one auto column and it must be defined as a key Key column ‘post_id’ doesn’t exist in table Duplicate key name ‘user_nicename’ >>> User tables will already exist when performing a database integrated installation. Anything I should be concerned about? I have a new wordpress site which is the new 2.5. I’ve uploaded the new bbpress but when I go to install by visiting the url I get redirected to xxx/bb-admin/install.php/ which comes up as page not found (which ends up at the wordpress 2.5 home page). I notice that install.php is now in the plug-in folder so tried Warning: main(../bb-load.php): failed to open stream: No such file or directory Fatal error: main(): Failed opening required ‘../bb-load.php’ (include_path=’.:/usr/local/PEAR’) both on line 9 Suggestions, some little thing I’m doing wrong?
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474650.85/warc/CC-MAIN-20240226030734-20240226060734-00760.warc.gz
CC-MAIN-2024-10
2,217
30
http://www.sitepoint.com/forums/showthread.php?203445-blurred-css-dashed-vertical-lines&p=1455733
code
Results 1 to 2 of 2 Oct 19, 2004, 00:02 #1 blurred css dashed vertical lines I searched on the forums for this problem and couldn't find anything similar, so here it goes. Whenever I create a dashed (1 px) verticle line in CSS and view it in IE6, and scroll, the line gets all blurred and messed up. Does anyone know about this and if there is a way to fix this? ps - you can view an example here http://pwp.netcabo.pt/cgaspar/dashed...d-example.html just scroll up and down and you'll see the problem. i just noticed that when i put a 2 pixel dashed border, there isn't a problem... but i wanted to use a 1 pixel, so if anyone has a tip or a solution, please let me know!oi! Oct 19, 2004, 00:05 #2 - Join Date - Jan 2004 - Melbourne, Australia - 1 Post(s) - 1 Thread(s) Yeah, that is a bug in IE. The only fix I know of is making a tiled image background. But that's not always possible.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122159.33/warc/CC-MAIN-20170423031202-00474-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
888
17