url
stringlengths
13
4.35k
tag
stringclasses
1 value
text
stringlengths
109
628k
file_path
stringlengths
109
155
dump
stringclasses
96 values
file_size_in_byte
int64
112
630k
line_count
int64
1
3.76k
https://aiapipro.com/academy/ai-glossary/work-organisation/
code
“Work Organisation” in AI context refers to structuring, planning, and optimizing of work processes involved in creating, training, deploying, and managing AI systems. This includes decisions about team structures, workflows, software architectures, and resource management to ensure effective operations of AI initiatives. Imagine you want to build a big snowman. You need to gather snow, make it into balls, stack the balls, add the details, and then maintain the snowman. All these tasks and how you organize them – deciding who does what and when, which tools to use, or how to fix the snowman when it starts to melt – that’s similar to “Work Organisation” in the AI world. “Work Organisation” in the AI field is about arranging and controlling the various tasks and processes involved in building and maintaining AI or ML systems. The goal is to better align these tasks with the project’s objectives and team’s resources, and to prioritize efficiency and effectiveness. Teams building AI systems are often cross-functional, including data scientists, data engineers, machine learning engineers, software developers, and non-technical roles like project managers. A good work organisation structure ensures collaboration between these roles, clearly defining responsibilities and reporting lines. Workflows are the sequential pathways of tasks that the team follows. In AI, common workflows include data collection and preprocessing, model training and testing, system integration, deployment, and maintenance. Organizing these workflows in the right order and ensuring smooth transitions between tasks is a key part of work organisation. Software architecture is another crucial aspect. Good software architecture ensures that all parts of an AI system, from its data pipelines to its prediction servers, work efficiently together and are scalable. 
Decisions about what frameworks to use, how to handle storage and computation, or how to structure the codebase all fall under work organisation. Resource management refers to how the team uses and allocates its resources, like data, computational power, or team members’ time, to build and maintain AI systems. Good work organisation avoids overloading resources and ensures that resources are used where they add the most value to the project. In essence, “Work Organisation” in AI involves structuring and coordinating the many tasks, roles, resources, and workflows involved in AI projects to ensure efficient and effective development and maintenance of AI systems. “Cross-Functional Team”, “Workflows”, “Software Architecture”, “Resource Management”, “Collaboration”, “Efficiency”, “Data Preprocessing”, “Model Training”, “Deployment”, “Maintenance”
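The workflow sequencing the glossary describes (data collection, preprocessing, model training, deployment, each stage feeding the next) can be illustrated with a minimal sketch. The stage names and the runner below are invented for illustration only, not part of any specific framework.

```python
# Minimal sketch of ordering AI workflow stages; stage names are illustrative.
def collect(state):    return {**state, "data": "raw"}
def preprocess(state): return {**state, "data": "clean"}
def train(state):      return {**state, "model": f"fit on {state['data']} data"}
def deploy(state):     return {**state, "deployed": True}

PIPELINE = [collect, preprocess, train, deploy]  # order matters

state = {}
for stage in PIPELINE:
    state = stage(state)  # each stage consumes the previous stage's output
print(state["model"], "| deployed:", state["deployed"])
# fit on clean data | deployed: True
```

The point of the sketch is the ordering: swapping `train` ahead of `preprocess` would fit the model on raw data, which is exactly the kind of mistake that explicit workflow organisation prevents.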
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00865.warc.gz
CC-MAIN-2024-10
2,778
9
https://thehuntsman.com.au/night-tech-xd-65-pro-ii-thermal-monocular-review-unboxing-demonstration/
code
Over the last couple of months I’ve been having heaps of fun with the Night Tech XD 65 Pro II Thermal Monocular and now it’s time to wrap up all my thoughts in a review. In this video I unbox the XD 65 Pro II Thermal Monocular, provide a demonstration of its key features, walk you through the interesting features in the menu, give my constructive feedback on the features I think can be improved, and showcase some of the footage I’ve collected these last two months. If you’re interested in seeing the thermal handheld camera in action you can also check out the videos linked below. State Park 30 minute thermal challenge! https://youtu.be/dVdG9s8ru9c 04:12 Feature demonstration 06:31 Constructive feedback and issues 09:18 Wrap up DISCLAIMER: A Night Tech XD 65 Pro II thermal monocular was provided to The Australian Huntsman by Ground Force International to help facilitate this content. The comments, opinions, thoughts and reflections on this device in this video are my own, authentic and unaltered, and have not been influenced, pressured or altered by others in any way. I understand and acknowledge the responsibility I have to be honest online in light of the weight some people put on my opinions and thoughts. For that reason I will continue to be honest when it comes to products that have been provided to me by others. If a product is good, I’ll tell you why it’s good. If a product has issues I’ll tell you what those issues are. If a product is rubbish I’ll tell you that too. Simple. 
#HuntingAustralia #ThermalHunting #ThermalMonocular Thermal Hunting, thermal monocular review, thermal monocular for hunting, Night Tech, xd65 pro ii, XD 65 Pro II, XD 65 Pro 2, night tech thermal monocular, thermal monocular, thermal monocular hunting, thermal imaging monocular hunting, handheld thermal monocular, handheld thermal camera, thermal camera, thermal camera review, thermal unboxing, thermal footage, thermal hunting footage #Night #Tech #Pro #Thermal #Monocular #Review #Unboxing #Demonstration
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817699.6/warc/CC-MAIN-20240421005612-20240421035612-00896.warc.gz
CC-MAIN-2024-18
2,045
11
http://www.netbuilders.org/monetizing/offline-cpa-marketing-26454.html?mode=threaded
code
10 September, 2011, 12:31 PM Offline CPA Marketing Has anyone had success in using fliers (leaflets) to advertise CPA offers? I think that it goes something like this: you sign up to promote a free credit report offer, your flier advertises a local business that requires a credit report such as a car dealer offering cars on repayment plans (loans/hire purchase), but to save the fee for the credit report, you direct your leads to get a free one from your CPA offer (so you can get amazing conversions). I'm not sure if I got this correct, but anyway, there seem to be some flaws in this business model now, such as: the FTC clamping down on misleading "free" offers, and the offer clearly stating that it is not intending to offer a free service, e.g. a credit card to be billed later. I believe that it's becoming harder to make easy money like this, and that you should beware of any gurus peddling e-books about this method that may no longer work or be legal.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398445033.85/warc/CC-MAIN-20151124205405-00007-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
963
7
https://lists.freebsd.org/pipermail/freebsd-current/2004-June/030012.html
code
Multiple choices for RAID-0, best performance? paul at gromit.dlib.vt.edu Fri Jun 25 09:26:36 PDT 2004 On Fri, 25 Jun 2004 10:23:37 +0200, Søren Schmidt<sos at DeepCore.dk> wrote: > Brad Knowles wrote: > > My understanding is that regular vinum won't work on 5.x, you need > > geom-vinum for that. Recent postings from Søren have led me to believe > > that none of the ataraid stuff works with 5.x, either -- you can use > > them as plain ATA controllers, but the RAID stuff has not yet been > > worked out. > Wrong, ataraid works fine on 5.x. Ataraid has never worked entirely for me under 5.x. Specifically, I have *never* been able to get "atacontrol rebuild" to kick off reconstruction after replacing a drive in a DEGRADED array. This has been on an assortment of standard ATA controllers. The most recent on which I attempted it is this one: atapci0: <Intel PIIX4 UDMA33 controller> port 0xf000-0xf00f,0x376,0x170-0x177,0x3f6,0x1f0-0x1f7 at device 7.1 on pci0 atapci0 at pci0:7:1: class=0x010180 card=0x00000000 chip=0x71118086 rev=0x01 hdr=0x00 vendor = 'Intel Corporation' device = '82371AB/EB/MB PIIX4/4E/4M IDE Controller' class = mass storage subclass = ATA Naturally, not being able to rebuild an array in the event of drive failures makes ataraid useless for my purposes, so I gave up on it. > However there are some ATA software RAID setups that I haven't done the > metadata code for yet, but they'll get there eventually time permitting. > You can always use atacontrol to setup a RAID on *any* ATA > disk/controller setup, however you will need a BIOS with a known > metadata layout (Promise/Highpoint) to be able to boot from it (see man > atacontrol(1) for details). You should, then, clarify the man page for atacontrol, as it appears to conflict with the above: create Create a type ATA RAID. The type can be RAID0 (stripe), RAID1 (mirror), RAID0+1 or SPAN (JBOD). In case the RAID has a RAID0 component, the interleave must be specified in number of sectors. 
The RAID will be created of the individual disks named disk0 ... diskN. Although the ATA driver allows for creating an ATA RAID on disks with any controller, there are restrictions. It is only possible to boot on an array if it is either located on a ``real'' ATA RAID controller like the Promise or Highpoint controllers, or if the RAID declared is of RAID1 or SPAN type; in case of a SPAN, the partition to boot must reside on the first disk in the In other words, the man page says that it is possible to boot without the need for a RAID BIOS (known or otherwise) so long as the ATA RAID is of type RAID1 or SPAN (with further restrictions). Whilst on the subject of clarifying man pages, is the following entry in "man atacontrol" literally correct: rebuild Rebuild a RAID1 array on a RAID capable ATA controller. Specifically, does it mean that if you don't have a RAID-capable ATA controller (i.e., you have only a standard ATA controller) then "atacontrol rebuild" will not work? (If that is the case, it is strange to allow certain ataraid arrays to be created and booted on non-RAID controllers but not rebuilt.) e-mail: paul at gromit.dlib.vt.edu "Without music to decorate it, time is just a bunch of boring production deadlines or dates by which bills must be paid." --- Frank Vincent Zappa More information about the freebsd-current
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250619323.41/warc/CC-MAIN-20200124100832-20200124125832-00552.warc.gz
CC-MAIN-2020-05
3,329
59
https://www.icofluid.com/project/2274-content-neutrality-network
code
Content Neutrality Network Content Neutrality Network (CNN) is an innovative content ecosystem based on Blockchain technology Content Neutrality Network (CNN) Platform introduces several mechanisms/protocols related to content creation, distribution, circulation and revenue share. CNN Platform combines personalized recommendation with the community votes to distribute the most relevant content to each user. CCM (Content Circulation Mechanism) is designed to stimulate seamless circulation of content among different communities. from 11/13/18 to 11/13/18 from 02/05/18 to 02/11/18 Token Symbol: cnn Total Supply of Tokens: 99,999,999,999
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247484689.3/warc/CC-MAIN-20190218053920-20190218075920-00278.warc.gz
CC-MAIN-2019-09
641
7
https://community.smartthings.com/t/google-assistant-relay-install-does-not-match-latest-verion/160294
code
I am using the procedure located at the following URL, as well as going through his YouTube video: I have run into issues today, 4/7/19 setting up Google Assistant Relay and hoping someone can help: - On the Google end, when I create a new project and set the device type, there is no longer an Auto selection. Everything is a specific device with predefined actions. Does this matter? Can I select anything here? I selected Camera so I could keep going, and Skip when asked which functions to select. - When I run npm run build-config there is only one option: Change Port. Add User does not come up. I am not sure if it is pulling it in from my client_secret…json file or what. Any suggestions? I kept going, and when I run npm run start I get a ton of errors, so clearly the above items are an issue, as everything else went as expected.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662570051.62/warc/CC-MAIN-20220524075341-20220524105341-00169.warc.gz
CC-MAIN-2022-21
842
5
https://community.intel.com/t5/Graphics/Hardware-acceleration-is-unsupported-or-has-been-disabled-on/m-p/673074
code
Ok, we know you run SketchUp, but nothing else. Download, run, and ATTACH (using the paperclip under the toolbar) the results of this utility: You have an i5-2410m 2nd gen processor. This processor, and its graphics, are not supported on Windows 10. There is no driver except for the Microsoft generic driver, which lacks performance and features. You need to get a new or newer laptop that has graphics with the features you need supported by Windows 10.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817650.14/warc/CC-MAIN-20240420122043-20240420152043-00339.warc.gz
CC-MAIN-2024-18
456
4
https://discourse.odriverobotics.com/t/homing-startup-sequence-one-by-one/6631
code
I was able to get the homing sequence of both motors to initiate at startup. I was wondering if it is possible to let the ODrive start with one axis and then the other? The robot the ODrive will be powering is in a SCARA configuration, so the position of the first motor will affect the position of the second motor. A startup delay has been suggested before, however as of yet I don’t think it has been implemented. Thank you, I will set the homing speed of the 2nd motor slower. This might be a good enough workaround for now. You can also run motor calibration on the second one, and skip it on the first.
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662561747.42/warc/CC-MAIN-20220523194013-20220523224013-00467.warc.gz
CC-MAIN-2022-21
608
6
http://www.ascendleadership.org/page/stutemple/Ascend-Temple-University-Chapter.htm
code
We are a student professional organization that strives to provide our members with academic and professional career-enhancing opportunities. As one of the youngest and largest Ascend student chapters in the Northeast, our 100+ members are actively involved in events ranging from weekly Friday meetings with companies and firms to formal and informal mentoring sessions, diversity luncheons, community service events, and networking events with our Greater Philadelphia Chapter of professionals. Our guest speakers include professionals from the Big Four, Comcast, Cigna, Unisys, PNC, Vanguard, IRS, and more! This year we have a lot of great new things in store for our members such as scholarship and award opportunities. Join Us! Weekly Meetings: Fridays 12 P.M. - 12:50 P.M. Alter Hall 032 (unless otherwise announced) 2012-2013 Leadership Team - President: Jinyoung Gabe Park | firstname.lastname@example.org - Vice President: Xiao Layla Yang | email@example.com - Treasurer: Helen Liang | firstname.lastname@example.org - Secretary: Igor Messano | email@example.com - Internal Affairs Coordinator: Jaisohn Nam | firstname.lastname@example.org - Internal Affairs Coordinator: Thomas Huang | email@example.com - Ascend Ambassador: Sujin Kim | firstname.lastname@example.org - Events Coordinator: Jinho Jace Park | email@example.com - Events Coordinator: Shaohui Vivi Ren | firstname.lastname@example.org - Fundraising Chair: Clara Wong | email@example.com - Fundraising Chair: Xiangxi Terry Song | firstname.lastname@example.org - Marketing Coordinator: Anna Choe | email@example.com - Director of Information Technology: Andrew Dao | firstname.lastname@example.org - Alumni Chair: Xiufen Wang | email@example.com Congratulations to our President, Gabe Park (on the left), for winning the National Scholarship! Congratulations and thank you for your support!
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125654.80/warc/CC-MAIN-20170423031205-00464-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,862
20
https://coderanch.com/t/130504/engineering/Joe-Marasco
code
Originally posted by Joe Marasco: Cool! What year? I'm BChemE class of '66. Joe, I'm a bit later... ... BChe '87. For everyone else who's wondering: This is not a small thing. It is very rare that I find a fellow CU alumnus who also majored in Chemical Engineering. In my graduating class, we had a total of 7 BChe graduates !!
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573070.36/warc/CC-MAIN-20190917101137-20190917123137-00111.warc.gz
CC-MAIN-2019-39
327
3
https://lists.debian.org/debian-user/2005/01/msg01749.html
code
Re: Windows Key Mapping Is there any way to determine what kind of keyboard I have (via software detection)? Currently, I have pc104 as the main selection but I am starting to doubt myself. Unless the "recent upgrade" above also included replacing the keyboard I wouldn't worry about it. I have a laptop with 80-something keys and pc104 specified in /etc/X11/XF86Config and every key does what it's supposed to do. If you're interested you can take a look at a gui-style keyboard reconfiguration utility called xkeycaps. It has lots of different keyboards and shows the keycodes, keysyms.. when you press a key on your keyboard (highlighted on the display..). Be careful though, misuse and a bit of luck could probably render your keyboard unusable. :-) Furthermore, what is the system file that controls the keys? Not sure what gnome does. Probably just inherits the X config and that can be listed via a 'xmodmap -pk | less'. As to X's keyboard config files I have no idea. In any event I've never had to change anything in config files. As far as I know the "normal" way to modify your keyboard's config under X is via xmodmap (enabling Windows keys.. switching CapsLock and Ctrl-L..). My guess is that you would only want to change them if you needed something really different that's not covered by X's default configs - ie. exotic keyboard/language (?).
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376824822.41/warc/CC-MAIN-20181213123823-20181213145323-00214.warc.gz
CC-MAIN-2018-51
1,349
21
https://lists.fedoraproject.org/pipermail/devel/2010-November/145847.html
code
Fixing the glibc adobe flash incompatibility jkeating at redhat.com Fri Nov 19 05:12:51 UTC 2010 On 11/18/10 10:58 AM, Doug Ledford wrote: > ----- "Richard W.M. Jones" <rjones at redhat.com> wrote: >>> On Wed, Nov 17, 2010 at 10:29:56PM -0500, Gregory Maxwell wrote: >>>>> Most code is not performance critical. >>> Much more code than you think is performance critical, >>> particularly when I can throw up 1000 instances of it in the >>> /me considers making snide comment about Python and how many >>> extra power stations we've built because of it ... > Allow me: if we all turned off python apps for one hour, we would > save enough electricity to power the entire world for a year. That electricity would be eaten up on developer workstations for the increased code development, compile, and debug time it would take to write the same tools in other languages.... Fedora -- Freedom² is a feature! More information about the devel
s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991870.70/warc/CC-MAIN-20210517211550-20210518001550-00570.warc.gz
CC-MAIN-2021-21
936
18
http://www.conceptart.org/forums/showthread.php/121478-Sketchy-sketchitty-sketches
code
Hi, I'm new here! My brother (pantless_wanderer) introduced me to this site, and since I love to draw, here I am. I might be only 13 years old, but I aim to be a professional artist some day, so don't forget to crit my work if you stop by. My "art" is coming in the next post. The picture quality isn't the best (cellphone camera)... sorry for that.
s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398456289.53/warc/CC-MAIN-20151124205416-00319-ip-10-71-132-137.ec2.internal.warc.gz
CC-MAIN-2015-48
348
4
https://www.computing.net/answers/office/excel-world-cup-predictor/11158.html
code
|Since none of us can see your spreadsheet from where we're sitting, it's kind of hard to know what you are looking for, other than "i want the score of the person to appear on the top of the spreadsheet"| As an example for Person 1, you could... 1 - Merge cells A1:D1 and put in their name. 2 - Merge cells A2:D2 and put in this formula: 3 - In Row 3, Columns A - D, put a label for each of your "criteria". 5 - In Rows 5 - 7, put an X under the criteria that fits the "prediction" that person made and the score will appear in the merged cell. As you add X's in any of that person's columns, the formula will COUNT them and then SUM the "count times the multiplier" for each of the 4 columns. In other words, 2 X's in Column A and 1 in Column C will total 17 points. A5:A7, B5:B7, etc. will get you through 3 games. I don't know anything about World Cup brackets, so feel free to adjust this as required. Replicate this method in E:H for Person 2, I:L for Person 3, etc. BTW...this made sense when I read it, but that's because I wrote it. ;-) Let me know if I've totally confused you.
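The counting-and-weighting logic described above (COUNT the X's in each criteria column, multiply by that column's value, then SUM the results) can be sketched outside Excel too. This is a rough illustration only: the per-column multipliers below are assumptions, since the post never states them; they are chosen so that 2 X's in Column A plus 1 in Column C totals 17, matching the example given.

```python
# Toy sketch of the scoring scheme: count X's per column, weight, sum.
# The multipliers are hypothetical (not given in the post).
MULTIPLIERS = {"A": 7, "B": 5, "C": 3, "D": 1}

def score(predictions):
    """predictions maps a column letter to its list of cells ('X' or '')."""
    total = 0
    for col, cells in predictions.items():
        x_count = sum(1 for cell in cells if cell == "X")  # COUNT the X's
        total += x_count * MULTIPLIERS[col]                # times the multiplier
    return total

print(score({"A": ["X", "X", ""], "B": ["", "", ""],
             "C": ["X", "", ""], "D": ["", "", ""]}))  # 17
```

In the spreadsheet itself the same thing would typically be done with COUNTIF over each column range and a weighted sum in the merged header cell.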
s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280763.38/warc/CC-MAIN-20170116095120-00064-ip-10-171-10-70.ec2.internal.warc.gz
CC-MAIN-2017-04
1,087
12
https://blog.crashspace.org/2017/01/one-thing-to-do-today-learn-about-the-onion-router-tor/
code
I’ve put off talking about Tor because, well, discussing Tor takes nuance. Whether or not you decide to bring Tor into your life on the regular, learning about how it works and how clever folks get around it will sharpen your security mindset. I think even if you think, “I don’t need Tor,” there are vulnerable people in the world who could use the cover of your banal data going over the same network. Using Tor doesn’t make you a criminal, and there are great reasons to do so. Since Tor constantly gets pummeled by folks looking for exploits and is therefore also constantly updated, I thought it important to highlight the date of the information being provided. The links get more in depth down each list, so the top ones may be the only ones you need. Getting your head around Tor starts with understanding Proxies. When I think of proxies I think of those glove-box isolation chambers. A proxy lets you handle another website without getting your IP address dirty. That box can also sometimes hold a local copy of a website or file if the person running the proxy predicts a lot of people will want to handle it from one location. While going through a proxy(s) can slow web traffic down by adding hops, local caches speed things up. If you’re using StartPage as your search engine, next to each link is the option of going to the page via a “Proxy.” Top Google search results tend to be served by proxy by default, so you may be being served from one now without even knowing it. Proxies DO NOT provide encryption. They’re merely call forwarding. - Video: Proxy, VPN, SmartDNS (BestVPN, 3:10) - Video: Proxy Server (2011, Greg French, 5:47. Slow talker, ramp up the speed) - Reading: VPN vs Proxy (2016, How to Geek) - Video: Using Web Proxy Servers (2012, Eli the Computer Guy, 40:19) - Reading: Forward Proxy vs Reverse Proxy (Best answer 2016, Stackoverflow) - Video: Understand Proxy/Firewall/NAT/PAT Traffic Flows (2014, Laura Chappell. 13:33. 
Heads up, assumes you know what things like demultiplexing mean.) Tor’s Special Sauce The Tor network bounces your requests through a series of proxies via a special protocol called Onion Routing. Each computer only knows about the one before and the one after. It only takes three hops for the originator to become obscured. Onion routing is not just sequential call forwarding. Each new node peels off a layer of encryption, only then discovering who it should send the message on to. Only the exit node will see the original data packet. - Video: What is Tor and Should You Use It (2016, Mashable, 2:24) - Reading: Tor Is For Everyone: Why You Should Use Tor (2014, EFF) - Interactive InfoGraphic: Tor and HTTPS (EFF) - Video: How Tor Works – A ComputeCycle Deep Dive (2012, ComputerCycle 10:10. Actually talks about session keys) - Video: Onion Routing and TOR Overview | Mechanics (2013, SourceFire, 9:44 and 12:59) - Reading: Tor Overview Page (Tor Project) Tor isn’t magic All security products fail. Security is a process. Learning about the shortcomings of Tor without writing the whole attempt off completely seems like the most grown-up choice. It’s also kind of a fascinating lesson in secure system design. - Video: Security concerns with Tor (2013, Eli the Computer Guy, 29:12. Fair warning, he’s anti-Tor, btw. Says he wouldn’t use it, but gives no alternatives.) - Video: How Tor Users Got Caught (2015, Garrett Fogerlie, 34:46) - Reading: Is Tor still secure after Silk Road? (2015, Phys.org) - Reading: Has the Tor network really been compromised? (2015, Quora, 2nd answer by Shava Nerad also came highly recommended.) - Reading: Mozilla and Tor release urgent update for Firefox 0-day under active attack (2016, ArsTechnica, Windows users only) - Reading: Court Docs Show a University Helped FBI Bust Silk Road 2, Child Porn Suspects (2015, Motherboard) - Video: Tor: Hidden Services and Deanonymisation [31c3] (2015, CCCen) - After Dec. 
2016, things have gotten more complicated legally. - The FBI’s Quiet Plan to Begin Mass Hacking (2016, TorProject) - DOJ insist rule 41 change not important (2016, TechDirt) - Expanded Government Hacking Powers Need Accompanying Safeguards (2016, EFF) Ways to Support Tor The Tor project valiantly maintains one of the very best band-aids we’ve got for the fact that the internet was not designed to address privacy concerns at its core. Like with VPNs, if one understands what the tool is for, it’s invaluable to have available. Help the Tor project by going ahead and sending your innocuous data traffic over it, and by setting up a relay node to mitigate that demand. Exit nodes require a deeper level of commitment, but you can donate to support one. If Tor traffic becomes popular and commonplace, more ISPs and server companies will get comfortable with it and the onion routing protocol in general. - Tor Project Download Page - Reading: Tor Browser for Windows (2016, Security in a Box) - Reading: 5 Ways to Stay Safe From Bad Tor Exit Nodes (2015, Make Use Of) - Reading: 11 Do’s and Don’ts of Tor Network (Hongkiat, no date displayed) - Reading: Donate to Relay Provider via TorProject FAQ - Reading: How you can make Tor faster for $10 a month (2016, Motherboard) - Reading: EFF Tor Challenge (original start 2014, EFF) - Reading: Support the Tor Project 2016 (2016, TorProject) - Resources: Educational Outreach Materials (2015, TorProject) Making Tor Obsolete Folks involved in the Tor project work very hard to make folks safe on the internet as it exists now. But what if the internet was designed completely differently? Although flawed, some of the nascent “Tor alternatives” explore P2P architectures. Look into conversations around the Future Internet. Tools like OpenFlow provide the ability to rapidly prototype network architecture. Blockchains may not just be for Bitcoin anymore. Have a research group with its own ideas? Submit a proposal. 
If this topic tickles your nose try checking out MIT OpenCourseWare 6.033 Computer System Engineering. I hope this post pointed you in the direction of helpful resources to understand how Tor works, where it fits in the privacy tool box, and how to properly connect to the network. Tor’s had some struggles, but it’s in good hands.
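The layer-peeling idea the article describes, where each relay learns only the next hop and only the exit node sees the original packet, can be sketched in a few lines. This is a toy illustration, not cryptography: the layers are plain JSON nesting instead of encrypted envelopes, and the relay names are made up; in a real Tor circuit each layer is encrypted with a session key negotiated with one relay.

```python
import json

def build_onion(message, route):
    """Wrap `message` in one layer per relay; the innermost layer is built first."""
    onion = message
    # The exit node's layer carries no next hop; each outer layer names
    # the relay that should receive the remaining inner blob.
    for next_hop in [None] + list(reversed(route[1:])):
        onion = json.dumps({"next": next_hop, "payload": onion})
    return onion

def peel(onion):
    """One relay peels a single layer, learning only the next hop."""
    layer = json.loads(onion)
    return layer["next"], layer["payload"]

route = ["guard", "middle", "exit"]  # three hops, as the article notes
packet = build_onion("GET http://example.com/", route)
for node in route:
    next_hop, packet = peel(packet)  # each relay sees exactly one layer
print(packet)  # the exit node recovers the original request
```

Note how the guard relay's peel reveals only "middle", never "exit" or the request itself, which is the property that obscures the originator after three hops.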
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100531.77/warc/CC-MAIN-20231204151108-20231204181108-00764.warc.gz
CC-MAIN-2023-50
6,257
43
https://knowledge.crowdbotics.com/what-is-the-visual-editor
code
Find out more about the Crowdbotics Visual or Layout Editor, and when and where to use it. The Crowdbotics Visual Editor is a layout editor that you can access directly from your App Dashboard. The Visual Editor enables you to customize the appearance, location, and content of elements in your app’s user interface. Each individual screen in your app will have visual elements like buttons, text, or images. You can use the Visual Editor to edit aspects of these elements such as font, color, text, or shape. Simply drag-and-drop these elements into your app’s screens and use the contextual settings that appear to tweak their appearance. How to use the Visual Editor The layout editor allows you to customize the appearance, location, and content of elements in your app’s interface from the Storyboard page of the App Builder platform. After making these customizations, the App Builder will generate a preview of your app and custom code that reflects the screen adaptations you made in the layout editor. To effectively use the layout editor, here is the important terminology you should refer to: - Element Tree: A convenient way to view all the different nested elements you have inputted into your layout, and individually select or delete each inputted element. - Elements: Drag and drop elements of your choice from a variety of different options, including the following: flex, columns, rows, button, text, text input, number input, date input, text area, switch, image, slider, checkbox, radio button, and icon - Resources: Drag and drop images to use in your layout under this tab. To edit a specific element in your layout, press “Edit Selected” after selecting the element. The “Edit Selected” tab has the following options: - Vertical Alignment: Adjust the placement of text within your element with this option. Text-top and text-bottom move the text to the top and bottom, respectively, of the button/text box. 
- Overflow: Customize whether surplus text appears on the screen, and if it does appear, how this text will be displayed. - Width and Height: Enter in numerical dimensions if pt. is selected, or a percentage if % is selected of the associated element or flex. - Padding: Values in padding boxes are used to allocate space to the left, right, bottom, or top of text within an element itself. - Margins: In contrast to the padding values, margin values can be used to allocate space to the left, right, bottom, or top of the element as a whole (for example, you can create space between elements and flexes with the margin tool). - Font Style: Select normal or italic font from the dropdown menu. - Font Weight: Adjust the density of your text with ranging weight values, where higher weight values indicate stronger bolding - Border Weight: Inputting values for border weight allows you to create a line (solid, dashed, or dotted) on select sides of the element, or all four sides
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474544.15/warc/CC-MAIN-20240224180245-20240224210245-00003.warc.gz
CC-MAIN-2024-10
2,922
17
https://www.fi.freelancer.com/projects/Mobile-Phone/Notification-issue-solve-moodle/
code
I have developed a PhoneGap Moodle application and implemented AirNotifier notifications, but I am facing an issue. Please contact me if you can do it. 11 freelancers have bid an average of ₹8,610 for this job. I am interested in this project because we have completed many open-source projects with Moodle, WordPress, Angular, and Node. Thank you!!!
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891817437.99/warc/CC-MAIN-20180225205820-20180225225820-00007.warc.gz
CC-MAIN-2018-09
361
3
https://www.offerzen.com/blog/python-developer-salary-south-africa
code
Python remains South African developers’ most-wanted programming language and the country’s fourth most widely used language. In this article, we’ll explore what they can expect to earn throughout their careers and how their salaries compare to those of Go, Ruby and PHP developers. Average entry-level and junior Python developer salary trends Salaries for entry-level Python developers have grown by 20.1% (R3 928) since 2022. They now begin their careers on an average monthly income of R23 450. That’s 31.3% (R5 585) more than rookie PHP developers, but 4.4% (R1 073) less than Go, and 17.5% (R4 971) less than Ruby. With at least two years of experience, Python developers should see the largest salary increase of their careers, jumping 49.3% (R11 556) to R35 006. That’s 11.8% (R3 688) up on 2022, while being 29.1% (R7 893) ahead of PHP, a notable 26.6% (R12 660) below Go, and 23.3% (R10 619) behind Ruby. With four-to-six years on the job, Python developers are in for another huge pay jump of 45.4% (R15 894), bringing their income to R50 900 – 9.4% (R4 392) more than the 2022 equivalent. This increase is larger than for other developers at this level, with the result that PHP devs earn 48.1% (R16 532) less, Go devs 23.1% (R15 329) more, and Ruby just 2.5% (R1 279) more. Average Python Developer Salaries by Experience Average Salary by Years Experience, showing 25th and 75th percentiles |Years of Experience||25th Percentile||Average||75th Percentile| Average senior Python developer salary trends There’s more good news for Python developers around the six-year mark, with a 35.6% (R18 095) increase putting them on a monthly income of R68 995, where last year they would’ve earned 6.3% (R4 104) less. PHP developers at this level earn 28.9% (R15 450) less, Go 17.2% (R14 310) more, and Ruby 10.5% (R8 060) more. 
Salaries for highly experienced Python developers with at least a decade under their belts have risen just 3.7% (R3 332) since 2022, but a 34.4% increase from their previous level means they should still earn a very respectable R92 759. That’s 17.2% (R13 593) more than PHP, 8.5% (R8 635) less than Go, and 4.3% (R4 176) less than Ruby. Senior Python developers should have excellent coding and design skills, while also taking on responsibilities like creating technical software reports, supervising testing, training other team members and guiding the overall improvement of the software. Unsurprisingly, candidates with these skills are desirable, which explains why salaries continue to rise so generously for more experienced Python developers. Average Python Developer Salaries in 2023 vs 2022 |Years of Experience||2023||2022| Keep in mind The data in this article is taken from OfferZen’s 2023 State of the Software Developer Nation Report. In this article, ‘salary’ refers to the gross monthly salary (before tax) provided by more than 4500 survey respondents. Average salaries are single data points and only one part of a bigger story. It’s expected that many respondents may earn significantly more or less than these averages. However, we hope to provide a picture of underlying trends by mapping the average salaries for different experience levels. These averages should not be used to estimate what your actual salary will or should be. Salaries depend on the industry, individual, perks and nature of work. These factors all influence the salary a company will offer to a prospective hire. In addition, most developers are “fluent” in several languages, which will affect the final figures. 
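The growth figures quoted above pair a percentage with a rand amount; a two-line helper shows how the two relate (the numbers below are the entry-level figures from the article, and the 2022 baseline is simply the current salary minus the stated growth):

```python
def pct_change(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Entry-level figures quoted above: R23 450 today, after growth of R3 928 since 2022.
new_salary = 23450
growth = 3928
old_salary = new_salary - growth  # implied 2022 baseline: R19 522

print(round(pct_change(old_salary, new_salary), 1))  # 20.1, matching the quoted 20.1%
```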
- State of South Africa’s Software Developer Nation - How to Negotiate a Job Offer That’s More Than Just the Money - How to Negotiate when Hiring Developers - Developer Salaries 2022: Cape Town, Johannesburg and Pretoria - Backend Developer Salary Trends in South Africa - Java Developer Salary Trends in South Africa - Front End Developer Salary Trends in South Africa - Full Stack Developer Salary Trends in South Africa - Node.js Developer Salary Trends in South Africa - Azure Developer Salary Trends in South Africa - TypeScript Developer Salary Trends in South Africa - Go Developer Salary Trends in South Africa - Ruby Developer Salary Trends in South Africa - Kotlin Developer Salary Trends in South Africa
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506559.11/warc/CC-MAIN-20230924023050-20230924053050-00652.warc.gz
CC-MAIN-2023-40
4,289
33
https://erikwramner.wordpress.com/2012/02/01/proxy-the-proxy/
code
Proxy the proxy Many corporations prevent direct Internet connections from the internal network, enforcing the use of a proxy server. That does not have to be an issue, but in Windows shops the proxy often requires Windows (NTLM) authentication. Few Java applications support that, there are even many native Windows applications that fail. Want to upgrade Eclipse or jDeveloper or install a cool plugin? Not very convenient, everything must be downloaded and installed manually. Want to use Maven? Again, all dependencies must be downloaded and installed manually. Enter Cntlm, a small and efficient proxy server that supports NTLM. Install it locally and use it as a go-between. Applications such as Eclipse can connect to Cntlm without authentication and Cntlm talks to the official proxy. There is only one snag. Cntlm needs a user id and password, or at least a password hash. Be sure to stop or update Cntlm before changing your password, or you may be locked out!
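Once Cntlm is running locally, applications simply point at it instead of the corporate proxy. A minimal sketch in Python (the address and port are assumptions, not taken from the post; 3128 is Cntlm's usual default listening port, but check the Listen setting in your cntlm.ini):

```python
import urllib.request

# Cntlm's local listening address; 3128 is its usual default port,
# but verify against the Listen setting in your cntlm.ini.
LOCAL_PROXY = "http://127.0.0.1:3128"

# Route HTTP(S) requests through the local Cntlm instance, which in turn
# performs NTLM authentication against the corporate proxy.
handler = urllib.request.ProxyHandler({"http": LOCAL_PROXY, "https": LOCAL_PROXY})
opener = urllib.request.build_opener(handler)
# opener.open("http://example.com")  # traffic would now flow via Cntlm

print(handler.proxies)
```

Tools like Eclipse or Maven are configured the same way: host 127.0.0.1, port 3128, no credentials.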
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121453.27/warc/CC-MAIN-20170423031201-00140-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
970
5
https://scicommtoolbox.se/robot-programming/
code
In this workshop the visitors can try simple robot programming. Usually a scientist or other person with knowledge of programming is present at the workshop. A simple version of robot programming uses LEGO robots. Target group: Suitable for a younger target group. Preparations: Find leaders for the workshop, decide on a venue and time. Organise the necessary materials. Market the activity. Challenges: Finding scientists and robots. Benefits: Usually attracts young visitors.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104375714.75/warc/CC-MAIN-20220704111005-20220704141005-00646.warc.gz
CC-MAIN-2022-27
478
5
https://play.google.com/store/apps/details?id=com.laurencedawson.reddit_sync.dev
code
Gifs won't load It always gives me error: couldn't load video, every single time, please tell me if there's a solution/fix, I've paid for both Pro (which I've been using for a long time just fine) until one day it stopped loading for me. So I bought the Dev version in hopes of getting it to run correctly. My favorite Reddit app Great app with tons of features. Dev updates frequently and listens to the users. First "pro" Reddit app I've ever bought. Switched from RIF The interface is absolutely seamless. The best. Intuitive and smart, and frequent updates. Great product. Yup buy it - The bottom sheet in a gallery will now stay open if you scroll through the images - The bottom sheet in a gallery is now slightly transparent - Added an option to disable saving images to subreddit folders - Fixed a bug that could cause the comment toolbar to get stuck after a snackbar is removed - Fixed a bug where certain comment links weren't opening
s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00190-ip-10-185-27-174.ec2.internal.warc.gz
CC-MAIN-2016-30
945
10
http://www.ecosuite.com/product/bpm/
code
EcoSuite's Business Process Management (BPM) module helps automate business processes such as processing a sales order, handling an employee leave request or tracking tasks. BPM used to be known as “Workflow” in some earlier industry terms such as workgroup applications. Workflow was the name given to the automatic routing of documents to the users responsible for working on them. But that was not enough: businesses need to be able to handle all sorts of business processes. A business process is the flow or sequence of activities, which may involve a person, a system or another piece of software, directed toward a business goal according to rules or logic. Currently available systems do not have integrated BPM; they just provide workflow based on a sequence of assignments, and customers should watch out for that. Ideally, a BPM system is a programmable system that does all of this, given the business logic and a definition of the whole process. Many of the BPM systems available today are complicated to program and manage. EcoWorks has simplified the process by using predefined configurable steps to define a whole process. It is like defining a process by combining building blocks, as in a Lego system. Once a process is defined or modeled, application logic can be configured to deliver complete automation. Many basic business processes can be standardized using parameters that meet the requirements of most organizations. This way a standard process can be used, with no programming needed on the customer side. Examples of processes are:
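The "building block" idea described above can be sketched in a few lines: predefined, configurable steps are chained into a named process. The step names and the leave-request example are purely illustrative, not EcoSuite's actual API:

```python
# A minimal sketch of composing a process from reusable, configurable steps.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Process:
    name: str
    steps: list = field(default_factory=list)

    def add_step(self, label: str, action: Callable[[dict], dict]) -> "Process":
        self.steps.append((label, action))
        return self  # allow chaining, like snapping blocks together

    def run(self, request: dict) -> dict:
        for label, action in self.steps:
            request = action(request)
            request.setdefault("history", []).append(label)
        return request

# Configure a standard leave-request process from reusable blocks.
leave = (Process("employee-leave")
         .add_step("submit", lambda r: {**r, "status": "submitted"})
         .add_step("manager-approval", lambda r: {**r, "status": "approved"})
         .add_step("notify-hr", lambda r: {**r, "status": "closed"}))

result = leave.run({"employee": "jdoe", "days": 3})
print(result["status"])   # closed
print(result["history"])  # ['submit', 'manager-approval', 'notify-hr']
```

The point of the pattern is that defining a new process means configuring blocks, not writing new code.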
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475701.61/warc/CC-MAIN-20240301193300-20240301223300-00893.warc.gz
CC-MAIN-2024-10
1,562
4
https://forum.emclient.com/t/migration-of-oulook-com-to-outlook365-servers-what-are-server-settings-for-em-client/56954
code
I’m just an eM Client 6 Pro user, and I did some searches about your question but was unable to find anything specific. I doubt seriously that Support here will be able to answer your question (not a knock on them at all, btw). It appears more complicated than necessary, something Microsoft with their lack of wisdom is extremely good at doing - making things even more complicated when simplicity is what end users need. Still, I offer this link as a possible place to start your quest to find an answer: For migrating the account, simply recreating it as an Exchange account in the automatic setup should be enough. It does usually take 5-10 minutes to obtain the server configuration while doing it, so take that into consideration and don’t cancel the process. The address for the Exchange server should read https://outlook.office365.com/EWS/Exc… after the automatic setup. Hope that helps.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335362.18/warc/CC-MAIN-20220929163117-20220929193117-00393.warc.gz
CC-MAIN-2022-40
902
5
http://dotnet.sys-con.com/node/2146481
code
By David Dodd | May 22, 2012 10:08 AM EDT To capture, parse, and analyze traffic, tcpdump is a very powerful tool. A basic capture uses the following syntax. tcpdump -n -i <interface> -s <snaplen> -n tells tcpdump not to resolve IP addresses to domain names or port numbers to service names. -i <interface> tells tcpdump which interface to use. -s <snaplen> tells tcpdump how much of each packet to record. I used 1515, but 1514 is sufficient for most cases. If you don't specify a size, it will only capture the first 68 bytes of each packet. A snaplen value of 0, which uses whatever length is needed to catch whole packets, can be used on all but older versions of tcpdump. Below is an example line from a dump; although it is only a single line, it holds a lot of information. 12:24:51.517451 IP 10.10.253.34.2400 > 220.127.116.11.53: 54517 A? www.bluecoat.com. (34)
12:24:51.517451 the timestamp
10.10.253.34.2400 source address and port
> traffic direction
18.104.22.168.53 destination address and port
54517 ID number that is shared by both the DNS server 22.214.171.124 and 10.10.253.34
A? 10.10.253.34 asks a question regarding the A record for www.bluecoat.com
(34) the entire packet is 34 bytes long
More tcpdump capture options Here are some examples of options to use when capturing data and why to use them:
- -i specify an interface; this ensures that you are sniffing where you expect to sniff.
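The field-by-field breakdown above can be mirrored in code. This is an illustrative parser for one DNS-query line of tcpdump's default output, not a general-purpose one, and the addresses in the sample line are made up:

```python
import re

# A line in the format discussed above (addresses are illustrative).
line = "12:24:51.517451 IP 10.10.253.34.2400 > 10.10.1.1.53: 54517 A? www.bluecoat.com. (34)"

# timestamp, src addr.port, dst addr.port, query id, record type, name, length
pattern = re.compile(
    r"(?P<ts>\S+) IP (?P<src>\S+) > (?P<dst>\S+): "
    r"(?P<qid>\d+) (?P<qtype>\S+)\? (?P<name>\S+) \((?P<length>\d+)\)"
)
fields = pattern.match(line).groupdict()

# tcpdump joins address and port with a dot, so split on the last one.
src_ip, src_port = fields["src"].rsplit(".", 1)
print(src_ip, src_port)               # 10.10.253.34 2400
print(fields["qid"], fields["name"])  # 54517 www.bluecoat.com.
```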
- -n tells tcpdump not to resolve IP addresses to domain names or port numbers to service names
- -nn don't resolve hostnames or port names
- -X show the packet's contents in both hex and ASCII
- -XX include the Ethernet header
- -v increase verbosity; -vv and -vvv give progressively more detail
- -c capture only x packets and stop
- -s tell tcpdump how much of each packet to record
- -S print absolute sequence numbers
- -e get the Ethernet header
- -q show less protocol info
- -E decrypt IPSEC traffic by providing an encryption key
Packet, Segment, and Datagram TCP accepts data from a data stream, segments it into chunks, and adds a TCP header, creating a TCP segment. UDP sends messages, referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communication to set up special transmission channels or data paths. Internet Protocol then creates its own datagram out of what it receives from TCP or UDP. If the TCP segment or UDP datagram plus IP's headers is small enough to send in a single package on the wire, IP creates a packet. If it is too large and exceeds the maximum transmission unit (MTU) of the media, IP will fragment the datagram into smaller packets suitable for the MTU. The fragmented packets are then reassembled by the destination. Tcpdump can read and write to/from a file: use the -w option to write data to a file and the -r option to read from one. $ sudo tcpdump -i wlan0 -w dumpfile001 $ sudo tcpdump -r dumpfile.pcap Some people like to see the packets as they are captured and also have them saved to a file. Use the following: tcpdump -n -i eth1 -s 1515 -l | tee output.txt The -l option tells tcpdump to make its output line-buffered, while piping the output to the tee utility sends it to the screen and to output.txt simultaneously. This command displays packets on the screen while writing data to output.txt, but note that output.txt will not be in binary libpcap format. If you need both a live view and a libpcap capture, the best way is to run a second instance of tcpdump.
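The fragmentation rule described above (datagram size versus MTU) can be sketched numerically. The 20-byte IP header and Ethernet's 1500-byte MTU are standard assumptions, and real stacks align fragment payloads to 8-byte boundaries because fragment offsets are expressed in 8-byte units:

```python
import math

def fragment_count(datagram_payload: int, mtu: int = 1500, ip_header: int = 20) -> int:
    """How many IP fragments a datagram's payload needs on a link with the given MTU."""
    max_payload = mtu - ip_header      # room left for data in each packet
    max_payload -= max_payload % 8     # fragment offsets count in 8-byte units
    if datagram_payload <= mtu - ip_header:
        return 1                       # fits in a single packet, no fragmentation
    return math.ceil(datagram_payload / max_payload)

print(fragment_count(1400))  # 1 -- fits within a 1500-byte MTU
print(fragment_count(4000))  # 3 -- split into 1480 + 1480 + 1040 bytes of data
```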
When tcpdump captures packets in libpcap format, it adds a timestamp entry to the record of each packet in the capture file. We can augment that data with the -tttt flag, which adds a date to the timestamp (see Figure #1). You can use the -tt flag to report the number of seconds and microseconds since the UNIX epoch of 00:00:00 UTC on January 1, 1970. If you are not sure you understand the time difference and need to be absolutely sure of the time, use the -tt option to show seconds and microseconds since the UNIX epoch (see Figure #2). Being able to cut the amount of traffic down to just what you are looking for is useful. Here are some expressions that can be helpful in tcpdump. net - capture the traffic on a block of IPs, e.g. 192.168.0.0/24 # tcpdump net 192.168.1.1/24 src, dst - capture only packets from a given source or to a given destination. # tcpdump src 192.168.100.234 # tcpdump dst 10.10.24.56 host - capture only traffic to or from an IP address # tcpdump host 10.10.253.34 proto - capture works for tcp, udp, and icmp # tcpdump tcp port - capture packets coming from or going to a port. # tcpdump port 21 portrange - capture a range of ports # tcpdump portrange 20-25 Expressions can be combined with AND [&&], OR [||] and NOT [!] # tcpdump -n -i eth1 host 10.10.253.34 and host 10.10.33.10 # tcpdump -n -i eth1 src net 10.10.253.0/24 and dst net 10.10.33.0/24 or 126.96.36.199 # tcpdump -n -i eth1 src net 10.10.30.0/24 and not icmp Searching for info on packets with tcpdump If you want to search for information in a packet, you have to know where to look. Tcpdump starts counting bytes of header information at byte 0, and byte 13 of the TCP header contains the TCP flags, shown in Table #1. Looking at byte 13, if SYN and ACK are both set then the binary value is 00010010, which is the same as decimal 18. We can search for packets carrying this value in byte 13 as shown here.
# tcpdump -n -r dumpfile.lpc -c 10 'tcp[13] == 18 and host 172.16.183.2' Here is a sample of what this command will return, shown in Figure #3. When capturing data with tcpdump, one way to ignore the arp traffic is to put in a filter like so: # tcpdump -n -s 1515 -c 5 -i eth1 tcp or udp or icmp This will catch only tcp, udp, or icmp. If you want to find all the TCP packets with the SYN/ACK flags set, or other flags set, take a look at Table #2 and the tcpdump filter syntax shown below.
|Flag||Binary||Decimal|
|URG||00100000||32|
|ACK||00010000||16|
|PSH||00001000||8|
|RST||00000100||4|
|SYN||00000010||2|
|FIN||00000001||1|
|SYN/ACK||00010010||18|
Tcpdump filter syntax Show all URGENT (URG) packets # tcpdump 'tcp[13] == 32' Show all ACKNOWLEDGE (ACK) packets # tcpdump 'tcp[13] == 16' Show all PUSH (PSH) packets # tcpdump 'tcp[13] == 8' Show all RESET (RST) packets # tcpdump 'tcp[13] == 4' Show all SYNCHRONIZE (SYN) packets # tcpdump 'tcp[13] == 2' Show all FINISH (FIN) packets # tcpdump 'tcp[13] == 1' Show all SYNCHRONIZE/ACKNOWLEDGE (SYN/ACK) packets # tcpdump 'tcp[13] == 18' Using tcpdump in Incident Response When doing analysis on network traffic, a tool like tcpdump is critical. Below are some examples of using tcpdump to examine a couple of different dump files to learn more about network problems or possible attack scenarios. The first is a binary dump file of a snort log, and we are given the following information: the IP address of the Linux system is 192.168.100.45, and an attacker got in using a WU-FTPD vulnerability and deployed a backdoor. What can we find out about how the attack happened and what he did? First we will take a look at the file: # tcpdump -xX -r snort001.log The log is long; at this point you may want to run the file through snort: # snort -r snort001.log -A full -c /etc/snort/snort.conf This will give you some info like total packets processed, protocol breakdown, any alerts, etc.
See Figures #4 & #5 Next, extract the full snort log file for analysis: # tcpdump -nxX -s 1515 -r snort001.log > tcpdump-full.dat This gives us a readable file to parse through. After looking through it we find ip-proto-11, which is Network Voice Protocol (NVP) traffic. Now we will search through the file looking for protocol 11: # tcpdump -r snort001.log -w NVP-traffic.log proto 11 This command reads snort001.log, filters for IP protocol 11, and writes the matching packets to NVP-traffic.log. Next we need to be able to view that file, because it is binary: # tcpdump -nxX -s 1515 -r NVP-traffic.log > nvp-traffic_log.dat This produces a file of both hex and ASCII, which is nice, but if we just want the IP addresses, try this: # tcpdump -r NVP-traffic.log > nvp-traffic_log01.dat This gives us a list of IP addresses that were communicating using the Network Voice Protocol (see Figure #6). Next we look at another snort dump file, this one from a compromised Windows box that was communicating with an IRC server. Which IRC servers did the server at 172.16.134.191 communicate with? We look for TCP connections originating from the server toward the outside, using tcpdump with a filtering expression to capture SYN/ACK packets incoming from outside servers: # tcpdump -n -nn -r snort_log 'tcp and dst host 172.16.134.191 and tcp[13] == 18' This produces a long list of connections going from 172.16.134.191 to outside hosts (see Figure #7). We know that IRC communicates on ports 6666 to 6669, so let's add that and narrow down the search: # tcpdump -n -nn -r snort_log 'tcp and dst host 172.16.134.191 and tcp[13] == 18 and portrange 6666-6669' (see the output in Figure #8 below) Now we have narrowed the list down to three IPs that were communicating with the server over IRC. Tcpdump is a wonderful, general-purpose packet sniffer and incident response tool that should be in your tool shed.
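The byte-13 arithmetic used throughout the flag filters above is just bitwise OR of the individual flag values; note that BPF indexes the flags byte explicitly, so the working form of these filters is tcp[13] == value. A short sketch:

```python
# TCP flag bit values found in byte 13 of the TCP header (per Table #2 above).
FLAGS = {"FIN": 1, "SYN": 2, "RST": 4, "PSH": 8, "ACK": 16, "URG": 32}

def byte13(*names: str) -> int:
    """Decimal value of the flags byte when exactly these flags are set."""
    value = 0
    for name in names:
        value |= FLAGS[name]  # each flag occupies its own bit, so OR them together
    return value

# SYN/ACK: 00000010 | 00010000 = 00010010 = 18
print(byte13("SYN", "ACK"))                  # 18
print(f"tcp[13] == {byte13('SYN', 'ACK')}")  # the BPF filter matching SYN/ACK packets
```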
s3://commoncrawl/crawl-data/CC-MAIN-2015-22/segments/1432207927844.14/warc/CC-MAIN-20150521113207-00028-ip-10-180-206-219.ec2.internal.warc.gz
CC-MAIN-2015-22
21,373
155
http://wolf3d.darkbb.com/t3252-some-questions
code
A friendly Wolfenstein 3D community, about Wolfenstein 3D, the game that gave birth to first person shooters... Second Question: Can you increase the max number of levels/maps per episode? Third Question: Can you increase the number of episodes? @Kargan3033 wrote: "First Question: What is SDL?" A simple Google search could help.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917120092.26/warc/CC-MAIN-20170423031200-00085-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
314
4
https://geo-alps.com/methods/passive-seismic/esac-method/
code
It is a methodology which exploits spatial autocorrelation through the synchronization of vertical sensors. A seismic antenna is made up of many seismic sensors placed on the ground according to non-linear geometries. The survey equipment used by Geo Alps S.r.l. for performing refraction seismic surveys consists of: 4 Seismic Source seismographs, Model DAQLink III, 24 channels, 24-bit scan resolution, with the following specifications: A/D conversion: high-speed 24-bit sigma-delta converter; Background noise: 0.2 microvolt RMS (at 2 msec); Trigger accuracy: +/- 1 microsecond at every sampling frequency; Sampling rate: from 0.0208 to 16.0 milliseconds; Sampling frequency: from 62.5 to 48,000 samples/second; Determination of soil parameters such as the shear modulus and bulk modulus
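The soil parameters mentioned last can be derived from measured wave velocities and density via the standard elastic relations μ = ρVs² and K = ρ(Vp² − 4/3·Vs²). A sketch with illustrative (not survey) values:

```python
def shear_modulus(density: float, vs: float) -> float:
    """Shear modulus mu = rho * Vs^2, in Pa (density in kg/m^3, Vs in m/s)."""
    return density * vs ** 2

def bulk_modulus(density: float, vp: float, vs: float) -> float:
    """Bulk modulus K = rho * (Vp^2 - 4/3 * Vs^2), in Pa."""
    return density * (vp ** 2 - 4.0 / 3.0 * vs ** 2)

rho, vp, vs = 1800.0, 600.0, 300.0        # illustrative soil values
print(shear_modulus(rho, vs) / 1e6)           # 162.0 (MPa)
print(round(bulk_modulus(rho, vp, vs) / 1e6, 1))  # 432.0 (MPa)
```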
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655027.51/warc/CC-MAIN-20230608135911-20230608165911-00410.warc.gz
CC-MAIN-2023-23
786
8
https://utappia.org/category/howto/
code
In this tutorial, let's assume that we have accidentally deleted our files from an SD card or a USB thumb drive. We will then try to recover them using the photorec app. In this short video tutorial, we will see how to use the new Snap package manager to search for and install/remove Snap packages, along with some useful commands. The first part was an introduction to Virt Manager and KVM. In this video tutorial, I will show you how to migrate your VirtualBox machines to Virt Manager. In this video tutorial we will use a tool called virt-manager that provides a graphical interface for interacting with Kernel-based Virtual Machine (KVM). In this video tutorial, I demonstrate how to do full disk encryption using the Ubuntu Mini ISO (Netinstall). Using multiple cores and processors simultaneously to achieve faster compression and decompression rates is possible nowadays with the new generation of multi-core CPUs. Using the following methods to create compressed backups of your files will be less time-consuming. Abstract: SHC is free software (GPL v2) that takes a shell script and produces C source code. The generated source code is then compiled and linked to produce a stripped binary executable. Introduction: There are some uncomfortable times when you will be asked not to distribute the source code of a shell script (eg.…
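The multi-core compression idea mentioned above can be sketched in a few lines. This compresses several files concurrently; the demo files are created on the fly, and CPython's zlib releases the GIL while compressing, so even threads can keep several cores busy:

```python
import gzip
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def compress(path: str) -> str:
    """gzip one file and return the archive's path."""
    out = path + ".gz"
    with open(path, "rb") as src, gzip.open(out, "wb") as dst:
        dst.write(src.read())
    return out

# Throwaway files standing in for a real backup set.
tmp = tempfile.mkdtemp()
files = []
for i in range(4):
    p = os.path.join(tmp, f"part{i}.dat")
    with open(p, "wb") as f:
        f.write(b"example data " * 10000)
    files.append(p)

# Compress the files concurrently, up to one worker per core.
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    archives = list(pool.map(compress, files))
print([os.path.basename(a) for a in archives])
```

On the command line the same effect comes from parallel gzip implementations such as pigz.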
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917125881.93/warc/CC-MAIN-20170423031205-00047-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
1,352
7
http://onlinelibrary.wiley.com/doi/10.1111/j.1469-1795.2012.00533.x/full?globalMessage=0
code
Survival probability is a key parameter whose variation may have a substantial influence on the population asymptotic and realized growth rate (Caswell, 2001; Nichols & Hines, 2002). Estimation of survival in wild vertebrate populations has long been a challenge and has stimulated collaborations between biologists and statisticians (Williams, Nichols & Conroy, 2002), mostly because of difficulties in correcting observed proportions of survivors when not all the individuals alive and present in the study area are detected by investigators (i.e. detection probability is <1; Williams et al., 2002). Halstead et al. (2011) have estimated daily mortality risk in the giant gartersnake (Thamnophis gigas) and addressed spatial variation in survival. In Halstead et al., the difficulty was not detection probability: individuals were equipped with radio transmitters and detection probability approaches 1 in many telemetry studies (Williams et al., 2002). Halstead et al. used a standard approach in human demography based on hazard models (Hosmer, Lemeshow & May, 2011), where the hazard function accounts for the instantaneous rate of occurrence of the death event. They used a Bayesian approach to estimate a mixed version of their model; that is, a model with fixed (e.g. habitat type) and random effects (year, site). Read the Feature Paper: Bayesian shared frailty models for regional inference about wildlife survival Other Commentaries on this paper: Combining information in hierarchical models improves inferences in population ecology and demographic population analyses; Bayesian shared frailty models for regional inference about wildlife survival The model is a ‘shared frailty’ model. Frailty models have been developed to account for heterogeneity in populations: the latter consist of a mixture of individuals with different hazards. Biologists have long identified factors that co-vary with survival in wild populations (age, year, habitat quality, etc.). 
However, human demographers also pointed out that hazard estimates may be biased if some relevant sources of heterogeneity are ignored (Vaupel, Manton & Stallard, 1979; Aalen, Borgan & Gjessing, 2008): if investigators are unaware of the relevance of some sources of variation in mortality risk, if defining the variables to measure is conceptually challenging (e.g. ‘individual quality’; Wilson & Nussey, 2010) or if measuring them is technically difficult. Frailty models are individual random effects models that assume a distribution of individual hazards; this distribution accounts for the heterogeneity among individuals that remains once measured covariates have been taken into account, and its characteristics have to be assessed. Indeed, an individual hazard cannot be estimated using data from an individual because the death event is unique in the individual life history, but we can assess the distribution of individual hazards in the population. Using frailty models requires conceptual decisions whose relevance for a particular dataset cannot always be assessed because of the current limitations in statistical theory (e.g. which distribution to choose for individual hazards; Yashin et al., 2001). Moreover, how to assess the relevance of models with different parameterizations for random effects is still currently debated (Gelman, Meng & Stern, 1996; Gelfand & Ghosh, 1998; Spiegelhalter et al., 2002; Cai & Dunson, 2006; Plummer, 2008), and investigators have to choose from several methods when there is no dominating one. With some assumptions and constraints in model development, frailty models can be estimated (Yashin et al., 2001). Random effects models (including frailty models) have been used in a very large number of papers focusing on (longitudinal) data from humans (Banerjee, Carlin & Gelfand, 2003; Banerjee, Wall & Carlin, 2003; Gelman & Hill, 2007; Lawson, 2009). Halstead et al. 
(2011) explained why they treated the variable ‘Site’ as a random effect as follows: ‘we were not interested in site differences per se, but wanted a large-scale assessment and the average survival function of the giant gartersnake’. This is indeed a reason why investigators consider a variable as a random effect rather than a fixed one: the study sites are considered as a sample from a larger population of sites, and the goal is to draw inferences about the population-averaged response and the variance among sites. Treating ‘Site’ as a random effect had crucial consequences: the site-specific estimates of survival were more precise than if ‘Site’ had been treated as a fixed effect (Halstead et al., 2011). When ‘Site’ is treated as a fixed effect, sites are considered as independent and data from each site are used to estimate k site-specific mean hazards. Obviously, if the number of marked snakes per site is small, the estimated site-specific hazard rate is likely to be imprecise (large credible interval). Treating ‘Site’ as a random effect is using data from all the snakes from all the sites to draw inference about individual sites, which is sometimes described as ‘borrowing strength’ (Sauer & Link, 2002; Clark et al., 2005). Shared frailty models The distinguishing feature of the model used by Halstead et al. is that it is a shared frailty model: the individuals in each site share the same unobserved frailty. Shared frailty models are used when the number of subjects in each group (cluster) is small, or when there are good reasons to hypothesize that groups are homogeneous in terms of hazard (e.g. when data from several studies are used, a study can be treated as a cluster), or to take non-independence of observations from subjects in a cluster into account. Halstead et al. chose not to use a model with individual frailty because the age of the snakes was unknown, contrary to studies of birds marked as chicks for example (Marzolin, Charmantier & Gimenez, 2011). 
Frailty is assumed to reflect an individual deviation in the mortality risk from the baseline risk: it is important to use data from individuals that survived the same number of units of time before entering the study to assess this risk. The genuine distribution of individual frailties in the population cannot be accessed if the individuals with the largest mortality risks die before being captured, marked and released. Halstead et al. were concerned about such heterogeneity among snakes created by heterogeneity in age at capture. For this reason, they focused on the variation in hazard among sites and assumed homogeneity in frailty within sites. This variation would underestimate the dispersion of genuine hazards among sites if there was still heterogeneity in frailty within sites and if the age at marking and the proportion of individuals missed before being marked differed according to site. Addressing these hypotheses is virtually impossible when the number of individuals per site is small. Random effects models in demographic studies of wild vertebrates Individual random effects models have been used to estimate survival in animal demography studies (e.g. Cam et al., 2002; Clark et al., 2005; Royle, 2008; Gimenez & Choquet, 2010; Hawkes, 2010; Aubry et al., 2011). Modelling approaches that are generally developed in other areas of research are increasingly being used and modified to address hypotheses in wildlife ecology. There are several reasons for this. First, the criteria to assess the quality of wildlife ecology research are changing, thanks to cooperation with modellers. For example, the issue of non-independence of responses in adjacent spatial areas has long been considered in human health studies (Waller & Gotaway, 2004), and is now handled via spatially structured random effects in ecology (e.g. Ogle et al., 2006). 
Second, random effects can be clearly of interest in evolutionary demography: a random effect structured as a function of the degree of relatedness of individuals in a pedigree has been used to estimate the additive genetic variance in survival and heritability (Papaïx et al., 2010; Buoro, Gimenez & Prévot, 2012). Shared and correlated frailty models have also been used in human demography to address ‘resemblance’ in mortality risk in twins or families, and the possible genetic determinism of this risk (Yashin et al., 2001). Third, as emphasized by Halstead et al., data from several studies, populations or species can be combined in a joint analysis partitioning the variance in demographic parameters; such models can be used to address the co-variation in time series among species or populations (Lahoz-Montfort et al., 2010; Papadatou et al., 2011). Last, the development of free software designed to estimate mixed models has been a deciding factor (e.g. BUGS; Lunn et al., 2009; R Development Core Team, 2011). These pieces of software are very flexible and it is possible to specify user-defined structures for the variance-covariance matrix of random effects according to the levels of variation in the response relevant to the biological questions of interest. However, flexibility requires investigators to make many decisions: how to parameterize models, to estimate their coefficients, to assess model fit and compare models. This highlights the need for advanced courses in statistical modelling in university education in wildlife science.
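To make the shared-frailty idea concrete, here is a minimal simulation sketch (all parameter values and function names are illustrative assumptions, not taken from Halstead et al.): each site draws one gamma-distributed frailty with mean 1, and every individual in that site shares it, multiplying a common baseline hazard.

```python
import random

random.seed(42)  # reproducible illustration

def simulate_shared_frailty(n_sites=5, n_per_site=20,
                            base_hazard=0.1, frailty_shape=2.0):
    """Simulate survival times under a shared (site-level) gamma frailty.

    Each site j draws one frailty z_j ~ Gamma(shape, scale=1/shape), so
    E[z_j] = 1; every individual in site j then dies with hazard
    base_hazard * z_j (a constant, i.e. exponential, baseline for simplicity).
    """
    records = []
    for site in range(n_sites):
        z = random.gammavariate(frailty_shape, 1.0 / frailty_shape)
        for _ in range(n_per_site):
            t = random.expovariate(base_hazard * z)  # survival time
            records.append((site, z, t))
    return records

data = simulate_shared_frailty()

# Individuals within a site share one frailty, so their survival times are
# correlated: high-frailty sites tend to show short mean survival.
site_mean = {}
for site, z, t in data:
    site_mean.setdefault(site, []).append(t)
```

Fitting such a model to real data (rather than simulating from it) is what Halstead et al. did in a Bayesian framework; in practice one would use software such as BUGS or a survival-analysis library rather than hand-rolled code.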
s3://commoncrawl/crawl-data/CC-MAIN-2016-40/segments/1474738659680.65/warc/CC-MAIN-20160924173739-00298-ip-10-143-35-109.ec2.internal.warc.gz
CC-MAIN-2016-40
9,363
9
https://legalofficeguru.com/tag/indentation/
code
You don't need me to tell you what a paragraph is — it's a block of text that ends with a "hard return" you insert by pressing the Enter key. In Microsoft Word, paragraph formatting covers such attributes as justification, indentation, line spacing, and what WordPerfect calls "block protect" (called something else by Word, but we'll get to that later). Our first lesson in paragraph formatting focuses on justification and line spacing. Some of these instructions will be familiar to anyone who's worked with a Windows word processor before, but here's how you can set each of these attributes in Microsoft Word: Justification (some prefer the term "alignment") refers to how the paragraph is aligned horizontally. And it's super easy — here's how you do left-justify, right-justify, center, and full-justify in Microsoft Word (either with your mouse or your keyboard). Using the Ribbon Paragraph formatting is controlled by the Paragraph section on the Home tab of the Ribbon: You can control justification/alignment of a paragraph by clicking on the following buttons: Left-Justify - leaves a ragged right edge to the paragraph (like a typewriter would) Center - centers the text on each line Right-Justify - aligns the text even with the right-hand margin Full-Justify - gives paragraphs an even left and right margin by proportionally spacing the text Using the Paragraph dialog box More paragraph formatting commands (including those we'll be talking about below) are contained in the Paragraph dialog box. To open the dialog box, click the launcher (the small arrow in the lower right-hand corner of the Paragraph section of the Home tab of the Ribbon): That will bring up the Paragraph dialog box. The justification settings are near the top, in a drop-down box: Setting line spacing is easy, too, and you've got the same options here: Ribbon and keyboard. Setting line spacing on the Ribbon There are two places on Microsoft Word's Ribbon that you can adjust line spacing. 
You can either use the drop-down in the Paragraph section of the Home tab: ... or you can click the dialog launcher arrow in the Paragraph section of the Layout tab (called the Page Layout tab in Word 2010) to bring up the Paragraph dialog box: If you choose Multiple (see the area inside the red square above), you can use any positive number for the "at" value (the area inside the blue square above), and Word will adjust the spacing based on the number of lines ("3" would be triple-spacing, for example). If you need to be more precise, choose "Exactly" and use points, centimeters, or inches as your unit of measurement in the "at" box. Setting line spacing with the keyboard You speed typists out there can use the following shortcut keys for these standard line spacing options:

Press this ... | ... to do this:
Ctrl+1 | single spacing
Ctrl+5 | 1.5 line spacing
Ctrl+2 | double spacing

Space Before/After Paragraphs In addition to setting line spacing within a paragraph, you can add extra space between paragraphs. This option comes in especially handy in a couple of situations: This, by the way, is the feature that's often behind this complaint: "I chose single spacing, but I still see extra spaces between my paragraphs!" If you run into this predicament, you'll soon know how to check (and correct) this setting. Again, you can access these settings via either the Ribbon or the keyboard. On the Ribbon, you can use the spinner (a field with up and down arrows on the side that enable you to change the value in the field up or down) in the Paragraph section of the Layout tab (called the Page Layout tab in Word 2010 and Word 365). You can also adjust these values in the same Paragraph dialog box shown above. That check box for "don't add space between paragraphs of the same style" deserves a special mention. It's handy when you're working with headings (you want space before and after a two-line heading, but not space between the first and second lines separated by a hard return).
Where that check box becomes a bit of a pain in the neck is when it's checked for regular text paragraphs. Don't get frustrated if you reset before/after paragraph spacing and it doesn't seem to "take". Select all of the text that doesn't seem to be behaving properly, then head into the Paragraph dialog box (click the launcher arrow in the lower right-hand corner of the Paragraph section of either the Home or Layout/Page Layout tabs) to make sure this box isn't ticked. Here's what we've covered in this tutorial: This was kind of a long lesson! Thanks for hanging in there with me. The next tutorial in this series will cover ... Paragraph indentation and tab settings. See you then!
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251678287.60/warc/CC-MAIN-20200125161753-20200125190753-00529.warc.gz
CC-MAIN-2020-05
4,630
35
https://forum.zenon.org/t/telegram-main-moderation/1673
code
We are discussing ways to improve the Main Telegram channel. Big picture thoughts:
- Improve the experience of new community members in TG Main
- Direct traffic to the relevant channel ASAP
Improve TG Experience The general experience in TG Main is not inviting to new contributors. How does everyone feel about implementing more strict guidelines to prevent this sort of behavior? If community members want to have "fun" in TG we can do that in the TG Community Channels. I personally think this is needed. We continue to get feedback from new users that TG Main is toxic and does not welcome new participants. TG Main could be used to onboard new community members / contributors and then we can direct them to different channels for more specific interactions. If someone wants to participate in marketing, they can request to join the Slack channel. If someone wants to get involved with development, they can request to join Matrix. Etc. Good idea. We'll have to hire mods. Personally I think we can implement this with the community we have today. As we grow and get more new people I think we will need help. But until then, I think we can impose some "rules" on our community and greatly improve the user experience. I think we should make this a paid role to get consistent contribution. I'd think we need a few to cover the various timezones. If you vote to pay the mods, post your thoughts on the daily value paid to each mod. Preferably in USD value converted to ZNN. We don't want what we pay today to be equivalent to $500 per day down the line. If we can't find someone from the community, I can find someone within hours via a paid job post.
- Pay the mods
- Don't pay the mods
Great idea, thanks for bringing up the subject. I can't believe paying the mods is really an option. We have so many active community members in Telegram… an outsider won't bring any value compared to an OG community member. I agree with this.
As of now our community is still fairly small and has remained the same over the years, but if we want to make our main TG channel more welcoming to newcomers, we will need to moderate it more by keeping off-topic banter in the community channels. I'm an advocate for free speech, and the only thing that would worry me is that long-term community members get "muted" for behaving as they always do; setting up strict guidelines could alienate them. But if we want to be seen as more professional, having stricter guidelines is necessary in order to grow. For now it feels like keeping it NoM related should suffice in the Main TG channel, and if there's a need for more moderation I'm happy to contribute (with the community's blessing of course). Let's be honest. There are a few people who make the environment bad. We need to establish rules to prevent that behavior and enforce them. We have 3 admins in there today. We are all in different time zones. It does not take too much effort to hit "ban" once a day. For example, when someone answers a question wrong on purpose to confuse or mock a new community member, that should be an insta delete / ban. And, if we "hire" a mod, how will they keep track of the alts and alts of alts? We will need to establish rules for a 3rd party to enforce regardless. My recommendation is we spend time to establish simple rules and try to enforce them with the admins we have today. When those admins cannot handle the workload, we outsource. If a few community members are up for the responsibility then great, but we should be consistent with efforts. Most admins are already spread thin. I'm the type to put in place low-cost paid mods and not have to worry about the problem anymore. Plus I think anyone who is consistent deserves some reward for their efforts, whether now or a promise of payment in the future once ZNN is more valuable. Or we hire a dev to write 2 GPT bots for us, one for Telegram and another for Discord.
Feed it the moderation and enforcement rules. A one-time cost that'll never ask for a raise. Wholeheartedly support this. Humans are not the best mods. GitHub - SoulNaturalist/AutoModeratorTelegramChatGPT: Automatic moderation of Telegram chats using ChatGPT 🌟 - we should test it out. The AutoModeratorTelegramChatGPT is designed to automate the moderation of Telegram chats using OpenAI's GPT. The bot offers the following functions: - Spam filtering through OpenAI's ChatGPT model. - Customization of moderation rules according to user needs. - Translation of messages for moderation in non-English chats. - Deletion of messages identified as spam. - Optional banning of users who send spam messages. This bot is implemented in Python and uses the aiogram library to interact with the Telegram API, the googletrans library for translation, and the OpenAI API to generate responses from ChatGPT. It can be easily configured to suit different moderation needs and languages, making it a versatile tool for community management on Telegram. We can probably avoid a costly dependency on OpenAI if we train our own model. Hugging Face already has some models for spam detection. It should be pretty simple to build these bots for Telegram and Discord, if they're not already available. AI combined with a number of experienced community volunteers should keep the individual workload very light, and with continuous coverage. AI bots for moderation aren't widely available yet; at least I couldn't find anything great (other than the one posted above). If the OpenAI credits < paying a salary = win. I think we should roll with stricter main rules with the community mods we have now and just direct off-topic posts to the original community channel (unofficially associated with main), purely due to the number of users already in that chat.
Also we need to introduce a response bot ASAP, even if just for a few prompts (/ca for the Eth contract address, /wallet for links to Syrius, and maybe /az for the 3 latest AZ proposals)
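To illustrate the response-bot idea, here is a minimal sketch of the command-to-reply lookup such a bot would need. The command names come from the post above; the reply strings, placeholders, and function are my own illustration, and a real bot would wire this into the Telegram API (e.g. via aiogram) rather than call it directly.

```python
# Hypothetical canned replies; the real links and addresses would be
# filled in by the mods. The "<...>" placeholders are deliberate.
CANNED_REPLIES = {
    "/ca": "Eth contract address: <address goes here>",
    "/wallet": "Syrius wallet downloads: <links go here>",
    "/az": "Latest AZ proposals: <links go here>",
}

def respond(message_text: str):
    """Return a canned reply for a known command, or None to stay silent."""
    stripped = message_text.strip()
    command = stripped.split()[0].lower() if stripped else ""
    return CANNED_REPLIES.get(command)

print(respond("/wallet"))  # canned wallet reply
print(respond("hello"))    # None: bot ignores ordinary chat
```

Keeping the replies in a plain dictionary means mods can update links without touching the bot logic.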
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100942.92/warc/CC-MAIN-20231209170619-20231209200619-00743.warc.gz
CC-MAIN-2023-50
5,974
41
https://www.emberweekly.com/issues/243/
code
Readers' Questions - Are there any plans for Ember Data to embrace immutability? Dan Gebhardt elaborates on the future of immutable data structures in Ember Data.

Detect, Diagnose and Defeat Errors 🏆
Rollbar detects when code breaks in real-time. Get stack trace and diagnostic data to defeat app errors.

How to Create an Accessible Checkbox Component in Ember
A detailed tutorial on creating an accessible checkbox component in Ember.

Watch & Listen

Ember Data Github: Model updates, properly deprecates removed properties, allows consuming apps to use mirage factories/models/serializers and more. View on NPM

Removes the align property of the nav-toggler component for Bootstrap 4 and fixes a memory leak on the modal. View on NPM

Now supports photo posts and switches from ember-intl to ember-moment for date formatting. View on NPM

Ember CLI Typescript: Adds types to initializer and instance initializer blueprints, special-case handling for Mirage, and adds a missing import and an export statement so ambient declarations work for the default blueprint for types/<my app>/index.d.ts. View on NPM

The Official Semantic UI Addon for Ember applications (Maintainers Wanted). View on NPM

Ember Closure Actions Polyfill: Provides a polyfill for the Closure Actions feature added in Ember 1.13. View on NPM

Ember CLI Puppet: An addon that allows you to call a lower-level component's actions from a parent component. View on NPM

Ember Heap Analytics: An Ember addon that injects a service to track Heap analytics without the Heap dashboard. View on NPM

Ember Liquid Sauce: An Ember addon that creates several new Ember Liquid Fire transitions for card interfaces. View on NPM

Ember CLI Flash: Updates the test-helper blueprint for Ember versions >= 2.17 and now allows flash messages without text when preventDuplicates is false. View on NPM

Ember Simple Auth: Session restoration is now set up in an initializer (vs. an instance initializer) and the new acceptance test helpers introduced with 1.5.0 no longer need to manually set up the router. View on NPM

ember g meetup
- Monthly Ember.js Houston Meetup: Animating WebGL with ThreeJS & Ember; Through the Looking Glass: Accessibility in Ember.js; Learn how to FastBoot
- Ember.js Atlanta Meetup: Ember Project Night; Profiling an Ember App
- Ember ATX Meetup: Katie Gengler on Ember 3.0 and What You Need to Know
s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550249504746.91/warc/CC-MAIN-20190223142639-20190223164639-00344.warc.gz
CC-MAIN-2019-09
2,332
36
https://www.mirantis.com/solutions/industry-solutions/web-saas/
code
Open Cloud is not an option — it's a requirement
Every SaaS and Web 2.0 company lives or dies by the speed at which it introduces new software features. The deployment infrastructure platform has to solve two problems at once: provide a development platform that accelerates software development while keeping hosting both robust and efficient. Traditional ISVs moving to the cloud must do the same to make their SaaS offerings succeed. Developers at these companies often roll their own infrastructure, jumping from public cloud to on-prem proprietary infrastructure, or a combination, in pursuit of agility and convenience. Before you know it, short-term convenience breeds long-term complexity: new software becomes harder to roll out, operational problems become harder to fix quickly, APIs lock in developers, and competitive advantage falls behind. For best results, organizations need an open cloud, private and/or hybrid, that can be used for both developing and deploying applications. Open cloud means three things: open APIs, 100% open source free of vendor lock-in, and a vendor-agnostic solution for maximum flexibility and choice. Mirantis OpenStack is the perfect foundation for an open cloud. Better software faster Developers want to build software faster and avoid dealing with infrastructure, operating system, and any other provisioning or configuration dependencies. However, it is not uncommon to submit a ticket for infrastructure requests and have it fulfilled 2-6 months later, resulting in a massive loss of productivity. This style of infrastructure also does not play well with continuous integration, devops, PaaS platforms, and applications-on-demand. What developers need is the equivalent of AWS on-prem (or via a hybrid cloud) with a self-service programmatic mechanism to provision or tear down virtual infrastructure in seconds as opposed to months (or years in the case of tear-down).
Flexible yet rapid cloud deployment, hassle-free reliable operation at scale
IT/Ops wants to create an open cloud on their terms. The cloud might have a public and a private component. IT/Ops requires an open cloud built to their requirements, with the option of public cloud repatriation. They want to choose the hardware components (servers, storage, networking, etc.), software components (host OS, hypervisor, SDN framework, etc.), and platform-level software (CMP, PaaS, on-demand applications, container orchestration, etc.) that meet their unique needs. However, that flexibility should not result in deployment or operational complexity. Our pure-play approach allows us to be vendor agnostic with Mirantis OpenStack, giving you maximum flexibility in infrastructure and platform choices. Fuel, the OpenStack deployment and management software, dramatically reduces the time it takes IT/Ops to deploy and manage OpenStack. Finally, our hardened core packages ensure that IT/Ops can sleep easy at night! Create an Open Cloud Strategy Business leaders recognize the value of cloud technology. This is clearly evidenced by the fact that developers are increasingly using public clouds to rapidly provision infrastructure and get self-service through programmatic APIs. Exclusive use of public clouds, however, is not ideal. They do not provide the transparency (e.g. security logs), control (e.g. replication), choice of geography, or specific infrastructure choices that many organizations need. They are hard to integrate with your on-prem infrastructure. Moreover, public clouds also tend to get expensive at scale and, most alarmingly, lock developers into proprietary APIs. Mirantis OpenStack is the perfect foundation for an open cloud strategy by providing the private-cloud component. A Mirantis OpenStack private cloud will delight developers without stressing out IT/Ops.
The private cloud can be further extended to form a hybrid cloud by using a third-party cloud management platform (CMP) such as Scalr, RightScale, CliQr, or HashiCorp Atlas together with a public cloud. Or the cloud can be extended by using a container orchestration service such as Kubernetes along with the Google Container Engine public cloud. As the largest independent vendor of OpenStack services and technology, Mirantis has the real-world experience you need. We can:
- Stand up production-grade OpenStack in days, using one of many proven deployment configurations available through Mirantis OpenStack
- Provide systems integration to tune your OpenStack environment for your particular use cases and supporting systems
- Train your staff in the OpenStack skills they need through our top-selling public or private course offerings
s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720973.64/warc/CC-MAIN-20161020183840-00068-ip-10-171-6-4.ec2.internal.warc.gz
CC-MAIN-2016-44
4,621
13
http://ws2.binghamton.edu/fridrich/hints.html
code
Hints for speed cubing
It is very important to customize each algorithm for your hands. Some of us are right-handed, some left-handed; some may prefer algorithms which use only 2 or 3 faces so that alternate twisting from left hand to right hand is avoided. Sometimes it may be wise to perform an algorithm with the cube turned upside down, or twisted by 180 degrees. This adjustment must be done by each individual separately because everyone may have different views of which algorithms are user friendly and which are not. This takes a lot of time, but it may cut an important chunk from your total time.

Multiple algorithms
As you may notice, some positions in the last layer have several algorithms associated with them. I alternate between them to minimize turning the cube as a whole, thus cutting on time.

Finger shortcuts
Most speed cubists have also developed special sequences ("shortcuts" or "macros") of two to four moves which can be performed astonishingly fast by pushing the faces with your fingers. Yes, it does require some dexterity. On my video page you can watch me solve the cube a few times. I also perform some finger tricks.

Move algorithms to your subconsciousness
It is also important that your brain automates the algorithms into inseparable units - elementary actions - because then you will not have to think about individual moves. The individual moves will be performed "by your hands" rather than keeping your brain busy. At this stage, one can afford to think more about the next step rather than about the algorithm which is being performed. It is done for you automatically by your subconsciousness! I noticed that this automatization goes so far that if I am interrupted while performing some longer algorithm, I will not be able to finish it! In a sense, I do not know the sequence of moves and perceive the algorithm as one unit.
This may sometimes create comical situations when somebody asks you about a specific move, and you will not be able to show it slowly - you will get stuck after several moves and have to start over again to see the remainder of the algorithm.

No delays between algorithms
Another thing which is very important is to cut the delays between consecutive algorithms. One should minimize the decision time to almost zero. This issue is strongly connected with another one - the question of twisting speed.

Faster twisting does not have to mean shorter times
Dogma: one needs to be especially dexterous to be able to solve the cube that fast (in 17 seconds). I would be lying if I said that some dexterity is not important, but I insist that an average person possesses the necessary dexterity to solve the cube in really short times. I believe that almost everybody can achieve a twisting speed of 3 twists per second. Remember, all you are required to do is learn a finite set of algorithms and perform them quickly. This relates to the important issue of adjusting algorithms for your hands. So why is it possible that a faster twisting speed may bring you longer times? By performing the moves really fast, one deprives oneself of the important knowledge of what is actually happening to the cube. After performing an algorithm, one is suddenly thrown into a new position and needs some time to decide which move to choose next. If you had turned the cube just a little slower, you could actually see what is happening to the cube, and choose the best next move during the last couple of moves of the previous algorithm. If you compare the times - fast turns plus delays between moves versus slow turns plus shorter delays - you will find that the second sum may be shorter! Another argument for the second alternative is that it is very hard to turn the cube really fast, and one often encounters "stuck" cubicles, or breaks the cube to its atoms. This can slow you down as well as frustrate you.
Preparing the cube for record times
I have heard people recommend a variety of different lubricants for the cube. Among others, silicone oil, graphite, and soap were mentioned. From my experience, silicone oil worked best. Be careful before using other lubricants because some of them may be pretty aggressive and speed up the aging process of your cube. Intense twisting causes a fine dust to develop inside the cube. Some cubists say that this kind of natural lubricant is the best one. I recommend greasing the cube because a lubricated cube will turn more easily and you will be able to "cut corners" while speed cubing. But be aware of the fact that putting lubricant into a cube will make the cube more vulnerable to an accidental dismemberment.

I would like to end with a couple more remarks on the cube. First, the secret of achieving amazingly short times is not just the algorithms themselves. After all, a system will never solve the cube. Humans do! Probably the most important factor is dedication and a lot of practicing.

So, what is the best system for speed cubing?
I do not think that there is such a thing as the best system. One system may better fit one person; another system may be more natural for somebody else. I believe that any system which is worked out to sufficient perfection is good. We should not be comparing systems but cubists. Those certainly are comparable.

What are the limits of speed cubing?
Any algorithmic set which can be performed by a human must be limited to a couple of hundred, at most thousands, of algorithms. These algorithms need to be performed in a fast manner without too much thinking. This puts limits on the amount of time needed to solve the cube.
If there were a hypothetical person who could see the shortest or almost-shortest algorithm right away in the beginning (which is quite improbable), he or she would need about 2 seconds, provided the farthest position is around 20 face moves at a twist rate of 10 moves per second. Since the assumption for this estimate will probably remain unrealistic for many years to come, I estimate the limit for speed cubing at 5 seconds (the average time). One should totally abandon the concept of a record time since it has very little informational value. If somebody messes up the cube carelessly, one can take advantage of it and solve the cube in a few seconds. Therefore, for comparison purposes, I suggest using an average of 10 consecutive times. For my system, I defined the concept of a modified record: I discarded record times whenever more than one stage was skipped during the cube solving. By skipping a stage, I mean: placing the four edges using fewer than 3 moves, too much luck with the four blocks (in the second layer), skipping the orientation of the 8 cubicles in the last layer, or skipping the permutation part in the last layer. For the first two layers, it is hard to estimate the probabilities, but the last layer can be calculated exactly. The probability that after solving the second layer, the last layer will have the correct color is 1/216, and the probability that after orienting the cubicles in the last layer one will not need to permute them is 1/72. So, for example, if the last layer got assembled by chance right after the second layer, I discarded the time since the probability of that happening is too small: 1/(216*72). So, what is my modified record? It is 11 seconds. My best average out of ten was often 17 in 1983. I kept myself in good shape for many years, and I can still get to an average of about 18 after all those years. Getting back to 17 or lower would require a lot of effort, a good cube, and a complete devotion that only a rookie can possess.
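For readers who want to check the arithmetic, the last-layer figures quoted above can be reproduced in a few lines (a sketch using standard cube counting; the 72 permutation states is the figure the author uses, counting positions up to a final turn of the whole top face):

```python
# Back-of-envelope check of the article's last-layer probabilities.
# Orientation: 4 corners (3 twists each, one forced by the others) and
# 4 edges (2 flips each, one forced) give 3^3 * 2^3 states.
orientation_states = 3**3 * 2**3                       # 27 * 8 = 216
permutation_states = 72                                # as stated in the article
p_skip_orientation = 1 / orientation_states            # 1/216
p_skip_permutation = 1 / permutation_states            # 1/72
p_full_skip = p_skip_orientation * p_skip_permutation  # 1/(216*72) = 1/15552
```

So a spontaneously solved last layer occurs roughly once in 15,552 solves, which is why such times were discarded from the modified record.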
So, good luck everybody and do not give up!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475203.41/warc/CC-MAIN-20240301062009-20240301092009-00078.warc.gz
CC-MAIN-2024-10
7,772
124
https://android.stackexchange.com/questions/49799/how-do-i-install-apps-to-internal-external-sd-instead-of-phone-storage
code
I have a dual-SIM MT6589-based Android 4.2.1 phone, and I have no idea where the apps are getting installed. My phone has an internal SD card (built in), an external SD card, and something called Phone Storage (/data ??). Anyway, the apps are supposed to be saved to my external SD but they get saved to Phone Storage. Also, the option "Move to external SD card" is greyed out (it was available before!!).

My vold.fstab file:

dev_mount sdcard /storage/sdcard0 emmc@fat /devices/platform/goldfish_mmc.0 /devices/platform/mtk-msdc.0/mmc_host
dev_mount sdcard2 /storage/sdcard1 auto /devices/platform/goldfish_mmc.1 /devices/platform/mtk-msdc.1/mmc_host
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100912.91/warc/CC-MAIN-20231209134916-20231209164916-00728.warc.gz
CC-MAIN-2023-50
640
5
https://vimeo.com/irpc/likes/page:6/sort:date/format:detail
code
This is a project we just finished for a company out of Centralia, MO called Chance. We were hired to tell their story... who they are and where they came from. Production company: Tytancreates.com Jake OE, Brandon Larson, Cody Beiersdorf, and Danimals riding Elm Creek in Minnesota. Elm Creek is awesome! get out there. Stay tuned for constant web videos from The Darkside crew at vgsnow.com
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886107487.10/warc/CC-MAIN-20170821022354-20170821042354-00073.warc.gz
CC-MAIN-2017-34
392
4
https://www.phpbb.com/community/search.php?author_id=212689&sr=posts
code
Thanks... works

avaren wrote: Lele710: You haven't done the file edits, specifically the ones in includes/constants.php.

Igor.I wrote: What is the problem? Haven't noticed anything strange actually.

lele710 wrote: The problem is with login and when I try to enter the ACP.

Igor.I wrote: What is the problem with this?

lele710 wrote: This is a good script... The only problem is that it doesn't support SID. Yes, but it doesn't work.

Highway of Life wrote: Okay, but that didn't answer my question.

lele710 wrote: Ok, here is my viewtopic_body.php: And the ads aren't shown.... The same file is in the subsilver2 template and works fine.

Did you try those suggestions for getting it working?
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655896169.35/warc/CC-MAIN-20200708000016-20200708030016-00501.warc.gz
CC-MAIN-2020-29
647
8
http://myxpcar.com/system-cannot/system-cannot-execute.php
code
and if so, what to do? This usually means that the program you are trying to run was compiled against DLLs that are not on your system.

Here is a partial log from a good .exe following the same sequence up to this point:

0:00:01.203: GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FindActCtxSectionStringW") called from "c:\windows\winsxs\x86_microsoft.vc80.crt_1fc8b3b9a1e18e3b_8.0.50727.762_x-ww_6b128700\MSVCR80.DLL" at address 0x78131DBE and returned 0x7C82FD4C by thread 1.

HiJack This log attached. But I read another post that it is much easier to run the build engines via the Task Scheduler on Windows 2008. We use Windows Server 2003 R2, 32-bit. I tested here and only see a problem with one DLL on my Windows 7 system (IESHIMS.DLL). What version of Windows? Any ideas what we can do to solve this issue?

Arne Bister answered (Dec 24 '12): However, 'osagent' failed to start with the message prompted below. Manually testing probably won't solve it here, on second thought.

Author comment by charismatic100 (2011-06-01): Unable to resolve the issue with this laptop; the system is corrupted. Not sure of the cause of this current issue. It will not allow a downloaded file to be executed.
See https://software.intel.com/en-us/articles/windows-reports-the-system-cannot-execute-the-specified-program-error and "Software Restriction Policies Troubleshooting".

Author comment by charismatic100 (2011-05-11): I will not be able to work with this unit again until tomorrow, 5/12/2011.

Debug versions of an application and the Visual C++ libraries can only be deployed to another computer internal to your development site, for the sole purpose of debugging and testing your application.

I have updated to SP3. Here is the problem that I am trying to resolve: I have built an MFC application which uses Office 2007 automation and emits an Excel spreadsheet; however, I cannot install it.

The manifest contains metadata that describes the assembly dependencies of the executable. Every dependent assembly has a unique identity.

Select the button in the lower left-hand corner of the file's Properties page that says "UNBLOCK".

Reply, Jiv Kale says (August 16, 2007 at 12:05 pm): I am trying to run an application created with VS 2005 C/C++ on a machine that does not have VS 2005.

If a version of the assembly mentioned in the application's manifest is specified in a policy file, the loader looks for the version of this assembly specified in the policy file.

This time, when I run the jsl -debug command I get this message: "The system cannot execute the specified program".
No success. Here are some of my questions: 1) Why does the app load msvcr90.dll instead of msvcr90d.dll when my app is running in debug mode? 2) When I open the debug exe from VS2008...

I ran a search on the server and I couldn't find those files.

It can be either embedded inside the binary as a resource or saved as an external file in the application's local folder.

Event ID: 1530, Source: User Profile Service. Windows detected your registry file is still in use by other applications or services. So can anyone help me out with this?

In this section the most common reasons for a C/C++ application failing to load are described, with proposed steps to resolve the problems.

I also have no control over the Firefox executable. It appears that when my DLL is installed, Firefox tries to load the VC8 runtime when it starts up, which of course produces the R6034 error.

More information is here: http://msdn2.microsoft.com/en-us/library/ms235299(VS.80).aspx (Nikola)

Reply, Joy says (April 20, 2007 at 5:55 am): Dear Nikola, I thank you first of all for your valuable post.
What can be the reason that causes the service to fail?

So what can I do in this scenario, where I have a DLL that depends on the Microsoft.VC80.CRT assembly (and which has a correct manifest that reflects that), but which is...

There is an issue with opening IE8: "Internet Explorer has stopped working."

Each assembly is specified in the manifest by its name, version number and processor architecture.

Profile is local admin. Thanks in advance. Nevermind, it's fixed now.

I need one clarification regarding how to install the manifest file. And that explains why the error message "The system cannot execute the specified program" prompted.

Does your server have Excel installed?
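To make the manifest discussion above concrete, a dependency on the VC80 CRT side-by-side assembly is typically declared in the application manifest with a fragment like this (a sketch only; the version and public key token shown are the ones appearing in the WinSxS path in the log earlier in this thread, so verify them against your own redistributable):

```xml
<!-- Hypothetical application manifest fragment: declares a dependency
     on the Microsoft.VC80.CRT side-by-side assembly by name, version,
     architecture, and public key token. -->
<dependency>
  <dependentAssembly>
    <assemblyIdentity type="win32"
                      name="Microsoft.VC80.CRT"
                      version="8.0.50727.762"
                      processorArchitecture="x86"
                      publicKeyToken="1fc8b3b9a1e18e3b" />
  </dependentAssembly>
</dependency>
```

If the loader cannot find an assembly matching this identity (in WinSxS or next to the executable), it fails with exactly the "The system cannot execute the specified program" error discussed here.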
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583512332.36/warc/CC-MAIN-20181019062113-20181019083613-00136.warc.gz
CC-MAIN-2018-43
6,733
16
https://datadryad.org:443/stash/dataset/doi:10.5061/dryad.hc54664
code
Data from: Order-level fern plastome phylogenomics: new insights from Hymenophyllales Cite this dataset Kuo, Li-Yaung; Qi, Xinping; Ma, Hong; Li, Fay-Wei (2019). Data from: Order-level fern plastome phylogenomics: new insights from Hymenophyllales [Dataset]. Dryad. https://doi.org/10.5061/dryad.hc54664 PREMISE OF THE STUDY: Filmy ferns (Hymenophyllales) are a highly specialized lineage, having mesophyll one cell layer thick and inhabiting particularly shaded and humid environments. The phylogenetic placement of Hymenophyllales has been inconclusive, and while over 87 whole fern plastomes have been published, none was from Hymenophyllales. To better understand the evolutionary history of filmy ferns, we sequenced the first complete plastome for this order. METHODS: We compiled a plastome phylogenomic dataset encompassing all eleven fern orders, and reconstructed phylogenies using different data types (nucleotides, codons, and amino acids) and partition schemes (codon positions and loci). To infer the evolution of fern plastome organization, we coded plastomic features, including inversions, inverted repeat boundary shifts, gene losses, and tRNA anticodon sequences as characters, and reconstructed the ancestral states for these characters. KEY RESULTS: We discovered a suite of novel, Hymenophyllales-specific plastome structures that likely resulted from repeated expansions and contractions of the inverted repeat regions. Our phylogenetic analyses reveal that Hymenophyllales is highly supported as either sister to Gleicheniales or to Gleicheniales + the remaining non-Osmundales leptosporangiates, depending on the data type and partition scheme. CONCLUSIONS: Although our analyses could not confidently resolve the phylogenetic position of Hymenophyllales, the results here highlight the danger of drawing conclusions from an "all-in" phylogenomic dataset without exploring potential inconsistencies in the data.
Finally, our first order-level reconstruction of fern plastome structural evolution provides a useful framework for future plastome research.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473871.23/warc/CC-MAIN-20240222225655-20240223015655-00867.warc.gz
CC-MAIN-2024-10
2,074
4
https://memrise.zendesk.com/hc/en-us/articles/360012580598-How-can-I-learn-Grammar-
code
On our iOS and Android apps, you can learn Grammar on the following language pairs.

⚠️ Please note: Grammar is not currently available on the website.

In this article:
- Find courses with Grammar if you speak English 🇬🇧🇺🇸
- Find courses with Grammar if you speak Spanish (Spain) 🇪🇸
- Find courses with Grammar if you speak Chinese 🇨🇳, French 🇫🇷, German 🇩🇪, Japanese 🇯🇵, Korean 🇰🇷, Italian 🇮🇹 or Russian 🇷🇺
- How Grammar works

For UK and US English speakers:
- French 1 & 2
- Spanish (Spain) 1 & 2
- German 1 & 2
- Italian 1 & 2
- Japanese 0, 1 & 2
- Mandarin Chinese 1 & 2
- Russian 1 & 2
- Korean 1 & 2

For Spanish (Spain) speakers: For Chinese, French, German, Japanese, Korean, Italian & Russian speakers:

How it works

You can find Grammar levels in your course's dashboard. To start, simply tap the relevant levels and select "Learn Grammar". You will be shown a grammar rule and a few examples. You will then be asked to complete multiple choice, tapping and typing tests about the grammar point. If you get stuck, simply tap the 💡 lightbulb icon at the bottom for a quick recap.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320301592.29/warc/CC-MAIN-20220119215632-20220120005632-00446.warc.gz
CC-MAIN-2022-05
1,151
22
https://coghlan.me/community-metrics/
code
Disclaimer: I am a community practitioner. I follow the practice of community management closely and apply what I learn to the work I do and the communities I build. I do not take credit for the creation of these ideas. I have tried to give credit to the original sources as much as possible.

As luck would have it, I had the chance to follow last week's FlylessWeekly with DevRel legend Phil Leggetter, who covered one of the most popular topics in DevRel, defining DevRel roles, by tackling another widely discussed topic in DevRel: how to measure the impact of community. Here are my notes from that conversation.

Community metrics are a popular topic in DevRel because they're hard. They're hard to define, hard to measure, and hard to get right. A dollar of revenue is roughly equal in value across companies, and the value of a user (LTV) to a company is fairly easy to quantify for most businesses, but the value of an interaction with a community member can vary pretty widely across communities and even within a community. How does the impact of a person attending a meetup to hear you speak compare to the value of a merge request that quickly fixes a bug impacting your users?

Community metrics 101

The difficulty with defining community metrics sometimes (many times?) leads to a set of metrics that fail to capture the true impact of a community. For example, maybe you have evaluated the value of, or even tracked, GitHub stars for your project, Twitter followers, Meetup attendees, or blog posts published. All of those metrics are directionally helpful. If those numbers are growing, you are probably doing something right. But these metrics don't tell the whole story, and as your community grows so does the need to invest in it.

Tell me your favorite community metric. Wrong answers only. - John Coghlan (@john_cogs) March 2, 2021

If I missed any of your (least) favorite metrics, make sure to reply to the tweet above and let me know.
Emerging community metrics for tech communities

In 2016, as a newcomer to community management and developer relations, the so-called "vanity metrics" (note: I don't love this term, as these metrics are often quite important for new communities) seemed to be the best we could do. In the last year, however, I've seen the tooling and availability of community metrics shift in an exciting new direction.

One cool new trend is the emergence of new ways to track community impact through a more holistic lens. Tools like Orbit and Bitergia allow businesses and project maintainers to view "atomic"-level activity (kudos to Sean Goggins from CHAOSS, who introduced me to this term) to build out a more complete view of the contributions from a developer or a community at large. Mary Thengvall's DevRel Qualified Leads is a new metric that looks to take an established business metric (XQLs) and adapt it to measure the impact of community and DevRel activities. I would be remiss not to mention CHAOSS, a Linux Foundation project that focuses on metrics to track community health.

While presenting at Flyless, I also learned about how some other teams are tracking their metrics. My notes include:
- Ryan MacLean tracks engagement with the content (how many people are raising their hand to ask me a question?) as a KPI
- Lorna Mitchell pointed to Phil Leggetter's AAARRRP framework as a model of more advanced metrics
- Caroline Lewko and others pointed to Airtable, Marketo, Mixpanel, and Hubspot as tools to help track community impact

Each of these tools and metrics aims to quantify the impact of interactions with a community member, rather than count the number of interactions. We have taken similar steps at GitLab, including the creation of a new metric that we are using to track the impact of our community.

Community at GitLab

Before jumping into the metrics, I want to offer some background on GitLab and why we take community so seriously.
GitLab started in 2011 as an open source project, and the company launched as a Show HN post from our CEO the following year. Our CEO remains an active member of the community on Hacker News. Our well-known transparency, "everyone can contribute" mission, and open source stewardship create many opportunities for our community to contribute to GitLab.

Our long-standing community KPI: wider community MRs per release

Our first and longest-standing company-level community metric is Wider Community merged MRs per release. Essentially, this tracks the number of new contributions to our product from the community on a monthly cadence. These contributions are essential to GitLab's dual flywheels strategy. This strategy defines the virtuous cycle of community contributions creating more features, leading to more users and revenue, which enables us to invest in new features, which attract more users and contributors, extending the cycle.

The new kid on the block: MRARR

Our latest community-focused metric, though it is owned by our Engineering org, is MRARR. At GitLab, we affectionately call this the pirate metric. If that doesn't make sense to you, try pronouncing it as a single word. :) When thinking about next-level community metrics, this is a favorite of mine. MRARR tracks how frequently customers are contributing to GitLab by measuring the number of merge requests from customers multiplied by their Annual Run Rate. By contributing to GitLab, these customers are making the product better suited to their needs (increasing stickiness/retention) and likely better for other enterprises (improving product market fit). I love it and would love for other open source companies to consider adopting this metric.

As DevRel evolves as a practice and new tools and metrics emerge, I hope that we'll see increasing standardization of metrics across communities. When I posed this idea to today's FlylessWeekly attendees, the response was mixed.
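The MRARR idea described above reduces to a simple weighted sum. Here is a hypothetical sketch (the field names and figures are invented for illustration, since the post does not publish GitLab's exact inputs):

```python
# Hypothetical sketch of MRARR: for each contributing customer, weight
# their merged merge requests by their annual run rate (ARR) and sum.
def mrarr(customers):
    return sum(c["merged_mrs"] * c["arr"] for c in customers)

# Made-up example accounts
example_customers = [
    {"name": "Acme",   "merged_mrs": 3, "arr": 500_000},
    {"name": "Globex", "merged_mrs": 1, "arr": 1_200_000},
]
```

One design consequence: a single MR from a large-ARR customer moves the metric more than many MRs from small accounts, which is the point; it highlights contributions from the customers with the most revenue at stake.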
Some felt that the variability of communities necessitates a wide variety of metrics. While I trust their judgement, I remain optimistic that better metrics will lead to increased investment in community. If you've got ideas, please drop me a line on Twitter. The other "top 3" topic is "Where does DevRel belong in an organization?" and I'm excited to see who will volunteer to tackle that one.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947473598.4/warc/CC-MAIN-20240221234056-20240222024056-00309.warc.gz
CC-MAIN-2024-10
6,275
28
https://docs.diladele.com/tutorials/policy_based_routing_squid/conclusion.html
code
We have successfully set up policy based routing of HTTP and HTTPS traffic from our router to a separate proxy box. Both HTTP and HTTPS traffic can now be filtered for adult language and unwanted sites.

- Original Ubuntu router article: https://arstechnica.com/gadgets/2016/04/the-ars-guide-to-building-a-linux-router-from-scratch
- Explanation of how the iptables firewall processes packets: http://www.faqs.org/docs/iptables/traversingoftables.html
- Squid Wiki policy based routing article: https://wiki.squid-cache.org/ConfigExamples/Intercept/IptablesPolicyRoute

If you are using Google Chrome, you are advised to block the QUIC protocol on your router; otherwise Chrome will be able to bypass the transparently redirected proxy when going to QUIC-enabled sites, like google.com, youtube.com, etc. For more information see the Squid Wiki article http://wiki.squid-cache.org/KnowledgeBase/Block%20QUIC%20protocol. Adding REJECT rules for the UDP protocol on outgoing ports 80 and 443 should be enough.
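The REJECT rules mentioned above could look something like this (a sketch, assuming forwarded LAN traffic passes through the router's FORWARD chain; adjust the chain and any interface matches to your own setup from the tutorial):

```shell
# Reject outgoing QUIC (UDP on ports 80 and 443) so Chrome falls back
# to TCP, which is what the policy routing redirects to the proxy box.
iptables -A FORWARD -p udp --dport 80  -j REJECT
iptables -A FORWARD -p udp --dport 443 -j REJECT
```

REJECT (rather than DROP) makes Chrome give up on QUIC quickly instead of waiting for a timeout before retrying over TCP.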
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100381.14/warc/CC-MAIN-20231202073445-20231202103445-00870.warc.gz
CC-MAIN-2023-50
981
5
https://www.techadvisor.co.uk/forum/helproom-1/if-finding-pca-site-slowbrowser-change-needed-4192908/
code
Google Chrome works fine for me on this website, where IE does not. If you have been following some of the postings over the past few weeks, it would appear that some people have problems with certain devices and browsers when others do not. In some cases, suggestions have been offered by people like Woolwell and Bremner et al.

I normally run IE8 at home and at work, but sometimes have used Chrome at home. I find PCA not just slow but sometimes stopped. Also eBay. At present I am having to type this message using the scroll bar to refresh the screen in order to see what I am typing. Heaven knows (or should that be Microsoft) what's going on today.

I have no doubt that some browsers will work better with the PC Advisor web site, but why should this be? I use IE8, and it's acceptably fast with all of the web sites I visit, including other forums, except for PCA. Not everyone with a computer is tech-savvy enough to download and use another browser (or wants to use another browser) other than the one that their PC came with, which in 99.99% of cases must be IE. So why can't PCA's web site be programmed so that it runs smoothly and quickly on IE, just like most other web sites do?

I find Firefox and Chrome the most reliable on all the daily websites I use. IE is well down my list, especially on the PCA website, which, as you seem to say, doesn't work very well with IE for some reason. It would be nice to really know the answer to this.

This thread is now locked and can not be replied to.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578532929.54/warc/CC-MAIN-20190421215917-20190422000857-00055.warc.gz
CC-MAIN-2019-18
1,503
9
https://firelightwp.com/wordpress-lightbox-docs/pro-trigger-fancybox-with-a-url-hash/
code
Easy FancyBox Plugin Docs

[Pro] Trigger FancyBox with a URL hash

With the Pro extension, it is possible to open a light box automatically, depending on the URL hash used by the visitor. This can be useful for links in a mailing or when sharing links on social media, when you wish visitors that follow such a link to be presented with an automatic light box popup. You can have different popups available on a single page or across your website (with an HTML widget in the footer or sidebar) that will react to the URL hash code, the part of the URL following #. For example, you could show a detailed product description in a popup on a product page via a link like https://your.site/products/my-product/#details from an advertisement e-mail, while this popup does not show when visitors browse to the product page (https://your.site/products/my-product/ without the #hash) via normal navigation on your site.

The following steps assume you do not already have a link or button that opens a FancyBox light box on your site. If you have such a link already, you'll have to adapt the existing link code to match this example.

- Go to Settings > Media and find the FancyBox option Open on page load under Auto popup in the Miscellaneous section. Set it to "Link with ID matching URL hash", then set the value Delay in milliseconds to "0" (zero)* and Save Changes.
- Go edit the page where you wish to be able to use this popup. If you want it to be available on every page on your site, follow the next steps but place all code in an HTML widget in the sidebar or footer.
- Insert an HTML block with this code: <a href="#product-details" class="fancybox-inline" id="details">Details</a>
- Next, on a new line paste this: <div class="fancybox-hidden"><div id="fancyboxID-1" class="hentry" style="width:460px;max-width:100%;">
- Below this HTML block, you can now insert content like your detailed product description or a subscription form (shortcode or block) or anything else.
- Finally, place another HTML block below your popup content containing this: </div></div>
- Now save your page/post and view the result on the public side. Test the button/link to see if the popup works. If it doesn't, edit your page/post again and verify your code. Test the URL hash: copy your page/post URL (web address), open a new browser tab, and paste the copied URL in the address bar without hitting Enter. Then type #details after the URL and hit Enter. The page should load and the popup should automatically open.

Notes and tips:

*) The default delay for automatic popups is one second, but used in combination with a URL hash, a low value (it can be 0) is best to prevent users interacting with the page before the popup opens. You can adjust this on Settings > Media under Auto popup in the Miscellaneous section.

**) You can use a class like button (depends on your theme) or a custom class to style the link as a button. If you do not wish the link to be visible at all, hide it by removing the link text ("Details" in this example) or giving it the class

***) If you forget these closing div tags, your page will look very weird, with missing sidebars or footer, or will even be completely unusable! Go back to edit the page/post (or widget) and make sure the two closing div tags are properly placed after the inline content.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947476397.24/warc/CC-MAIN-20240303174631-20240303204631-00023.warc.gz
CC-MAIN-2024-10
3,321
22
https://drobocommunity.m-ize.com/t/ssd-850-sata/144161
code
My mSATA is about 10 months old. The Drobo Dashboard said the drive had 7% health left. I contacted Samsung for warranty repair. They wiped the data, ran it through a number of tests, updated the firmware, and sent it back. They said it passed all of their tests. It came back today. I turned off my 5N, popped in the mSATA, and watched the Drobo Dashboard as it booted. At first the mSATA health said 100% and I was pleased. After booting, the dashboard now says "Good (Life 7%)" again. It is still green, but what should I do now? At this point I'm guessing I'll leave it in until it fails. Could this be the same case as with regular drives, where the Drobo is just more sensitive? But seriously, 100% to 7%.

Samsung just called me to do a QA survey. I explained what happened and gave them a low score (1 out of 7). I then had them transfer me to technical support. I told them what happened and they are going to exchange the unit. Today the dashboard says the health is down to 6%.

It's interesting about the 7% (or 6%, as seen later)... if the same mSATA was recognised as having a certain amount of cells used (or seen as working in a certain way for those cells), then it would be understandable if it continued from where it left off, though I would be interested in seeing how the new replacement goes, and also whether it is a similar model or a different one. Can you remember if it was 100% when you first installed it? (It might have been, though just wondering.) Also, if my basic maths (and 1 coffee) is OK... is that model 250GB raw? If so, 250 (*0.9 estimate) * ~80k read/writes = about 18000000 GB throughput? In about 10 months of usage, do you have any other info about your data and usage, and any database-related programs (or search indexing) that might help to account for the amount used?
Ah, thanks for the clarification. Using the same coffee maths, it looks like about 8640000 GB of throughput, but maybe the max values are not always reached... If you can think of a way to work out approximately how much data your backups take, at the maximum, and how often the backup/scanning takes place, maybe there is a way to see if the values match up (including the most you ever copied or accessed your videos in a day, times how long you had it). It might be interesting just to see if it would tally up in some way, but only if you have the time or inclination, as it might be better to just watch some more videos while you wait for your replacement mSATA.
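The "coffee maths" in the thread boils down to a one-line endurance estimate. A sketch (the 80k-cycle figure is the thread's assumption, not a datasheet value, and the 0.9 is the poster's usable-capacity guess):

```python
# Rough SSD endurance estimate: usable capacity times program/erase
# cycles gives an upper bound on total writable data, in GB.
def endurance_gb(capacity_gb, usable_fraction, pe_cycles):
    return capacity_gb * usable_fraction * pe_cycles

# The thread's example: 250 GB raw, 0.9 usable, ~80k cycles
total_gb = endurance_gb(250, 0.9, 80_000)   # = 18,000,000 GB
```

Real drives report wear through SMART attributes rather than this kind of back-of-envelope figure, which may explain why the Dashboard's "Life" percentage survived Samsung's refurbishment.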
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945242.64/warc/CC-MAIN-20230324020038-20230324050038-00230.warc.gz
CC-MAIN-2023-14
2,487
13
http://www.insanelymac.com/forum/topic/24633-zhow-to-fix-boot-on-macosx/page-2
code
Oh thank god! I have my blessed Mac back! FYI, you don't need the startupfiletool anymore. Read the readme file in /usr/standalone/i386 on the install CD. It has the instructions to do an fdisk to overwrite the MBR and, if needed, the partition boot sector too. It's pretty easy. I wouldn't have found that readme without your post though... so thanks a bunch!

Also, I see a lot of people complaining about write problems to the drive... don't forget to umount the disk before writes. This solves the gptsync "data write failed at position 0: Bad file descriptor". Find out what is mounted with the "mount" command.

Disclaimer: I accept no responsibility for a dead hard drive or loss of data, but this should work for you. I have had the same problem before. Boot into single user mode (-s at the boot prompt) and type the following:

dd if=/usr/standalone/i386/boot1h of=/dev/rdisk bs=512 count=1
/usr/sbin/startupfiletool /dev/rdisk1 /usr/standalone/i386/boot
bless -device /dev/disk0 -setBoot

This assumes you are using the first hard drive. I have compiled startupfiletool (from Apple's own open source pages) and included it in this post; you need to place this file in /usr/sbin.
s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689490.64/warc/CC-MAIN-20170923052100-20170923072100-00213.warc.gz
CC-MAIN-2017-39
1,168
12
http://www.orangestripes.com/casestudy_ngpm.html
code
Moody’s KMV is the world’s leading provider of quantitative credit analysis tools to lenders, investors, and corporations. For the next major release of their best-selling product, Portfolio Manager (desktop software that major financial institutions use to measure portfolio risk and concentrations, manage economic capital, and evaluate trading opportunities), the project sponsors wanted to drastically improve the usability, provide more useful features, and build the product using the .NET platform and an Agile Scrum development process. As the user experience lead for the project, my responsibilities included interaction design, user research, managing visual design contractors, and participating in the long-term product strategy. From the start of the project, I worked very closely with the executives, product managers, and engineers to understand the business goals and to combine the agile development process with a traditional usability engineering process. Based on my experience designing products built with an agile development process, I managed to convince management to allocate several weeks for user research before the project launch, and to use Basecamp as the portal for all user-experience-related issues and deliverables. During the user research, I worked with my team to analyze competitors’ solutions, perform heuristic evaluation of the existing version of the product, and conduct online surveys and phone interviews with target users. Based on information from the user research, I worked with my team to create the personas (target user profiles) and scenarios, and to design the prototypes in small iterations that engineers could immediately use. Some of the user experience improvements to the product were the following: a role-based user interface, an intuitive tool to build economic simulations, interactive reporting, customizable dashboards, and heat maps that show a portfolio summary.
The beta version of the product has received rave reviews from industry analysts and leading financial institutions. In addition, some major financial institutions have already purchased the product and plan to deploy it throughout their business units.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368705976722/warc/CC-MAIN-20130516120616-00061-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,217
10
https://mail.python.org/pipermail/ironpython-users/2006-June/002432.html
code
[IronPython] IronPython support In Visual Studio 2005 April VSSDK? Lesley & Mitch Barnett mbarnett at uniserve.com Thu Jun 1 19:23:41 CEST 2006

Here is the code in the interpreter to show Types in Types: does it work or is it a bug?

IronPython 1.0.2328 (Beta) on .NET 2.0.50727.42
Copyright (c) Microsoft Corporation. All rights reserved.
>>> from System.Reflection import *
>>> a = Assembly.LoadFrom("mapack.dll")
>>> Types = a.GetTypes()
>>> for Type in Types:
...     print Types
>>> Types = a.GetTypes()
>>> for Types in Types:
...     print Types

Also, when running the debugger in VS, when it gets to the break point, it asks for the IronPython source code at: Should I be pointing something in VS at the IronPython source code? Or is this a bug?

From: Dino Viehland [mailto:dinov at exchange.microsoft.com] Sent: May 30, 2006 8:44 AM To: Discussion of IronPython Subject: Re: [IronPython] IronPython support In Visual Studio 2005 April

The two files that you see is being done through partial class support, which is a language feature that both C# & VB support. Python doesn't have such a feature, and so we've been discussing internally ways we could do this - unfortunately we haven't come up with the ideal solution yet. Ultimately we want to have a very similar experience to C# & VB, but it certainly may not be the same. I believe our next round of VS integration work will enable the drop-down list for types & members. Debugging should work today, although you won't get the greatest display for your locals or classes always - but you can at least step through. I don't believe we have any specific plans to improve debugging immediately. You should be able to place your code anywhere in Form1.py. The CodeDom parser should just merge generated code in along w/ your code. If you run into any issues there let us know :-). Is the issue w/ the code the one Vagmi pointed out (types in types)? This should work.
One suggestion would be to look in the Data property of the exception, and find the Python version of the exception - it may contain more meaningful information.

From: users-bounces at lists.ironpython.com [mailto:users-bounces at lists.ironpython.com] On Behalf Of Lesley & Mitch Sent: Tuesday, May 30, 2006 5:02 AM To: users at lists.ironpython.com Subject: [IronPython] IronPython support In Visual Studio 2005 April VSSDK?

Hi, using the April 2005 VSSDK, I am able to fire up a new Visual Studio PythonProject (Experimental Hive) using the Windows Application template. I can drag and drop UI controls from the Toolbox onto the design surface. However, I notice the IronPython code being generated for both the designer and the form is placed in a single file called Form1.py. In a C# WinForm project, the code is separated into 2 files, Form1.cs and Form1.Designer.cs. Is the plan to have IronPython fully integrated into Visual Studio in the same way as a C# project, including a drop-down list for types and members in the code editor? What about debugger support? I ask this as I am trying to build a simple IronPython Windows app, and while being new to the Python language I am also finding it difficult to figure out exactly where I put my code in Form1.py, because both the design code and "regular" code are in one place instead of separated as in a C# project. Finally, I cannot get some simple code to work as it throws an exception, "A first chance exception of type 'IronPython.Runtime.ArgumentTypeException' occurred in IronPython.dll". When I set the breakpoint to step through the code I get into disassembly, and I am not good at reading IL. Then the program aborts with the error above. My project is real simple: it has a Windows form and a listBox control.
All I am doing is using System.Reflection to GetTypes from a DLL and put them in a In C# it looks like:

private void GetMyTypes()
Assembly myAssembly =
Type types = myAssembly.GetTypes();
foreach (Type type in types)

and in IronPython:

myAssembly = System.Reflection.Assembly.LoadFrom("mapack.dll")
types = myAssembly.GetTypes()
for types in types:

In both cases, the method is being called right after InitializeComponent(). Any ideas as to why this IronPython code won't run? Thanks in advance,
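The "Types in Types" question in the thread comes down to plain Python name binding, independent of .NET; a minimal sketch (ordinary CPython, no IronPython required):

```python
seq = ["a", "b", "c"]

# Variant 1: loop variable named differently, but the *list* name is used
# in the body -- you get three copies of the whole list, not the elements.
printed = [seq for item in seq]
assert printed == [seq, seq, seq]

# Variant 2: `for seq in seq` -- legal, because the iterator is taken from
# the list once, before the name is rebound. Each pass rebinds `seq` to the
# next element; after the loop, the name no longer refers to the list.
for seq in seq:
    pass
assert seq == "c"
```

So both spellings in the original message "work" in the sense that they run; the second just destroys the reference to the type list afterwards, which is why rebinding the loop variable to the name of the iterable is best avoided.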
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425155213-00046.warc.gz
CC-MAIN-2019-18
4,281
80
http://www.gathered.tv/en/benedetto-de-martino
code
“It is wonderful, when you get a deep insight into things.” De Martino graduated in biotechnology and pharmacology from the University of Naples in 2003. He obtained his PhD in neuroscience from the Wellcome Trust Centre for Neuroimaging at University College London. He researches human decision-making by integrating economic models with computational and cognitive neuroscience tools, trying to refine and extend economic concepts with complementary knowledge. For two years, he worked as a post-graduate researcher in the Economics Department of the California Institute of Technology (Caltech). Currently, he is employed as a researcher in the Division of Psychology and Language Sciences at University College London.
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585696.21/warc/CC-MAIN-20211023130922-20211023160922-00582.warc.gz
CC-MAIN-2021-43
759
4
http://linux.softpedia.com/get/Communications/Filesharing/GShare-13790.shtml
code
The GShare project allows users to easily share files between computers. Shared files are published on the network and displayed automatically in the Nautilus Network Servers browser. An internal FTP server is used to share the files with other users.

· GTK+ version 2.8.x
· Mono Gtk#
· Gnome#
· DBus (w/ Mono bindings)
· Avahi (w/ Mono bindings)

What's New in This Release:
· Use Mono 2.0 compiler
· Dropped dbus-sharp, now using NDesk DBus
· Corrected some Ubuntu dependencies
· Other small bug fixes
s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802775394.157/warc/CC-MAIN-20141217075255-00085-ip-10-231-17-201.ec2.internal.warc.gz
CC-MAIN-2014-52
497
12
http://eduardoieran.pages10.com/An-Unbiased-View-of-python-project-help-20085147
code
However, the site is mostly maintained by volunteers, we don't provide any specific Service Level Agreement, and as can be expected for a large distributed system, things can and sometimes do go wrong. See our status page for current and past outages and incidents. If you have high availability requirements for your package index, consider either a mirror or a private index. How do I contribute to PyPI? Open source software is made better when users can easily contribute code and documentation to fix bugs and add features. Python strongly encourages community involvement in improving the software. Learn more about how to make Python better for everyone. Tip: Even if you download a ready-made binary for your platform, it makes sense to also download the source. The project name conflicts with a Python Standard Library module from any major version from 2.5 to present. The project name has been explicitly prohibited by the PyPI administrators. For example, pip install requirements.txt is a common typo for pip install -r requirements.txt, and should not surprise the user with a malicious package. The project name has been registered by another user, but no releases have been created. How do I claim an abandoned or previously registered project name? PyPI is powered by Warehouse and by a variety of tools and services provided by our generous sponsors. Can I rely on PyPI being available? We use a number of terms to describe software available on PyPI, like "project", "release", "file", and "package". Sometimes those terms are confusing because they're used to describe different things in other contexts. Here's how we use them on PyPI: A "project" on PyPI is the name of a collection of releases and files, and information about them. Projects on PyPI are made and shared by other members of the Python community so that you can use them.
There is currently no established process for performing this administrative task that is explicit and fair for all parties. If you no longer have access to the email address associated with your account, file an issue on our tracker. If you've forgotten your PyPI password but you remember your email address or username, follow these steps to reset your password: go to reset your password. PyPI will reject uploads if the description fails to render. To check a description locally for validity, you may use readme_renderer, which is the same description renderer used by PyPI. How do I get a file size limit exemption or increase for my project? The plaintext password is never stored by PyPI or submitted to the Have I Been Pwned API. PyPI will not allow such passwords to be used when setting a password at registration or when updating your password. If you receive an error message saying that "This password appears in a breach or has been compromised and cannot be used", you should change it on all other sites where you use it as soon as possible. If you have received this error while trying to log in or upload to PyPI, then your password has been reset and you cannot log in to PyPI until you reset your password. Integrating Transport Layer Security, or TLS, is part of how we ensure that connections between your computer and PyPI are private and secure. It is a cryptographic protocol that has had several versions over time. PyPI turned off support for TLS versions 1.0 and 1.1 in April 2018 (reason).
If you are having trouble with pip install and get a No matching distribution found or Could not fetch URL error, try adding -v to the command to get more information: pip install --upgrade -v pip. If you see an error like There was a problem confirming the ssl certificate or tlsv1 alert protocol version or TLSV1_ALERT_PROTOCOL_VERSION, you need to be connecting to PyPI with a newer TLS support library. However, one is currently in development, per PEP 541. PEP 541 has been accepted, and PyPI is creating a workflow which will be documented here. How can I upload a description in a different format?
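The breached-password check described above can stay private because only a fragment of a hash ever leaves your machine. A minimal sketch of the k-anonymity range-query idea (this illustrates the general scheme, not PyPI's actual client code):

```python
import hashlib

def hibp_range_parts(password):
    """Split a password's SHA-1 digest into the 5-character prefix that a
    k-anonymity range query would send to the Have I Been Pwned API, and
    the suffix that is compared locally against the returned candidate
    list. The plaintext password never leaves the machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

prefix, suffix = hibp_range_parts("password")
print(prefix)  # 5BAA6
```

The server responds with every known breached-hash suffix sharing that prefix, so it learns only that your password is one of hundreds of candidates, and the final match is done on your side.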
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232257361.12/warc/CC-MAIN-20190523184048-20190523210048-00427.warc.gz
CC-MAIN-2019-22
4,224
13
http://mac.freecode.com/2014/6/17
code
SmartGit/Hg is a graphical user interface for Git and Mercurial which can work with SVN repositories. It supports cloning from common repository providers (e.g., GitHub, Assembla), assists Git newbies, and also offers advanced, powerful Git features. It provides several tools to help create clean commits, for example by allowing the user to commit just parts of changed files and to reorder and squash unpushed commits. If you are using GitHub or GitHub Enterprise, SmartGit/Hg can work easily with pull requests (creation, resolving) and commit comments. SmartGit/Hg ships with a built-in SSH client, file comparer, and merge tool which are capable of syntax coloring for many languages. Release Notes: This build fixes a couple of bugs. Because of a regression, Help|Check for New Version must be used to get it (or download the new bundle).

JPPF makes it easy to parallelize computationally intensive tasks and execute them on a Grid. Release Notes: This beta release brings major new features, focused on the ease of use, consistency, and scalability of the client API.

Jailer is a database subsetting and browsing tool. It is a tool for data exporting, schema browsing, and rendering. It exports consistent, referentially intact row-sets from relational databases. It removes obsolete data without violating integrity. It is DBMS-agnostic (by using JDBC), platform independent, and generates DbUnit datasets, hierarchically structured XML, and topologically sorted SQL-DML. Release Notes: An incompatibility with Java 8 was fixed.

Aspose.Pdf is a .NET PDF component for writing PDF documents without using Adobe Acrobat. It supports form field creation; document, text, and page properties; color space; and text, heading, and attachment settings. It lets you create PDF documents by using its API with XML templates and XSL-FO files. It also converts HTML, XSL-FO, and MS Word to PDF. Other features include image formats and security features, hyperlinks, the ability to add footnotes, automatic fitting to content in a table, decimal tab stops, HTML tags, and keeping paragraphs together when breaking pages. Release Notes: This release added support for converting XFA forms to standard forms. In order to cater for this requirement, the values from an enumeration named FormType can be passed to the Document.Form.Type property. Developers can now use a single approach to set access privileges when creating new PDF documents or manipulating existing PDF files. Moreover, adding layers to PDF documents and getting the destination URL of a hyperlink in PDF documents are also supported. It also enhanced PDF to image conversion, saving PDF files in a particular PDF version, deleting images from PDF files, and much more.

Impro-Visor is a music notation and playback tool for helping jazz musicians learn to improvise. It features a notation GUI, automated playback of chords and rhythm using MIDI, and improvisation advice provided in a variety of ways, including being able to improvise jazz itself. Data are stored as open-format text files. MIDI and MusicXML export are also available. Release Notes: The Style/Section editor has been changed to allow greater flexibility. There is a virtual keyboard for entering notes on the screen. There is now an option within Import MIDI Tracks from File to infer chords. The Style Extractor no longer requires a leadsheet file to specify chords; extraction is done only from MIDI files. The grammar formalism contains some new constructs, including the ability to specify relative pitches (rather than just abstract notes), and other built-ins. The roadmap analysis algorithm has been changed to use harmonic tempo.

The ServerStatus application will display a window that shows the status of a list of servers, NAS devices, routers, etc. ServerStatus will 'ping' each server/network device once per minute to determine if it is 'online' or 'offline'. Release Notes: A Ping command is now used rather than the isReachable() method.

Gitblit is a pure Java stack for managing, viewing, and serving Git repositories. It's designed primarily as a tool for small workgroups which want to host centralized repositories. Release Notes: This release features a My Tickets page, Milestone CRUD pages, overhauled New Repository and Edit Repository workflows, and overhauled User Profile pages with SSH key management and user preferences.

DKPro WSD provides UIMA components which encapsulate corpus readers, linguistic annotators, lexical semantic resources, WSD algorithms, and evaluation and reporting tools. You configure the components, or write new ones, and arrange them into a data processing pipeline. DKPro WSD is modular and flexible. Components which provide the same functionality can be freely swapped. You can easily run the same algorithm on different data sets, or test several different algorithms on the same data set. Release Notes: Evaluators now permit chaining of backoff algorithms. There are now annotators that allow for disambiguating the complete text collectively. There is now a weighted MFS baseline. The sense cluster evaluator now computes McNemar's test. The sense cluster evaluator now handles the case where there are multiple gold-standard senses, and includes undisambiguated instances in the confusion matrix. Bugs were fixed.

ToPIA (short for Tools for Portable and Independent Architecture) is a technical platform abstraction framework. It consists of a persistence module and services for migration, replication, and security. Release Notes: This version refactors generated code for delete on relation without association entity, enables schema initialization by default, deprecates the findByTopiaId, findByNaturalId, and tryFindByTopiaId methods, adds a forNaturalId() method to dao, and adds isEmptyCollection(property) to HqlAndParametersBuilder. It also fixes bugs that prevented multiple migrations from being resumed in case of failure, and a bug in the migration lifecycle.
s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917121665.69/warc/CC-MAIN-20170423031201-00446-ip-10-145-167-34.ec2.internal.warc.gz
CC-MAIN-2017-17
5,961
18
http://pawelgoscicki.com/archives/2008/03/mongrel_cluster-not-starting-after-hard-reboot/
code
Does the following error sound familiar?

** !!! PID file log/mongrel.pid already exists. Mongrel could be running already. Check your log/mongrel.log for errors. ** !!! Exiting with error. You must stop mongrel and clear the .pid before I'll attempt a start.

It usually happens when the server crashes. After that you need to ssh into it, remove the mongrel pid files, and start the cluster manually. No more. I assume you have mongrel_cluster set up properly, i.e. the project’s config file is in /etc/mongrel_cluster and the mongrel_cluster script has been copied into the /etc/init.d directory. You need to edit the /etc/init.d/mongrel_cluster file.

Change these two bits:

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR
  mongrel_cluster_ctl start -c $CONF_DIR
  RETVAL=$?
  ;;
restart)
  mongrel_cluster_ctl restart -c $CONF_DIR
  RETVAL=$?
  ;;

to:

start)
  # Create pid directory
  mkdir -p $PID_DIR
  chown $USER:$USER $PID_DIR
  mongrel_cluster_ctl start --clean -c $CONF_DIR
  RETVAL=$?
  ;;
restart)
  mongrel_cluster_ctl restart --clean -c $CONF_DIR
  RETVAL=$?
  ;;

The --clean option makes the mongrel_cluster_ctl script first check whether mongrel_rails processes are running and, if not, check for existing pid files and delete them before proceeding. You must be using mongrel_cluster version 1.0.5+ for it to work as advertised (previous versions were buggy). To upgrade, do:

gem install mongrel_cluster
gem cleanup mongrel_cluster

Here’s the related mongrel_cluster changeset.
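The idea behind --clean can be illustrated outside of Ruby and mongrel: read the pid from the file, probe it with signal 0, and delete the file only if the process is gone. A hypothetical sketch in Python (the function name and return convention are illustrative, not mongrel_cluster's actual code; signal-0 probing is POSIX behaviour):

```python
import os

def clean_stale_pid(pid_path):
    """Delete pid_path if it names a process that is no longer running.

    Returns True if a stale pid file was removed, False if there was no
    pid file or the process is still alive (mirroring the idea behind
    mongrel_cluster_ctl's --clean option).
    """
    if not os.path.exists(pid_path):
        return False
    with open(pid_path) as f:
        pid = int(f.read().strip())
    try:
        os.kill(pid, 0)          # signal 0: existence probe only (POSIX)
    except ProcessLookupError:   # no such process -> pid file is stale
        os.remove(pid_path)
        return True
    except PermissionError:      # process exists but belongs to another user
        return False
    return False                 # process alive -> keep the pid file
```

This is exactly the check that makes the startup safe after a crash: a pid file left behind by a dead process no longer blocks the cluster from starting.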
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711376.47/warc/CC-MAIN-20221209011720-20221209041720-00647.warc.gz
CC-MAIN-2022-49
1,481
20
http://languagelog.ldc.upenn.edu/nll/?p=4473
code
She needn't've played with homophones, which surely would have to've been marked by some sort of pragmatics. (I'm picturing stress on "you" followed by a short pause.) The sentence "I want you more than all the riches of the world" is already ambiguous between "I want you more than (I want) all the R.O.T.W.") and "I want you more than all the R.O.T.W. (want you)." The latter sentence will almost always be true. There was a similar thing in Dilbert when his girlfriend asks "Do you love me more than that computer?", and Dilbert responds, "No, I love you more than *that* computer". But he thinks, "Don't ask me about the laptop…" Has there been an LL post on the common 'then'~'than' typo? I can't work out what's causing it. Are the two actually homophones in any English accent? Those in the far East that have the mat-met merger? Does the Northern Cities Vowel Shift get you anywhere close? And does that even help, since the vowel in 'than' is generally schwa anyway? As I understand it, it's because treating and as homophones is metanalysis. They're the same word in origin (I'm using etymonline as my source), and my wildly uneducated guess is that they're both subject to a wide variety of pronunciation variations by dialect and stress. @Pflaumbaum: Native Utah English speaker here. I would only ever pronounce than differently from then in some rare moment when I must emphasize the contrast. In ordinary speech both words have /ɛ/. But as a spelling stickler, I never ever spell either word worng. ClockwerkMao: Your example words disappeared, probably because you wanted them in italics but used arrows or other code in a way that made them invisible (other commenters will know). The absence of the words would have been evident in the window that reproduces the text as it will appear. If you are not sure how to fix this, use " " instead of italics and be sure to check the window under "Submit Comment" before you do submit. @Pflaumbaum. 
I don't think the "then" for "than" thing is an accent thing. I think most speakers whatever the accent will usually use a reduced form of "than", with a schwa, /ðən/, since the word is usually unstressed. And while some of us will always think of it as /ðæn/ when typing or writing, and thus never spell it as "then" apparently, some folks don't do that and think it in its unstressed form, thus leaving themselves open to misspelling it as "then", since, unstressed, "then" and "than" are identical or nearly so. If any speakers have these words both having the same pronunciation in stressed form, I think it's not a matter of accent, but of not having learned the standard stressed pronunciation of "than". According to the Longman Pronunciation Dictionary, "than" has a strong form with a TRAP vowel and a weak form with a schwa, while "then" only has a strong form with a DRESS vowel. So they are never homophones in this description, in agreement with dw's comment above. You can read the relevant discussion on John Wells's phonetic blog, where Wells (the editor of LPD) explains in the comments that he doesn't have a weak form for "then", but perhaps ought to include it in future editions as a possibility (though accompanied by some sort of warning for the benefit of EFL transcribers). The comment thread on John Wells' post has someone claiming that than/then confusion in writing is rare in the UK but common in the US, which is certainly consistent with the notion that the words are homophonous for many US speakers (including the cartoonist whose work is shown above and that cartoonist's assumed audience) even if not in most/all BrEng varieties. I take Wells' point that "than" is pronounced in reduced form the vast majority of the time, but pronouncing unreduced/strong "than" with the TRAP vowel rather than the DRESS vowel sounds like a hypercorrection/pretension/spelling-pronunciation to my AmEng ear.
I *think* this means that I conceptualize the words as homophonous in the abstract even if pronounced slightly differently in practice because of the ubiquitous schwaification of "than," which "then" is significantly less prone to for, um, presumably some mix of syntactic and prosodic factors. In my type of American English, I think the unstressed vowel in than may be [ɛ]-like at times. However, than is unstressed the vast majority of the time and then usually has some degree of stress I think. So the [ɛ]-like phone which occurs in than at times is probably shorter and more centralized than the [ɛ] in then. Stressed then and than are pretty clearly distinct for me (even though I rarely stress the latter). Although my pre-nasal allophone of /æ/ may be somewhat raised, I still make a distinction between pre-nasal /æ/ and /ɛ/. I do, however, think the capital of Austria is /vi'ænə/. Also mayhem and cayenne are /'meɪhæm/ and /kaɪ'æn/, respectively, for me. I don't know why though. Those might just be idiolectal oddities. Surprisingly though, other people haven't seemed to notice them. Some Americans (but not me) seem to have /kɛn/ for can (as in You should try to do it if you can.). I'm from a part of the Midwest that is supposedly accentless, although we know that that's not possible from a linguistic standpoint. @Ellen K: I can easily think of situations where "then" is unstressed. In particular, in "If…then…" constructions. Not for me. I have been racking my brains to try to think of some context in which "then" could be unstressed (to schwa): all I can come up with are meaningless sentence-initial space-fillers such as "Well then" and "Now then". I'm not even sure about them. I'm skeptical about introspective claims that two vowels are phonetically distinct. Vowels don't occupy discrete points in vowel space; if you plot a bunch of tokens, you see some dispersion around the mean value. See this graph for instance—there's a shit-ton of overlap. 
It would be interesting to record some spontaneous speech from, say, dw above. I'd bet there's substantially more overlap than his/her intuitions suggest. The stuff about the TRAP vowel and the DRESS vowel represents a slightly idealized view, I think. My own unreliable intuition is that I don't think I distinguish them consistently. It feels like both have a vowel somewhere in the [ɛ ~ ɨ ~ ɘ] space (note–that's not a schwa). I bet a lot of Northern Cities speakers are similar. I also definitely have an unstressed "then" in utterances like if you don't like it then leave! (I speak a variety that seems to have a lot of reduction to unstressed syllables.) I'm skeptical about introspective claims that two vowels are phonetically distinct. Vowels don't occupy discrete points in vowel space; if you plot a bunch of tokens, you see some dispersion around the mean value. See this graph for instance—there's a shit-ton of overlap. It would be interesting to record some spontaneous speech from, say, dw above. I'd bet there's substantially more overlap than his/her intuitions suggest. Fair enough. Put it this way: I have the same degree of certainty that "then" is distinct from "than" as that "end" is distinct from "and". You're right. There is a lot of overlap there. But do those vowels overlap when they occur in the same phonetic environment? And if they do then is there some other way of distinguishing them, e.g., length (which that graph doesn't show)? I'm only an amateur in linguistics, so please don't be too harsh. I think in practice than does not have TRAP with any real frequency for most speakers. Generally, the only time we would use it in its strong form (with TRAP) is when citing the word, or in mathematics, when labeling , as well as, sometimes, when talking about the mathematical operations represented by those symbols. Otherwise, it's a weak form with a reduced vowel, which very well may have a ɛ-like quality. Let me clarify.
In the above, I'm not assuming that all speakers have TRAP for the strong form of than. Rather, I'm noting that it only has TRAP in its strong form. And also, I'm talking about speech, not how we hear things in our head when reading or writing. @GeorgeW, I'm also having trouble with the punctuation in the original strip. Couldn't she just be saying "You're in first place, and all the riches in the world are in second place"? Which ought to be a good enough compliment for anybody. I'm almost certain there are no American dialects in which /æ/ and /ɛ/ are merged, before nasals or anywhere else. However, there are dialects in which historical /æ/ has been replaced by /ɛ/ in a small set of function words such as "can" and "am", and "than" might as well be in the same class. I'm almost certain there are no American dialects in which /æ/ and /ɛ/ are merged, before nasals or anywhere else. Before /ŋ/ perhaps. The vowel in words like "gang" often seems to approach /e/ in closeness. However, there are few words with the DRESS vowel before /ŋ/: one might expect it in "length" or "strength", but many speakers have plain /n/ in these words. After a quick Google search, I found this paper by Douglas Bigham. It talks about a possible "pen-pan merger" in Southern Illinois English. I haven't read it yet though. FWIW, the vowels of gang [geɪŋ] and strength [stɹeɪŋkθ] do sound the same to me (AmEng speaker). But length, oddly enough, seems to have a different vowel than strength. It sounds more like [lɪŋkθ]. That's just my impression of my own accent though. And I'm not trying to claim that all Americans talk like me. AJD's explanation is the best so far. It puts [then for than] in the same area as [of for 've], which is an explicable error because they're usually complete homophones (pace Ellen K., who if I recall right would not agree with this for her own idiolect). And it doesn't require a general merger of /æ/ and /ɛ/ (or /ə/ and /ɛ/) before /n/, or anything like that.
But does it explain [than for then]? There are 7.6 million Google hits for "It was than that". Are we saying that the two are homophonous but people know there are two spellings, so they're just guessing? In which case, why are there hardly any hits for things like "friend've mine"? You must have misread something I wrote, Pflaumbaum. I certainly did not claim they aren't homophones, and did not make any claims at all about my speech in particular. Certainly my own speech is a factor in any generalization I make, of course, but, still, my comments weren't about my own speech in particular. And, furthermore, the generalization I made is that then and than, when unstressed, are identical, or nearly so. Pflaumbaum: AJD's explanation is the best so far. It puts [then for than] in the same area as [of for 've], which is an explicable error because they're usually complete homophones Another similarity is the words "of" and "from". These generally have the STRUT vowel when stressed in North American accents (while in other accents they generally have the LOT vowel). The usual explanation for this is restressing of the unstressed forms with schwa. For Mount Kosciuszko, Wikipedia says the traditional pronunciation has the LOT vowel in the first syllable of Kosciuszko, but the Macquarie Dictionary says the first vowel is STRUT if I recall correctly. My theory for the latter pronunciation is that the LOT vowel tends towards the schwa since it doesn't receive primary stress (as is common in Australian pronunciation, e.g. the word 'Australia' itself). But it cannot be completely reduced since the 'Teutonic rule' of English stress would be violated if the first two syllables were both unstressed (the primary stress of 'Kosciuszko' falls on the letter 'u', and 'i' is a separate syllable in the traditional pronunciation). So LOT turns into schwa but is re-stressed as STRUT.
https://www.marincc.org/can-you-send-gnt-to-ledger-stax/
Ledger Stax Review – Everything You Should Know
If you are interested in buying a Ledger Stax to meet your requirements, there are some points to consider before you purchase. We'll explain everything you should know, including how to receive discounts and the features this device offers.
What exactly is Ledger Stax?
Ledger Stax is a hardware wallet designed to help users manage their digital assets. It's a credit-card-shaped device, which is a departure from previous versions of Ledger hardware wallets. The latest version comes with a curved E Ink screen, a magnet system, Bluetooth connectivity, and a customizable lock screen. The Ledger Stax supports more than 5,000 cryptocurrencies, including Litecoin, Dogecoin, Ripple, and Cardano. The device also includes NFT (non-fungible token) support; these are specialized tokens that can be kept and traded using the Ledger Stax. Security on the Ledger Stax is backed by a Secure Element chip, the same chip used in passports and credit cards, which helps protect your crypto assets from thieves. The Ledger Stax is equipped with Bluetooth to connect with the Ledger Live mobile app, and users can also take advantage of its wireless Qi charging function. In addition, the Ledger Stax has a magnet system that allows it to be stacked together with other similar devices. It also has an offline mode, meaning the display stays readable even when the device is shut off. The Ledger Stax is a new hardware wallet created to assist you in managing all your online assets. It offers an unmatched amount of customization, a curved touchscreen, and a Secure Element. It works with Mac, Windows, and Ubuntu and supports more than 500 cryptocurrencies. The Ledger Stax comes with Bluetooth 5.2, a USB-C port, and a 200 mAh battery. The device also has embedded magnets that allow you to stack multiple Stax units together.
A crypto wallet, the Ledger Stax allows you to safely keep and manage your cryptographic keys. It can also serve as a cold storage device, making it less vulnerable to hacking attempts. The device runs a proprietary OS, allowing you to tailor it to your preferences: you can choose a custom image or name for the lock screen. In addition, the Ledger Stax is equipped with an efficient Bluetooth antenna. The Ledger Stax is built to meet industry-leading security standards. It has a Secure Element chip, certified to CC EAL5+ standards. It is also capable of clear signing, which allows users to verify transaction details and minimizes the risk of phishing attacks.
How does it work?
Ledger Stax is the latest hardware crypto wallet from French start-up Ledger. It has a stylish design, complete lock-down of your keys, and the capacity to store more than 5,500 tokens. The device has a touch-capable E Ink display that can accommodate up to 500 cryptocurrencies, and a battery that lasts for weeks or even months. You can connect it to a smartphone or laptop via Bluetooth. To make the experience more enjoyable for users, Ledger has designed a lock screen that can be customized. Furthermore, it has magnets that allow you to connect multiple devices together, which makes it easy to carry around. To keep clients secure, Ledger uses a secure architecture built around the Secure Element chip, which offers institutional-grade protection of digital data. A standout feature is clear signing: it allows users to confirm transaction details without risking their privacy, helping you avoid scams and phishing attacks. Ledger also has a special feature that allows you to view NFTs directly on the device; for example, you can display a QR code, an image, or even a label.
The Ledger Stax is the latest hardware wallet made by the company that developed the popular Ledger Nano S and Nano X. These wallets are ideal for long-term investment and simple to carry around. They're extremely secure and will protect your private key, and they have distinct security features, such as an offline mode. Even when you are not online, you can still access your cryptocurrency. One of the most exciting features of the Ledger Stax is its touchscreen: the curved E Ink display offers more space to read transactions and browse NFT collections. It is also fully compatible with iOS, Windows, and macOS. The device runs Bluetooth 5.2, which lets it connect to Ledger Live through a phone, and you can set up an access code for faster access. There's a special "Magnet Shell" protective case available. Ledger products usually ship for free, though you might need to pay taxes on the product depending on where you live. The Ledger Stax is a hardware wallet designed to combine the experience of a physical device with the convenience of an electronic display. It's designed to make it easy to manage and store your cryptocurrency. The Ledger Stax is a powerful wallet that can handle 500+ coins, tokens, and NFTs. You can personalize the display with an image or your preferred NFT, and the curved touchscreen helps you keep track of your cryptocurrency portfolio. The Ledger Stax wallet also links directly to the Ledger Live platform, which gives you a convenient way to verify transaction details and to buy or sell your coins. Having a hardware wallet also gives you on-chain security, since your private keys won't be accessible to unauthorised users.
In addition to storing and managing your cryptocurrencies, the Ledger Stax can be used to manage NFT collections on multiple blockchains, including Ethereum- and Polygon-based NFTs. The Ledger Stax comes with Bluetooth, which allows you to connect it to your smartphone or desktop. It can also be used as a screen cover. The device comes with an integrated battery; when the device is not active, the battery will last up to three months. The Ledger Stax is a breakthrough consumer gadget. It comes with a 4-inch touchscreen, magnets, and a secure architecture. Compared to its predecessors, the Stax offers more protection and a better user experience; in fact, it's better than many other hardware wallets. The Ledger Stax can hold more than five hundred cryptocurrencies, including Ethereum- and Polygon-based NFTs. It also comes with a 4000 mAh battery, a touch-screen interface, and a magnetic dock kit for maximum stacking capacity. However, if you're looking to purchase one of the new wallets, you'll have to wait a few more months. There's no need to panic, though: while the Ledger Stax is not yet available on the market, users can purchase the Ledger Nano S or Nano X in the meantime, both available on the manufacturer's official website, and manage them with the Ledger Wallet application. While using the Ledger Wallet app is the easiest option, there's a more advanced way to set up your new crypto-holding machine: download the firm's application, install it on your personal computer, and permit it to access your crypto assets. Ledger Stax is a premium hardware wallet designed in collaboration with former Apple iPod creator Tony Fadell and design agency LAYER. The wallet is designed to be simple to use, and it features an innovative E Ink display that provides a clear visual interface.
It also has an ergonomically curved display and embedded magnets, which make it sturdier and more convenient. The Ledger Stax supports a wide selection of cryptocurrencies, such as Litecoin, Dash, Dogecoin, Monero, Ethereum, Binance Coin, BTC, and XRP. It is designed to connect to mobile devices via Bluetooth or a USB-C connection. Apart from storing and transmitting a variety of digital assets, the Stax is also built to safeguard non-fungible tokens (NFTs). Because it keeps private keys in a secure environment, it is less vulnerable to hacking attempts than other crypto wallets. Users can customize their lock screen: they can pick a preferred image, set a unique title for the gadget, and track its status as it locks and unlocks. Furthermore, the Stax supports Qi wireless charging, allowing users to charge it from the Qi charger of their choice. Battery life is surprisingly good; the Ledger Stax's battery can last for months on a single charge, depending on how much you use the device. Ledger Stax is the latest generation of hardware wallet from Ledger, the company that also created the Ledger Live companion app. It blends the security of a Ledger device with the convenience of a touchscreen. The new model comes with various features that make it easy to send and receive cryptocurrency. The Ledger Stax's screen is a credit-card-sized E Ink display. It offers a clear, comfortable, and natural reading experience while avoiding reflections and offering greater contrast. In addition, the screen uses a curved touch interface that allows users to review and sign transactions at the tap of a button. The Ledger Stax also features a built-in rechargeable lithium-ion battery, which allows the device to remain fully functional for months after a single charge. Additionally, it can be charged wirelessly or with a USB-C cable.
In terms of security, the Ledger Stax features a Secure Element chip for added protection, and the device is certified to CC EAL5+ standards. Additionally, the device supports Bluetooth as well as a secure USB-C port that lets you connect to the Ledger Live mobile application.
Ledger Stax vs Nano X vs Nano S Plus
Ledger Stax vs Nano X vs Nano S Plus: which one is better? These devices are among the best hardware cryptocurrency wallets. They are simple to use and come with numerous features, and the Nano X, Nano S Plus, and Stax are suitable for advanced and novice users alike. If you're looking for a safe, reliable, and easy-to-use device, the Ledger Stax is the way to go. It was created with iPod designer Tony Fadell and has all the same functionality as the Nano X, but at an affordable cost. This version has a touchscreen as well as a rechargeable battery that makes it last longer. Ledger is an early pioneer in the field of cryptocurrency. They've developed a variety of hardware wallets, each with specific options to safeguard private keys. With the Nano X, you can transfer cryptocurrency using your phone, which is a great solution for users on the move. It also has Bluetooth support, and you can connect your phone to the Nano X via USB-C; however, you'll need to turn on the device by pressing the left button for a few seconds. If you're in the market for a crypto wallet, you'll be glad to learn that Ledger has recently unveiled the Ledger Stax. The new hardware wallet blends a curved E Ink screen and magnets to create a reliable, portable device. The Ledger Stax is a credit card-sized crypto wallet that allows you to safely keep track of and organize your digital assets. It has a minimalist design and comes with a variety of other features, including an E Ink display and NFC capabilities.
The Ledger Stax is built on an aluminum chassis with rounded edges and a soft concave surface. It also has integrated magnets, which increase the device's stackability, along with a Bluetooth antenna and a capable battery. It is compatible with iOS 12 and up and Windows 10 and up, and it connects to your smartphone via Bluetooth or a secure USB-C cable. You can set up your Ledger Stax with one button. It has a touch-capable E Ink display that creates a clear visual interface and lets you change the lock screen and view your pictures. The Ledger Stax is a brand-new hardware crypto wallet with industry-leading security. It provides users with a friendly experience that allows them to complete transactions quickly and safely. The device is built using an industry-leading Secure Element chip, the same chip used in passports, credit cards, payment systems, and many other critical applications. Another major feature of the Ledger Stax is the E Ink touchscreen; the curved touchscreen is a first in the industry. It features Bluetooth connectivity and links to the Ledger Live mobile app. When it's not being used, the device conceals itself with magnets. A key feature of the Ledger Stax is the ability to customize the lock screen: users can choose their preferred NFT or photo. Users can also use the Infinity Pass to claim a free NFT and utilities. The Ledger Stax is designed to appeal to those who are passionate about their digital assets, who want to invest in them and shield them from theft. Although this wallet carries a higher price than some of its competitors, it's worthwhile for serious crypto investors. The Ledger Stax is the newest hardware wallet from the French firm Ledger. It's a touchscreen device with E Ink technology, and it has other interesting features too.
The Ledger Stax features a curved 3.7-inch touchscreen, which makes it more convenient to use and lets users personalize the screen. A magnet lets users attach the device to phones. The Ledger Stax is also capable of generating QR codes, which you can use for cashless transactions, and it can be used with mobile devices cord-free. The Ledger Stax works with iOS 13+ as well as Android 9 and up, and its chip is certified to the CC EAL5+ security standard. However, its price is high: at US$279, it's more than twice the cost of the Nano X. One of the biggest selling features is the screen, a curved E Ink display. Another major feature is the capacity to store NFTs, or non-fungible tokens. It's compatible with more than 5,000 cryptocurrencies, including Ethereum and Polygon, and it also works with DeFi protocols. Ledger Stax is a brand-new hardware wallet that helps users keep track of their digital assets. Designed to look similar to a credit card, the Stax gives users the ease of carrying a crypto wallet while maintaining security. With a magnetic case, a touchscreen display, and a 200 mAh battery, the Stax is small enough to fit comfortably in your palm. The device has a 3.7-inch curved E Ink touchscreen that makes it easy to view transactions and read messages. The device also has Bluetooth 5.2, which helps you connect to your mobile. The Stax works with iOS 10+, Android 9+, and Ubuntu. Users can customize the lock screen, monitor the lock and unlock state of their device, and check the battery percentage. An integrated Ledger Live app allows users to manage a portfolio of more than 500 tokens and coins. The Ledger Stax features a magnetic case that allows you to stack more than one device on top of it. Like the Nano S Plus, the Stax is designed to be used at home.
If you’ve used the Ledger Nano for some time It could be the right an appropriate time to look into the next generation crypto device that is the Ledger Stax. The new model joins a storied collection of hardware wallets that have become well-known in the Crypto space. The Ledger Nano Stax, like its predecessor it is a device designed to be used in mobile lifestyles. It’s a compact device that supports a variety of cryptocurrency. The device is able to handle up to 5,500 different tokens, including Polygon NFTs and Ethereum NFTs. One of the biggest advantages of this Ledger Stax is its touchscreen. The screen of the E Ink2 has curvature that makes it easier to read. Another significant aspect in the Ledger Stax is its rechargeable battery. A single charge can keep the device operating for weeks or months. With the Ledger Stax, you’ll be able to store your digital assets in safe manner. The Ledger Stax is compatible with iOS 13+ and macOS 12and up. You can pair it with your smartphone via Bluetooth or via the USB-C cable. Ledger Stax is the latest technology in the cryptocurrency physical wallet marketplace. The sleek, curvaceous e-ink touchscreen wallet allows users to secure your digital possessions. It can handle more than 5,000 tokens and coins and is optimized for users’ experience. The wallet can be charged via it’s Qi wireless charging protocol. It also uses a magnetic locking system. There’s a USB Type C port. A 3.7-inch electronic-ink display lets you see and sign transactions. The wallet comes with a built-in Ledger Live app. It also allows users to use third-party apps to expand their portfolio. Compared against Nano S Plus Nano S Plus, the Ledger Stax has a larger screen and features. The Stax features a rounded edge chassis as well as a powerful Bluetooth antenna. It also has an Secure Element chip, the same one used on the Nano X. These elements ensure that the device is secure from hackers. 
With its big screen and magnetic system, the Stax offers unrivaled interactivity. In contrast to other wallets, it lets you customize the touch screen using NFT images, and it also allows you to create QR codes.
Does it really make sense to buy it?
The Ledger Stax is an upcoming hardware wallet that promises a lot. It's designed to enhance security, accessibility, and interactivity while maintaining a high-end user experience. If you're considering purchasing one, here are some important things to know. First of all, it has a curved E Ink touchscreen, the only one of its kind, which can be used to sign transactions and view NFT collections. It also has Bluetooth, and there's an embedded magnet system that allows you to stack several Stax devices. Another interesting feature is that it can be charged using Qi wireless charging, the same technology used for charging by Samsung as well as Apple. It also comes with an internal battery that can last for weeks or months, which is a really big deal for a crypto hardware wallet. Overall, it's an excellent device, though there are a few downsides: for starters, it's quite expensive. However, if you're willing to invest in a hardware wallet, it's worth it.
https://forum.renoise.com/t/render-selection-to-sample-options/26437/2
When you render a selection to sample, the process is immediate; there are no options to specify. I can see how many like the way this is handled. Rendering is done through the master channel. If you have processors set up on the master, you do not always want these processors to affect the sample, so you need to switch them off before rendering to sample. What I would like to suggest is to add an option to the preferences where you can choose whether you want to specify options when rendering to sample or not. This way the current behaviour is preserved, and those who choose it get a confirmation screen with options. The option list should include (but does not have to be limited to):
- Bypass FX (bypasses FX on track)
- Bypass Master FX
What do you think?
https://www.phpbb.com/community/viewtopic.php?f=1&t=321371&p=1749439
Template(s) used: ca_aphrodite
Any and all MODs: Many
Do you use a port of phpBB: Nope.
Version of phpBB: 2.0.16 (was going to update - but waiting to fix this error)
Which database server and version: MySQL
Did someone install this for you/who: I did
Is this an upgrade/from what to what: nope
Is this a conversion/from what to what: nope
Have you searched for your problem: yes. If so, what terms did you try: support forums (other topics), KB, and tried to do what I think could have fixed it.
State the nature of your problem: OK, this is messed up. When I go to delete, edit, or do anything else to a message / topic / post, I get this error :: Post_not_exist, Post :
Do you have a test account for us: Not at the moment... if you need one, please contact me via PM.
https://community.spicecrm.io/t/customize-krest-api/177
I want to customize the KREST API. Can you tell me how to customize the KREST API? Is there any feature in the administration settings to customize the KREST API? There is no administration feature for that. The upcoming KREST version will allow you to include more custom KREST capabilities, but customizing will remain a developer job. Documentation for this is not ready yet. Are you sure you need to customize?
https://ux.stackexchange.com/questions/9287/is-it-good-practice-to-have-a-change-contrast-or-higher-contrast-buttons-for-use
Considering both these issues can be solved by the browser or host operating system, it seems like overkill and potentially a lot of extra work, depending on how complex your site design is. Assuming you're not using tiny fonts and really low contrast colours, I would say it's safe to assume that if a user with vision problems is having trouble reading your site, they're probably also suffering from the same issues on most other sites as well. To that end, they would probably already be using accessibility tools that improve their situation, such as magnifiers, large fonts and high contrast overrides. What I would recommend instead is that you make sure your HTML markup is as clean as possible with proper headings, paragraphs, annotations, etc. This will not only make it easier for people to override your styles, but also make things far more accessible to screen readers for the blind.
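As a generic sketch (not markup from any particular site), this is the kind of semantic structure that gives browser overrides and screen readers something to hook into:

```html
<!-- Landmarks and real headings, not styled <div>s: user stylesheets,
     high-contrast modes and screen readers all rely on these. -->
<main>
  <article>
    <h1>Page title</h1>
    <p>Introductory paragraph of body text.</p>
    <h2>First section</h2>
    <p>More body text, with an
       <abbr title="accessible rich internet applications">ARIA</abbr>
       annotation only where native HTML has no equivalent.</p>
  </article>
</main>
```

A user's "high contrast" override or screen-reader heading navigation works out of the box against markup like this, with no per-site contrast buttons needed.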
https://ko.ifixit.com/Answers/History/184882
Original poster: jema
Why does my phone now not charge at all after dock replacement?
My phone originally would only charge when turned off. So I replaced the dock, and the headphone jack (for a separate issue). Now it will not charge at all. Any ideas? Did I forget to connect something, or not connect it back properly? I am bumfuzzled.
https://ideas.woocommerce.com/forums/133476-woocommerce/suggestions/38695546-switch-to-using-the-shipstation-api
Switch to using the ShipStation API
In the documentation there are two ways to connect to ShipStation: https://www.shipstation.com/developer-api/ We currently use the Custom Store Integration, which limits the amount of information we can send to ShipStation. There is also another option to use the ShipStation API, which offers a lot more fields/features. Should we be switching to the API to allow better communication between the two? A concrete example is sending over line item dimensions, or possibly the dimensions of the package for the shipping rate. This is not possible with the Custom Store Integration, but is supported in the API. I too would like to see a more robust integration between ShipStation and WooCommerce. Another example use-case where the ShipStation API could allow for more robust communication between the two platforms: when an order is Cancelled in ShipStation, that order status is sent back to WooCommerce to update the order's status in WooCommerce to Cancelled.
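To make the dimensions example concrete, here is a hedged sketch of what the richer order payload could look like. The field names follow ShipStation's documented order model for `POST /orders/createorder`, but the helper function, SKUs, and order data below are hypothetical illustrations, not anything WooCommerce currently sends:

```python
import json

def build_order_payload(order_number, items, package_dims=None):
    """Assemble a ShipStation-style order payload, including the
    package dimensions that the Custom Store integration cannot send.
    (Sketch only; dates and addresses are placeholder values.)"""
    payload = {
        "orderNumber": order_number,
        "orderDate": "2020-11-01T10:00:00",
        "orderStatus": "awaiting_shipment",
        "billTo": {"name": "Example Customer"},
        "shipTo": {"name": "Example Customer"},
        "items": items,
    }
    if package_dims:
        # Package-level dimensions ShipStation can use for rating.
        length, width, height = package_dims
        payload["dimensions"] = {
            "units": "inches",
            "length": length,
            "width": width,
            "height": height,
        }
    return payload

items = [{
    "sku": "WIDGET-1",   # hypothetical line item
    "name": "Widget",
    "quantity": 2,
    "unitPrice": 9.99,
}]

payload = build_order_payload("WC-1001", items, package_dims=(10, 6, 4))
print(json.dumps(payload["dimensions"]))
```

This payload would then be sent to the API endpoint with basic-auth credentials; the point is simply that `dimensions` and per-item fields exist in the API order model but have no counterpart in the Custom Store XML feed.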
https://www.mssqltips.com/sqlservertip/3190/sql-server-performance-monitoring-tools-free/
Tackle SQL Server Performance Issues for Free with DPA Free

We have SQL Server performance problems that have been plaguing particular on-premises applications with unexplainable slowdowns along with unhappy users and customers. Our environment consists of both physical and virtual machines, which often presents some challenges to determine the exact SQL Server performance issue. Unfortunately, on a daily basis I am very busy trying to manage our SQL Server environment, and without a tool it is difficult to capture enough data to determine the problem. Are there any options I can research to tame our SQL Server performance problems for free?

SQL Server performance problems can be evasive and truly plague an application so much that employees dread using it. It is an unfortunate set of events that all too often results in employees losing faith in the application and impacts morale. There are a number of alternatives to consider for addressing your Microsoft SQL Server performance issue. Let's dig into these to see which one could make sense for you to best identify the root cause of your performance bottlenecks:
- Hardware - Generally considered the "quick and dirty" option to patch a short-term problem and avoid downtime. Often this does not resolve the issue, only delays an actual resolution. Other times the hardware is insufficient for the workload and does need to be sized appropriately. It is imperative to understand the issue and avoid throwing hardware at a software problem.
  - Upgrade hardware - Add CPUs, memory, SSDs, a new SAN, or add resources to a virtual machine
  - New hardware - A new server, or upgrade a component such as your disk drives with a solid-state solution or SAN
- Training - An opportunity to learn more about performance needs and the Microsoft SQL Server engine, but your specific issue may not be covered in the training.
Further, there are a lot of situations where environment/application-specific factors can lead to a better answer, which is difficult to cover in a training class.
  - Free options - Webcasts, tutorials, articles, etc.
  - Paid training - Traditional classroom, online, conferences, etc.
- Consultant - If you do not have the time or expertise, or are unable to get training, an external resource can be contracted to help remediate a one-time issue or perform database performance auditing. Fixing all of the issues is not always possible based on time frames, budget, politics, external dependencies, etc.
  - Consultant - Identify, correct, code, test and deploy high-performing code
- SQL Server Monitoring Tools - There are a number of monitoring tools on the market to choose from to help identify SQL Server performance issues and correct query performance, disk latency, etc. Some are included natively with SQL Server and others are built by another company. Some of this boils down to a build-versus-buy decision with limited resources. Which option is the best to support your business?
  - Scripts for point-in-time collection and analysis - Time-sensitive to collect data, which generally builds an incomplete picture
  - Home-grown monitoring solution - Consider your total cost of ownership to build and maintain monitoring software across your enterprise. How much time will it take to build and maintain a solution across all of your network devices, storage, hardware, operating system, SQL Server and code? Can that time be better used to focus on core business needs?
  - Purchase a paid product - Need to validate prior to purchase and be trained on the product.
  - Free products - The concern generally is "you get what you pay for", but is that always the case for performance monitoring tools?
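For reference, the "scripts for point-in-time collection" option can be as small as snapshotting the wait-stats DMV yourself. A minimal sketch (the DMV and its columns are standard SQL Server; the excluded wait types are just an illustrative, non-exhaustive list of benign background waits):

```sql
-- Snapshot the top accumulated waits since the last service restart.
-- Run twice, some minutes apart, and diff the results to see which
-- waits grew during the interval.
SELECT TOP (10)
    wait_type,
    waiting_tasks_count,
    wait_time_ms / 1000.0        AS wait_time_s,
    signal_wait_time_ms / 1000.0 AS signal_wait_s
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'XE_TIMER_EVENT', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;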
With all of your responsibilities, the severity of your SQL Server performance monitoring issue as you describe it and a mixed environment of physical and virtual machines, you may want to take a look at Database Performance Analyzer Free (DPA Free). This is a free version of SolarWinds Database Performance Analyzer for SQL Server. DPA Free provides insight into real time metrics with the ability to drill down into the SQL queries on your SQL Servers for proper optimization. Let's take a look at DPA Free to see if it can help you. How can DPA Free help to monitor SQL Server instances? In a nutshell, DPA Free is a free tool focused on wait based analytics to identify performance problems with SQL queries on your SQL Server instance. This is achieved with very low overhead and no software installed on the monitored instance. These metrics provide a clear-cut set of data to focus on for troubleshooting performance issues. With DPA Free you have a visual representation of the performance issues over the last hour with the ability to drill into the details to understand your code, the performance bottlenecks and work towards resolving the response time issue. DPA Free also includes real time monitoring of your SQL Server database instance. DPA Free includes the following database performance monitoring features: - Top SQL Statements - Wait based analytics - SQL Server session data - Application and User data - Locking and Blocking that cause Deadlocks - Real time monitoring - Server Metrics - CPU, Memory, Disk Space, Network Bandwidth - Virtual Machine Metrics - Visualization of SQL Code to Hardware Utilization - Customizable Reporting How does DPA Free work? One great aspect of DPA Free is that it is the same architecture and interface of SolarWinds Database Performance Analyzer for SQL Server, not a separate tool. So, when you install DPA Free, you install the full database monitoring product for free and have a baseline set of functionalities. 
Although installing DPA Free is quick, some planning is recommended:
- Find a utility server where you can install and configure a web server. DPA Free is completely web based and needs a web server to interact with the product.
- You need a SQL Server (which can be SQL Server Express Edition) where you can install a repository database to store the performance data collected by DPA Free.
- You need a machine where you can work with the product. This is most likely your desktop or laptop, although since DPA Free is browser based, you should be able to use just about any computer, tablet, smartphone, etc. to begin your performance analysis by seeing the wait times which impact application performance.

How can I get value from DPA Free?
Let me demonstrate a few of the core sets of functionality with DPA Free:
- Performance Dashboard
- Performance Details
- Resource Graphs
- Virtualization Metrics
Please note: this is just a subset of the functionality available with DPA Free. Click here for a full feature set including support for Oracle, SQL Server, Azure SQL Database, Amazon AWS/RDS and MySQL.

DPA Free SQL Server Performance Dashboard
Many of the interfaces in DPA Free could be considered summary dashboards with prioritized metrics. From these intuitive dashboards you can access lower level data to actually work towards solving your SQL Server performance issue. One favorite is the Top SQL Statements interface where data can be viewed for a particular period of time based on a specific interval, to visually determine particular queries to optimize and see some of the corresponding high-level metrics. Hovering a mouse over the bar charts provides additional performance metrics and a preview of the SQL code.
Figure 1 - DPA Free Performance Dashboard

DPA Free Performance Details

In the bar graph above, each individual slice is a different query. By clicking on any of the individual slices in Figure 1, we can see the corresponding performance metrics as well as the associated databases, waits, users, query plans and more. This provides the critical metrics needed to understand the issue and to begin to tune and test alternatives to resolve it.

Figure 2 - DPA Free SQL Server Wait Stats

Current SQL Server Activity in DPA Free

DPA Free includes real-time analysis of the SQL Server workload. Rather than fumbling around to find and run scripts, just run DPA Free to get a snapshot of the activity in the environment.

Figure 3 - DPA Free Current Activity

SQL Server Blocking in DPA Free

SQL Server blocking can be a menace. DPA Free includes insights into blocking in your databases.

Figure 4 - DPA Free Blocking Metrics

DPA Free Resource Graphs

DPA Free has six sets of resource graphs (CPU usage, memory, disk, network, sessions and waits) that provide a visual representation of the performance metrics for your IT infrastructure over a period of time. These graphs help you understand trends and outliers, and are color coded with yellow and red to flag warnings and problems. In this situation, the graphs paint the picture: understand the key metrics at a high level, then research the issues to begin tuning.

Figure 5 - DPA Free CPU Resource Graph

DPA Free SQL Code to Virtualization Metrics

With the large number of SQL Server instances running on VMware, SQL Server DBAs often have little to no insight into the underlying resources. DPA Free resolves that issue and even correlates SQL code with CPU, memory, disk and network utilization to determine which code was running during high-utilization periods.

Figure 6 - DPA Free Virtualization

Download Product Trial - SolarWinds DPA

Where can I learn more about DPA Free?
- Download DPA Free
- DPA Free Home Page
- For any additional product questions, contact SolarWinds Sales at +1-866-530-8100 or email@example.com

MSSQLTips.com Product Spotlight sponsored by SolarWinds, makers of DPA Free.

Last Updated: 2020-10-05

About the author
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153223.30/warc/CC-MAIN-20210727072531-20210727102531-00540.warc.gz
CC-MAIN-2021-31
9,600
90
http://forums.adobe.com/message/4309770?tstart=0
code
The string fields can only contain up to 255 characters; if a user fills in a form or submits a web app item and puts more than 255 characters in such a field, the characters after the 255th are lost and will not be saved in the database.

You say "the string fields," but are you just referring to web apps? What other fields in other modules are limited?

There are character limits on all fields except the editor field (description fields). Multiline textboxes are limited to 1024 characters and single-line input fields to 255.
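Since the truncation happens silently, a client-side check before submitting is the easy workaround. A minimal sketch using the limits stated in the thread (the field names and the `form` shape are made up for illustration):

```python
# Field limits quoted in the thread: 255 for single-line inputs,
# 1024 for multiline textboxes.
LIMITS = {"single_line": 255, "multiline": 1024}

def truncated_fields(form):
    """Return (field, length, limit) for every value the database
    would silently cut off at its limit."""
    return [(name, len(value), LIMITS[kind])
            for name, (kind, value) in form.items()
            if len(value) > LIMITS[kind]]

form = {
    "title":   ("single_line", "x" * 300),  # over the 255-char limit
    "summary": ("multiline",   "y" * 500),  # within the 1024-char limit
}
print(truncated_fields(form))  # [('title', 300, 255)]
```

Flagging the offending fields up front lets the user shorten the text instead of losing everything past the limit.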
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999654003/warc/CC-MAIN-20140305060734-00096-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
526
3
https://xmenrevolution.com/wiki/Plots
code
Plots make the world go 'round! But sometimes it can be hard to keep track of them or to know how to plug yourself in. To that end, here is a repository for the various threads of plotting that are being woven through our game. If you want to add plot information here, please include what the plot is, how players might hook into it, and who to contact about getting involved.

Every so often, someone in the government will propose a new and horrifying plan for dealing with mutants. Lock them all in camps. Execute them. Put them in labs to find out what makes them tick. As yet, the most extreme of these have all been shot down -- publicly, anyway. Though it is kept extremely hush-hush and government involvement kept even more so, the past years have seen a quiet rise in mutant disappearances. Some resurface eventually, some do not, but stories have trickled out of mutants taken off to laboratories, locked away for months or years of grueling experimentation. Perhaps someone is trying to weaponize them, perhaps control them, perhaps just find a way to nullify their powers; whatever the case, it is certainly true that these labs are out there, scattered in hidden locations and leaving quite a few mutants with a lifetime of nightmares, if they are lucky enough to get out.

Prometheus involves doctors and scientists of all stripes, and there is always room to work it into people's backstories (or future stories!), either as experimenter or experimentee. Contact Rasheed or Shane for all your mad scientist (or labrat) needs! Prometheus has many associated adoptable NPCs as well, although people are welcome and encouraged to make their own chars with Prometheus ties! NPCs associated with Prometheus can be found here.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817491.77/warc/CC-MAIN-20240420060257-20240420090257-00512.warc.gz
CC-MAIN-2024-18
1,733
5
https://jira.lsstcorp.org/browse/DM-10444
code
On Brian Selvy's request I installed the TestRail JIRA integration, which we are evaluating for various verification purposes. This introduced a TestRail panel to all tickets on our JIRA system. After deploying a TestRail test instance (lsst.testrails.net) I was able to configure the JIRA integration to restrict itself to only tickets in the LOPS (Brian's test) project. PS. I only specced 5 users for the TestRail server as this is only an evaluation at this point.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488539764.83/warc/CC-MAIN-20210623165014-20210623195014-00639.warc.gz
CC-MAIN-2021-25
466
3
http://forums.pcsx2.net/Thread-PES-2011-PROBLEM
code
Hey, I'm back. I play PES 2011 on PCSX2 0.9.6. Whenever I set the cycle rate to x3 it plays, but the loading suddenly takes an eternity. Any help solving this problem?

I'm reviewing your old posts. (12-03-2011, 01:21 PM) Uzumaki Naruto Wrote: Intel Pentium 4 CPU 3.00 GHz, 32-bit operating system.

Quote: Score: 0 out of 4
Minimum req.: FAILED - C2D or AMD X2 at ~3.0 GHz and a GDDR2-equivalent video card (except 64-bit-bandwidth video cards)
Recommended req.: FAILED - i5 or i7 at 3.2 GHz (except P4, PDC, C2D, C2Q, i3), or AMD X2 at ~4.0 GHz and a GDDR5-equivalent video card
Maximum req.: FAILED - i7 Extreme series overclocked to 5 GHz and a GDDR5 Super-OC edition video card, or an Nvidia GTX Ti Super-OC edition with more than 256-bit bandwidth
Priority req.: FAILED - *If your desktop/laptop has a C2D or Core-i CPU but its video card is an Intel GMA or Gxx express chipset, Sandy Bridge integrated graphics, or another built-in VGA, that takes priority as a failed system requirement. **If your desktop/laptop's video card is GDDR2, GDDR3, GDDR5 or better, but its CPU is a dual/quad-core under 2.66 GHz, a Pentium 3 or 4, an AMD without X2, Athlon 64, Turion, Celeron, 486, 586, Intel Atom, Centrino, Core Solo, any netbook, or another single-core CPU, that takes priority as a failed system requirement.

And unfortunately you're too far off-track to meet even one of the requirements... that's the reason loading takes forever. Get a new PC now, or you're stuck with the actual PS2.

Main hub: i5-4670 (3.4 GHz factory clocked), ATI Radeon HD 7770 (GDDR5, 128-bit, 1 GB), Win 10 SL (x64), ASUS H8M-E, 8 GB DDR3 RAM
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190236.99/warc/CC-MAIN-20170322212950-00056-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
1,618
23
http://www.reddit.com/r/sysadmin/comments/1dfiaq/user_account_locked_out_on_ad_some_time_after_she/
code
I have one user who can log in to Windows just fine but after a while gets locked out. If, some time after she logs in (I haven't found a time frame yet), she goes to the internet (which automatically authenticates with AD through our IronPort web filter), it will ask her to log in, which it only does because the account is locked out. Email seems to work while the account is locked out, so I assume Outlook just hasn't re-checked authentication yet. After I go into AD and unlock her account, she is fine for the rest of the day. I have to unlock her account basically every day. I'm trying to look through logs but am failing to look in the right place, I guess, since I can't find any that pertain to her.
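One place to start: on the domain controllers, the Security log records event ID 4740 ("A user account was locked out"), and that event names the caller computer the bad credentials came from. A sketch of the filtering with mock records (not a real event-log API; the field names and machine names are invented):

```python
# Hypothetical Security-log records; event ID 4740 is the account
# lockout event and includes the caller computer name.
events = [
    {"id": 4624, "user": "jsmith", "caller": "WS-01"},
    {"id": 4740, "user": "jdoe",   "caller": "WS-07"},
    {"id": 4740, "user": "jdoe",   "caller": "EXCH-01"},
]

def lockout_sources(events, user):
    """Machines that triggered lockouts for one account."""
    return sorted({e["caller"] for e in events
                   if e["id"] == 4740 and e["user"] == user})

print(lockout_sources(events, "jdoe"))  # ['EXCH-01', 'WS-07']
```

If the caller turns out to be a proxy or mail server rather than her workstation, that usually points to a stale saved credential on some device still retrying her old password.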
s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1394011239452/warc/CC-MAIN-20140305092039-00053-ip-10-183-142-35.ec2.internal.warc.gz
CC-MAIN-2014-10
705
4
https://cdcvs.fnal.gov/redmine/projects/fast/wiki/Problems_and_Solutions
code
msync() and mincore() can both be used to validate the memory that libunwind is going to access, to ensure that the address is valid in the context of the process being profiled and that the address represents mapped memory.

msync() is a system call that synchronizes memory with physical storage. libunwind calls msync() with the MS_ASYNC flag, which returns almost immediately. Until kernel version 2.5.67, I/O would be performed to synchronize memory with physical storage, but in newer kernel versions I/O isn't started. Until kernel version 2.6.17 this function would also mark pages dirty, but in newer kernel versions dirty pages are properly tracked, so this too no longer happens. On these more recent kernels, msync() with MS_ASYNC performs only two tasks: it locks mapped pages temporarily in memory, and it returns an error on unmapped pages or bad address ranges. It is this second action that libunwind depends on when validating memory.

mincore() is a system call that determines whether pages are resident in memory and whether they can be accessed without causing a page fault. As with msync(), libunwind uses this call simply to determine whether the address range is invalid or contains unmapped pages. libunwind requires this additional information about the memory page it is about to access because attempting to read from a virtual address without a corresponding physical memory address will cause a segmentation fault.

mincore() is not supported on all kernels and configurations, and specifically it does not return correct information for anonymous mappings, nonlinear mappings and migration entries on kernels older than 2.6.21 (this was patched on Feb 12 2007 here). Because the majority of the pages libunwind is attempting to access are accessed anonymously, mincore() will return an error for many pages that are properly mapped. libunwind interprets this to mean that the page is unmapped, and by default libunwind makes no attempt to access potentially unmapped pages.
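The validity probe can be demonstrated directly. Below is a Linux-only Python sketch that calls mincore() via ctypes the same way libunwind uses it in C: the residency bits in the vector are ignored, and only the success or failure of the call itself decides whether the page counts as mapped.

```python
import ctypes
import mmap

libc = ctypes.CDLL(None, use_errno=True)  # Linux: exposes libc symbols
libc.mmap.restype = ctypes.c_void_p
PAGE = mmap.PAGESIZE

def is_mapped(addr):
    """mincore() as a validity probe: ignore the residency vector and
    only check whether the call succeeds on the containing page."""
    page = addr & ~(PAGE - 1)
    vec = (ctypes.c_ubyte * 1)()
    return libc.mincore(ctypes.c_void_p(page),
                        ctypes.c_size_t(PAGE), vec) == 0

# Map one anonymous page, probe it, unmap it, probe again.
prot = mmap.PROT_READ | mmap.PROT_WRITE
flags = mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS
addr = libc.mmap(None, PAGE, prot, flags, -1, 0)
mapped_before = is_mapped(addr)   # page is mapped
libc.munmap(ctypes.c_void_p(addr), ctypes.c_size_t(PAGE))
mapped_after = is_mapped(addr)    # same address, now unmapped
print(mapped_before, mapped_after)
```

On kernels older than 2.6.21, this exact probe would wrongly report many properly mapped anonymous pages as unmapped, which is the failure mode libunwind runs into as described below.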
This mincore() behavior causes unw_local_init to fail every time it is called on an x86_64 system. On i386 systems, the first call to mincore() when stepping through a chain of recursive calls succeeds, but subsequent mincore() calls fail, resulting in the apparent collapse of the recursive call chain (functions that call recursively show up as only a single call in the profdata_x_y_paths file) and greatly shortened call paths (depth <= 3).

glibc version <= 2.5-24 + x86_64 OS + executable compiled with optimization >= -O1

Profiling runs to completion, but the profdata_x_y_<etc> files are empty or contain very few lines, the profdata_x_y raw file is mostly 0's, and total_empty_stacks in profdata_x_y_debugging is very high.

Versions of glibc prior to glibc-2.5-24 did not include unwind information for the function __restore_rt, which is used to return after a system call. libunwind depends on proper detection of signal frames to be able to locate unwind information, as some signals point to the previous instruction and some to the next. libunwind was incorrectly decrementing the address in its internal instruction pointer variable and so getting unwind info for the function before __restore_rt (killpg in the case of glibc-2.5-24). This causes libunwind to get an incorrect return address, fail at locating further information, and return an error, resulting in the truncation of call chains at the signal handler.

Profiling runs of several hours sometimes fail after running for a long time. Upgrade to a later version of SimpleProfiler. An internal vector was missing bounds checking for stacks with depth > 1000. Normally such stack depths are very rare, but in the case of missing or incorrect unwind info, libunwind can occasionally attempt to unwind through invalid memory addresses (such as those of the heap and stack), resulting in an essentially unending unwind until resources are exhausted or variables overflow, causing the segmentation fault.
In the case of exceptionally long runs, these rare occasions become commonplace, although they occur at unpredictable moments.
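The fix, bounds checking the internal vector, amounts to capping the stack walk at the buffer size. A language-neutral sketch of the idea (this is an illustration, not SimpleProfiler's actual code):

```python
MAX_DEPTH = 1000  # mirrors the internal buffer size mentioned above

def collect_stack(start, next_frame):
    """Walk a frame chain, but never deeper than the buffer: a corrupt
    chain (e.g. unwinding through heap memory) then terminates cleanly
    instead of overflowing the buffer."""
    stack, frame = [], start
    while frame is not None and len(stack) < MAX_DEPTH:
        stack.append(frame)
        frame = next_frame(frame)
    return stack

# A self-referential "frame" simulates a corrupt, never-ending chain:
# without the depth cap, this walk would never terminate.
print(len(collect_stack("bad-frame", lambda f: f)))  # 1000
```

The cap trades a truncated (and visibly suspicious) call path for a crash, which is the right trade when the unwind info itself cannot be trusted.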
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735963.64/warc/CC-MAIN-20200805153603-20200805183603-00288.warc.gz
CC-MAIN-2020-34
4,066
20
http://www.ancquest.com/translate.htm
code
Language Modules and Translations

Visit often for the most current information on language files for Ancestral Quest. Watch a video tutorial explaining the translation capability. For those who have translated Ancestral Quest into another language, you can keep aware of ongoing changes to the items to be translated.

Information as of 9/27/2017:
- Both Danish and German are available with full translations, including the help files. (Danish requires a fee to acquire.) French, Spanish and Norwegian translations are complete except for the help files -- all screens and reports are translated. (These translations are free.) Chinese (Traditional), Finnish, Hungarian, Polish, Portuguese and Swedish are partially translated -- the most commonly used screens and reports have been translated, but the translation is not complete. (These translations are free.) You can download most of these language modules directly from within AQ. Click here for instructions.
- If you create a translation file and wish Incline Software to publicize it on this page, please contact us at email@example.com.

To Create a Language File:
- Under the Tools menu, go to Language > New Language File.
- In the Language box, select the language for which you wish to translate. This can be any language, including English. Any translation files already installed on your machine for that language will appear in the Installed Language Files box.
- If you have chosen any language except English (United States), simply type a brief name or description in the Enter Name of New File box, then click the Create button.
- If you have chosen the English (United States) language, you will notice that there is always a preinstalled language file named *Ancestral Quest Native Translation. You cannot overwrite this file. It is the English translation that is built into AQ. All other translation files are based on this file.
If you wish to create an alternate English translation (so that you can change some English words or phrases), type a different name in the Enter Name of New File field, then click Create.

To Select a Language File:

There are two methods for selecting a language module. The easiest is to select the language from the dropdown box in the main screen of the program. Click here for instructions. Alternately, follow these instructions:
- Under the Tools menu, go to Language > Select Language File.
- If you have received a language file from another user, over the Internet or through some other means (once these become available), click on Install another language file… Once you browse for and locate that language file, AQ will move it to the appropriate location, so that in the future it will show up in the list of translation files.
- Select a language from the Language dropdown list. Any translation files already installed on your machine for that language will appear in the Installed Language Files box.
- Highlight a language file from the list.
- Click the Select button. Until you change this selection, the selected file will determine which language is used by Ancestral Quest. (If you select any language file other than the native language file, you will see the description of the file in Ancestral Quest's title bar.)
- Note that if you choose the *Ancestral Quest Native Translation file, you cannot edit it. You can edit any other translation file.
- Warning: You may receive a translation file from someone else -- you may even pay for such a file. AQ will not stop you from editing any language file except the native translation. Make such changes at your own risk.

To Edit (or Translate) a Language File:
- Under the Tools menu, go to Language > Edit (Translate) Language File.
- If you have not yet selected a language file other than AQ's native language, this option will not be available.
You will need to either create a new language file or select one to be able to edit it.
- With the Translate from Original English… screen up, you can work on changes to the selected file. Following are some guidelines:
- Many screens have a combination of the actual screen (with various labels to be edited) and other supporting text that may be shown on the screen. In order to fully translate a screen to another language, you will need to translate both the screen and its associated text.
- Reports are simply a collection of various pieces of text. If you use the default option, we have grouped pieces of text together under the name of a report. Sometimes, when a text phrase is used by more than one report, it is not shown under all report groups -- it may be shown only with one or two of the reports to which it is associated. If you have translated all the text for a report and then find that some text has not yet been translated, you may want to change your options to show all the text phrases in a single alphabetical list -- this should help you locate the text to translate.
- We have tried to group screens together, with supporting screens underneath a more major screen. If you have trouble finding a particular screen, try changing the options to show the screens in a single alphabetized list.
- If you edit a text phrase and then are not sure that you have captured the meaning correctly, you can usually show the original English version of the text, or even reset the text back to the original English.
- Many, if not all, of the text phrases have a limited space in which they can fit. We have optimized AQ for the native English, and in many cases tried to provide a little extra room in case the translated text requires more space than the English. If your edited text does not fit, you will need to choose alternate phrasing, or abbreviate.
- When a text phrase is unclear as to what it means, AQ has the ability to provide guidance.
For example, you may simply see the letter "B", which might be used for "Birth" or "Burial". If you come to an unclear text phrase, send us a note at firstname.lastname@example.org, and we'll try to provide a description in the next build of AQ.
- Sometimes a text phrase on a screen is ALWAYS replaced by other text. In these cases, AQ can hide the phrase so you don't have to translate it. If you come across a phrase on a screen which, after translating it, never shows as you translated it, send us a note at email@example.com, and we'll hide it in the next build of AQ.
- If you are unclear about any part of this translation process, please e-mail us at firstname.lastname@example.org, and we'll try to address the concern.
- Some screens and text phrases are not intended to be translated. We have blocked these.
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055601.25/warc/CC-MAIN-20210917055515-20210917085515-00505.warc.gz
CC-MAIN-2021-39
6,702
34
https://community.adobe.com/t5/digital-editions-discussions/can-t-see-annotations-after-moving-digital-editions-folder/m-p/10966099
code
I moved my Digital Editions folder to a new location on my hard drive, and when I went to open the book I have been reading and annotating for the course I teach, none of the annotations were there. The annotation file is still in the folder, but the link to the book seems to be lost. When I tried to open the book from URLLINK.acsm, the book was re-downloaded to my Documents folder and a new annotations file created. Is there any way to re-connect the annotations file to my book? I am using version 1.7.2 with Mac OS 10.6.4.

I read somewhere else that you have to keep the folder in the Documents folder, so I moved it back and everything seems fine again. What I was trying to do was move the DE folder to my Dropbox, so I would be able to read the same book (with the same annotations) on my desktop computer or my other laptop. Is there any way to read the same copy of the book (and see the annotations on other computers), or is the annotated version tied only to the one computer on which I originally started annotating it?

The link to the annotations is kept in the manifest.xml file (look for something like) ADE expects and enforces that the manifest file and the annotations are in the (My) Digital Editions folder in your (My) Documents directory (My is added on Windows, but not on Mac). There are no options to change this location.

It doesn't seem right for me not to be able to see my annotations when I'm using my laptop, desktop or ereader. It's my intellectual property, isn't it? I wonder if there is a way to sync that file across platforms. Why use the transfer feature if you can't refer to the annotations and highlights from the other platform?

I just figured it out!
In the Adobe application, click "View in Folder" and go to the annotations file for your book, then click "Open with - TextEdit" and you'll see the code (or whatever it's called) for all your annotations; copy it! Then, in the re-downloaded book, make an annotation so that the file is created on your computer, and open THAT annotation file. Then all you have to do is select the text of the new annotation file and paste in the text from your old annotation file; save and close, and it should be updated/reflected in the newly downloaded book! I have a paper due tomorrow and was finally moving to writing on the book I had downloaded after all the other sources, and I almost had a heart attack when I thought my carefully crafted notes were gone forever. Hope this helps!

It does! Many thanks! 🙂
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780055808.78/warc/CC-MAIN-20210917212307-20210918002307-00301.warc.gz
CC-MAIN-2021-39
2,607
9
http://www.ukoln.ac.uk/metadata/education/registry/intro.html
code
As part of the process of developing consensus on means of describing learning resources, UKOLN has created entries for some metadata schemas for educational materials in a metadata registry. The DESIRE Registry was created as part of a collaborative EU-funded project to enhance resource discovery for researchers.

The DESIRE registry has been implemented using a relational database (MySQL) and utilises a data model which supports the registration of data elements from multiple vocabularies or namespaces. Information about valid values for data elements may be recorded by associating elements with schemes. The model incorporates the concept of application profiles to describe the (re-)use of data elements for a particular purpose or application. An application profile does not define new data elements; it draws together previously defined elements from (specific versions of) multiple vocabularies/namespaces. Further, it may

The concept of the application profile has proved particularly useful in the context of schemas for educational resources. Many of the schemas use subsets of elements drawn from other metadata element sets (typically IMS, IEEE-LOM, Dublin Core): in terms of the DESIRE model, they are application profiles. The registry can record correspondences between the elements of these namespace vocabularies and the units of an underlying "semantic layer", which permits the generation of "cross-walks" between namespaces.

In practice, this has proved to be slightly problematic. It requires careful interpretation of the semantics of individual elements. It also requires either that the semantic layer includes units of similar specificity to the namespace elements (the shortcomings of a general-purpose semantic unit set are highlighted when dealing with domain-specific schemas, as in the case of MEG), or that the registry incorporates a mechanism to describe partial or "fuzzy" semantic correspondences.
The purpose of creating entries in a metadata registry is to make information about the structure and semantics of metadata element sets available in both human-readable and machine-readable form, so that they are available both to

In its present form the DESIRE registry fulfils only the first of these objectives. Another registry project in which UKOLN is a partner, SCHEMAS, is working on the construction of a registry which stores its content in RDF/XML-based forms. The goal of the SCHEMAS registry is to make its content available in forms which are useful both to the human browser and to a software agent.

The DESIRE registry contains several MEG-related entries.
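The semantic-layer idea behind the cross-walks can be sketched in a few lines. This is a toy illustration only: the element names below are invented for the example (real Dublin Core and LOM elements differ), but the mechanism is the one described above, with each namespace element mapped to a shared semantic unit and cross-walks derived by joining on that unit.

```python
# Each (namespace, element) pair maps to a semantic-layer unit.
# Element names are hypothetical, chosen only to illustrate the join.
to_semantic = {
    ("dc",  "creator"): "agent.creator",
    ("lom", "author"):  "agent.creator",
    ("dc",  "title"):   "resource.title",
    ("lom", "title"):   "resource.title",
}

def crosswalk(src_ns, dst_ns):
    """Derive src->dst element pairs that share a semantic unit."""
    by_unit = {}
    for (ns, elem), unit in to_semantic.items():
        by_unit.setdefault(unit, {})[ns] = elem
    return {m[src_ns]: m[dst_ns] for m in by_unit.values()
            if src_ns in m and dst_ns in m}

print(crosswalk("dc", "lom"))  # {'creator': 'author', 'title': 'title'}
```

The "fuzzy correspondence" problem noted above shows up here as soon as two elements share a unit only approximately; a plain join like this one cannot express a partial match.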
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104683020.92/warc/CC-MAIN-20220707002618-20220707032618-00352.warc.gz
CC-MAIN-2022-27
2,623
8
http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000055.html
code
[goals][tc] community-wide goals for train development cycle
doug at doughellmann.com
Tue Nov 20 00:28:37 UTC 2018

Last week at the Forum in Berlin we met to discuss potential community-wide goals for the Train development cycle. We had a good list of proposals, although some of them do not meet the criteria for the goals program as currently defined. We came up with a few suggestions that do fit the criteria, listed here in the order we discussed them:

* Ensure that each project has contributor on-boarding documentation in a discoverable place in its documentation tree, and link to that from the contributor portal at openstack.org/community. (proposed by
This should be relatively light-weight for teams to do, and will uncover some good opportunities for documentation reuse.

* Deletion of project resources. (proposed by tobberydberg and adriant)
This has been a long-standing request from operators, as discussed in a forum session earlier in the week. The proposed next steps are to establish a way for the existing os-purge program to accept plugins, so that project teams can declare the relationships between resources that need to be deleted, to make orchestration easier. See the etherpad from the session for more details.

* Finish moving legacy python-*client CLIs to python-openstackclient. (proposed by mriedem on behalf of Tim Bell)
This is another initiative that has been going on for a few years, and completing it would give users more clarity as well as providing a more consistent user interface. The general idea was to finish implementing all of the commands within OSC using the existing client libs, then update the SDK to support those features so we can drop the client libs from OSC as well, giving us 1 SDK and 1 CLI. That's a multi-step process, so I think for the first phase, targeting completion of all of the commands within OSC using the various client libraries would be enough for 1 cycle.

* Ensure all projects pass request-id when calling one another.
(proposed by Pavlo Shchelokovskyy)
Passing request-id allows us to chain requests together and trace them for debugging and profiling. We started this a while back and some projects are doing it, but not all.

* Rolling out oslo health-check middleware so each project provides that API. (proposed by mugsie)
This came up at the PTG in Denver earlier this year. There will be some up-front work to make the health check API able to support real backend checks for the database, message bus, etc. Then we would need to add the middleware to all services.

In all of these cases, we have a bit more up-front work to do to be ready to accept the idea as a goal. There should be time during the remainder of the Stein cycle to make enough progress on that work to know if we could accept it as a goal. Our next steps are for the people proposing these goals to ensure that up-front work is done and to be prepared to write up the goal document for review. If the proposers aren't able or willing to act as goal champions, we need to identify someone else to fill that role. I propose we set a target of early March for having the formal goal proposals ready for review. That will give us time to approve them well before the next Summit/PTG, so that teams can use time during the PTG for planning their implementation.

1. https://etherpad.openstack.org/p/api-sig-stein-ptg (line 20)

More information about the openstack-discuss
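The request-id chaining in the fourth proposal boils down to a simple propagation rule. A sketch, for illustration only (the real implementation lives in OpenStack's oslo libraries; only the header name is the one OpenStack actually uses):

```python
import uuid

REQUEST_ID_HEADER = "X-Openstack-Request-Id"

def outgoing_headers(inbound):
    """Propagate the caller's request-id, minting one only at the edge,
    so a single id ties together every hop of a chained request."""
    headers = dict(inbound)
    headers.setdefault(REQUEST_ID_HEADER, "req-" + uuid.uuid4().hex)
    return headers

# A service receiving a request-id forwards it unchanged:
chained = outgoing_headers({REQUEST_ID_HEADER: "req-abc123"})
print(chained[REQUEST_ID_HEADER])  # req-abc123
```

With every service applying this rule and logging the id, a single grep across service logs reconstructs the whole call chain, which is the debugging and profiling payoff mentioned above.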
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027330968.54/warc/CC-MAIN-20190826042816-20190826064816-00397.warc.gz
CC-MAIN-2019-35
3,414
57
https://forum.serverless.com/t/should-variables-go-to-provider-or-custom-section-in-serverless-yml/6179
code
I want to declare a variable in serverless.yml, like threshold: 10. Should I put it in the provider or the custom section? It seems to me both sections should work, but what is the standard or best practice? Besides, what should go into the provider section? I can't really find any documentation about that. I read through https://serverless.com/framework/docs/providers/aws/guide/intro/ but it doesn't talk about that.
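For what it's worth, the common convention (not an enforced rule) is that `provider` holds provider-level settings the framework itself understands (name, runtime, region, stage, IAM), while arbitrary user-defined values go under `custom` and are referenced with `${self:custom.threshold}`. A minimal sketch, with a made-up function name:

```yaml
custom:
  threshold: 10          # user-defined value lives under custom

provider:
  name: aws
  runtime: nodejs14.x    # provider-level settings the framework consumes

functions:
  check:                 # hypothetical function, for illustration
    handler: handler.check
    environment:
      THRESHOLD: ${self:custom.threshold}
```

Keeping user-defined values out of `provider` avoids collisions with keys the framework may add meaning to in later releases.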
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103821173.44/warc/CC-MAIN-20220630122857-20220630152857-00180.warc.gz
CC-MAIN-2022-27
418
3
https://www.my.freelancer.com/work/iax-softphone-video-support/
code
...small business provider of support to U.S. government missions and the initiatives of partner nations. Since 2011, Matlock has applied its expertise in custom software design, full-scope database and IT network services, Intel Analyst, technical security systems engineering, construction, logistics and rapid response in support of customers around the

We are producing a live Bollywood multimedia theater show and are seeking a creative editor with vision and a love of Hindi film to create the film edits that will accompany live dance on stage. Prefer someone local to Los Angeles, to be able to meet in person to review edits as the project evolves. [login to view URL]

The aim of this project is to explore the tools provided by the programming languages Julia, Python, R, and Octave to handle missing data. ...involves the ease of manipulation of Julia, Python, R, and Octave and the time consumed during their execution. Besides, you have to study whether one of the following software packages supports handling missing data:

I need someone to help me set up the correct VB.NET syntax for a POST controller request using Postman. Simple setup for a person that knows the... Public Sub PostValue(ByVal value As String) Dim ret As String = SendDataSQL(value) End Sub but Postman says "Message": "The requested resource does not support http method 'POST'." Will send video and all info required; need someone available to upload info. Please post availability and price per hour or per video.

Hi, I am looking for a graphic designer/animator who can help build a couple of animated videos/doodles for my company. The core contents would be around product illustrations and explanations. Interested ones, please drop me a note. Cheers!! Please send a proposal with links to your previous works. If I like your previous works, I will share more details.

...n username: super(atsymbol)[login to view URL] Password: test123 This page above does not fit my requirements enough.
Many issues are happening: many payment gateway mistakes, bad support from the employee, and important features I need are missing, like seeing trades live. I want to stop using the current one and start fresh with the below Laravel, or if you

Need 1000 videos of 15 to 30 seconds each to be created using the Offeo software.

Your requirements: - Very good English conversation skills - Have Facebook Messenger (for personal calls and video calls with our tester) - Android Studio experience with ALARM services and UI design skills. Misc: - Changing the functions or features that already exist in the application (even a single line) without requesting to do so will result

Oracle Integration Developer - Minimum three years of Oracle EBS application development and support experience - Minimum three years of experience working with the Oracle Cloud ERP application - Experience of ERP or cloud data conversion processes, inbound and outbound interfaces, integration using Oracle SOA, BI Publisher, PaaS Oracle Integration Cloud

...and participate in the overall application lifecycle - Focus on coding and debugging - Define and communicate technical and design requirements - Provide training, help and support to other team members - Build high-quality reusable code that can be used in the future - Develop functional and sustainable web applications with clean code - Troubleshoot

I am using the progress-request module in Node, and it happens that on one server the mp4 videos download, but on another server they do not: the file is created but empty, and the progress-request 'end' event fires automatically. I don't know what it could be; I tried everything, but the file always comes out empty. let path = "/home/admin/web/[login to view URL]";...

Need to develop a Google Drive API script for downloading and uploading a file w...backup option. File downloaded directly using multiple accounts, which is filled. Cloning and deleting; cloning and downloading directly.
Which is shared on a website or script. Check the video below for what I want. [login to view URL]

...for all pages
- Images/video optimized
- Social media integration
- "Breadcrumbs"
- All titles (title tags, meta tags, etc.)
- Redirects (301) when necessary
- XML sitemaps
- Anchor text
- Google Analytics
- CLEAN CODE; too many times there is a lot of garbage that sticks around

We want the wow factor. Something exciting, video-game-ish, easy and exciting.

Hey, if you have some free time I'd like to use your help for sourc...the music. It should be similar to this: [login to view URL] Make sure you listen to REALLY get an idea. The price will probably be $3 per used song in a video. I'll ask for one song before we start working so as to understand if you got the music style I need.

...team member in our IT department. We need someone who is a web developer and specializes in WordPress and Drupal, who can speak English fluently and can help us with our IT support work as well. We have been working with freelancers from here for the past 15 years and many of our associates have almost 10 years with us. We are looking for a long term

...European freelancers (ongoing cooperation) for these REMOTE positions: Junior eCommerce Growth Specialist, Senior eCommerce Growth Expert, Advertising Copywriter (part-time), Video Editor (for eCommerce ads). Interested?! Apply here: [login to view URL] . gency/careers/ About Us: a boutique A-level team crazy for challenges, bringing results & believing that

...making lips match the words I say, and these videos will be 30 minutes each. Budget $50 per video, 3 times per week, every week. The max budget is $50 per video; we cannot pay any more, so don't ask. Each video gets trashed every week and only lasts 7 days. This series is based on a horse giving out news from behind a desk and by walking in talking

...task is 5.00 USD only (please do not apply if you do not agree).
I'm looking for freelancers to do a short (10-15 second) self-video testimonial clip.
- Must be fluent in English.
- Must provide a good HD recording.
- The self-video testimonial must share a greetings and success wishes clip for an event program.
The text script: "Hi, my name is ......., from

I already have a trekking video of mine. I want a nice blog out of the video. I will also provide some additional information about the trek. The languages used in my video are a mix of Bengali, English, and Hindi. I will need an Indian (preferably Bengali) writer for this work. Following is the video for which I want the blog content: https:[Remov

...the main menu; please check it on the website to see the idea. The website is for an IT support company. It needs to be very clean, following WCAG compliance. The homepage is: [login to view URL] The contact page is: [login to view URL] The above is only the idea for the pages, but they don't look so good/clean; you can send a new idea.

Hi, I'm looking for someone to create 10 different video ads for the game Domino.
- After Effects; deliverables: .aep and mp4 at 1920x1080 resolution
- 15-30 seconds
- Examples of competitors' ads: [login to view URL]
- Graphic assets and sounds from https://elements

I need a well-dressed man to walk up to an open door and enter; the door closes behind him and the camera zooms in on a plaque. More detail to follow. The man's race should not be identifiable; preferably no facial features. Predominantly black and white, except the door, which will be a textured South African flag. I would like to discuss the angles you may have in mind. Max 6-7 seconds. £100 negotiated.

Need a Google Ads expert to generate calls for tech support. Should have experience running and managing Google Ads campaigns for tech support.

We are seeking an experienced full-stack C# web developer to join a team of developers working on a number of projects.
You will need to support ongoing changes that need to be made to existing projects based on customer feedback, and also engage in the design stages of new greenfield projects. This is a full-time position for an initial period of

We are looking for a DevOps (or DevSecOps) person for long-term cooperation. We need someone who can support us in various activities, but not on a permanent basis. Most of the projects we have are in AWS, so experience with cloud technologies is desired, but we are open to everybody who wants to learn and gain some experience. You are a good fit if

...my account. You will have to:
* Convert the theme to iOS & Android apps (preferably via Flutter)
* Work on updates for these apps (bugs, features, improvements)
* Provide support for the apps
The apps will be:
* Sold from my Envato account
* Maintained from my GitHub
This is a long-term partnership; as you can see, my theme has a lot of

Hello! I am looking for someone who can do branding for the development of a startup (logo, typography, and color palette). I also need another person to make a corporate video about the business in Spanish and English. The venture is related to cryptocurrencies.

Location: Germany. Language: German. Only professionals residing in Germany can apply. Manage Windows Server operating systems 2003/2008 R2/2012/2016 · File server management · Maintain and administer DNS, DHCP, NFS, NIS, DFS roots, and Group Policy · ...

We need a data visualization that can be loaded with different data sets and visualize the data relationships. Review this video for the full idea of what we need: [login to view URL] The goal is that it can display in a section of a web page or a separate pop-out. We are not looking to pay someone to learn how

We are a fintech company selling wealth management solutions.
We want to make a 2-minute, high-quality introductory corporate video about the company. We want animation, not still photos, using video of real objects (no sketches).

Hello, we are Socialfex and we want a 5-minute app explainer video with an English voice-over. The script is ready. This is for our customers, showing how they can use our application, and we want a very interactive and engaging video. I have over 30 more videos to be animated.

Hi, I have about 20 "not fast" Android tablets; all have motion detectors. I need these to show video and images in a given order. Your job is to set this up on one of the tablets, and I will copy it to the rest. I have AnyDesk installed on them. Note that I would like them to run at given times and/or be activated via the built-in motion detector.

...of a new blockchain on two servers. Setting up a block explorer of the new blockchain on a server. Setting up a mining pool of the new blockchain. NFT platform (SuperRare-style). Setting up wallet support for the MetaMask browser wallet. Desktop wallets: Mac, Windows, and Ubuntu. Mobile wallets: iOS and Android. Coin website: basic content. Frontend development: create a React app.

We are looking for a creative and innovative person with a great visual sense who is passionate about the world of design and illustration. Duties:
- Be comfortable with both character and scenery design.
- Knowledge of video.
It is important that the professional works with Slack, since all projects will be managed there.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038077843.17/warc/CC-MAIN-20210414155517-20210414185517-00488.warc.gz
CC-MAIN-2021-17
11,224
36
https://www.qoutube.com/watch/Rm2FYYvrRkpBEFt
code
5 Steps to Simple Data Visualization for Nonprofits

Data visualization is an effective way to communicate your nonprofit's impact because humans are able to process images 60,000x faster than we can process text. Thus, images are a powerful tool for conveying your nonprofit's message. You will learn:
- how to make data visualizations in Google Sheets
- how to choose the right type of graph
- how to customize and organize a more effective visualization
- how to embed and use this interactive data viz

We know that it is not always the most accessible thing for non-coders, but luckily, we have found a simple way to do data viz using Google Sheets, which makes it extremely simple, accessible, and easy to update.

1. Use the right type of graph (01:03)
What's your goal with the viz? Is it to highlight relationships or to show composition? Nailing down these questions will help you decide which graph to use. Here are the options in Google Sheets, and how we would recommend using them:
- Gauge chart: show goal and progress
- Map: show the breadth of your programs
- Motion chart: emphasize progress/growth over time

2. Organize and clean (01:39)
After choosing your graph type, Google Sheets will generate that chart for you, but all its features may not be relevant in your situation. Some questions that you may want to ask during this stage include: Are background lines necessary? Does the sorting order make sense? Remove or sort things more clearly to match your needs after answering these questions.

3. Format labels and legends (02:09)
The next step is to go in even closer and edit the labels and legends so that people can better understand what exactly the chart is showing. This is the stage where you should:
- Add the title and axis names
- Remove the legend if necessary

4. Customize design (02:22)
Although Google Sheets does not have the best tools for manipulating colors, since there is no custom option, you can get pretty close by trying to match your colors to the palette provided.

5. Publish and embed (02:43)
Now just publish this graph from Google Sheets. Once you've done that, you can copy the embed code and paste it onto your site. Embedding the interactive graph is the same process as embedding a YouTube video, so it's easy.

You can assemble these graphs together and tie them with a narrative to show the impact that your nonprofit has made. Get out there and start making some awesome graphs to show off your great work!

Whole Whale is a digital agency that leverages data and technology to increase the impact of nonprofits. In the same way the Inuit used every part of the whale, Whole Whale leverages existing resources to ask, "What else can this do for us?" By using data analysis, digital strategy, web development, and training, WW builds a 'Data Culture' within every nonprofit organization they work with. Check us out on Facebook. Visit our website:
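The publish-and-embed step above boils down to wrapping the published-chart URL in an iframe, which is essentially what the embed code Sheets hands you looks like. A minimal sketch (the spreadsheet ID and query parameters here are hypothetical placeholders; copy the real embed code from File > Publish to the web):

```python
# Build an iframe embed snippet around a chart URL published from Google
# Sheets. The URL below is a hypothetical placeholder, not a real chart.
def build_embed_snippet(published_url: str, width: int = 600, height: int = 400) -> str:
    """Wrap a published-chart URL in an iframe, like the code Sheets generates."""
    return (
        f'<iframe src="{published_url}" '
        f'width="{width}" height="{height}" frameborder="0"></iframe>'
    )

url = "https://docs.google.com/spreadsheets/d/e/EXAMPLE_ID/pubchart?oid=1&format=interactive"
print(build_embed_snippet(url))
```

Pasting the resulting tag into your site's HTML is all the "embed" step involves, which is why it works anywhere a YouTube embed would.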
s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195526818.17/warc/CC-MAIN-20190721020230-20190721042230-00261.warc.gz
CC-MAIN-2019-30
2,877
31
http://www.linuxquestions.org/questions/gentoo-87/missing-libwrap-for-gnome-emerge-764541/
code
Gentoo: This forum is for the discussion of Gentoo Linux.

I'm trying to install GNOME, and I keep getting the "die 'econf failed'" error for gnome-base/gnome-session-2.20.3. Here's the error:

checking for TCP wrappers... configure: error: "libwrap not found!"
ERROR: gnome-base/gnome-session-2.20.3 failed
  ebuild.sh, line 49: Called src_compile
  environment, line 2572: Called gnome2_src_compile
  environment, line 1957: Called gnome2_src_configure
The specific snippet of code: die "econf failed"
The die message:

I did a little research and found it may be due to me missing tcp-wrappers. I re-emerged tcp-wrappers successfully, but I'm still having the same issue. Any ideas as to what's going on?

Well, I've tried "emerge -pv libwrap" but there is no such ebuild. The question is whether there is something wrong with your tcp-wrappers. I sometimes had the problem that such a package needed more (or other) USE flags. But tcp-wrappers has only "-ipv6". Are there other failing dependencies in the logfile?

js-sandbox jsec # emerge -pv gnome-session
These are the packages that would be merged, in order:
Calculating dependencies... done!
[ebuild  N    ] gnome-base/gnome-session-2.20.3  USE="branding esd ipv6 tcpd -debug" 0 kB
Total: 1 package (1 new), Size of downloads: 0 kB
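The `USE="branding esd ipv6 tcpd -debug"` string in that emerge output is what decides whether gnome-session wants TCP wrappers: `tcpd` enabled means libwrap must be present at build time. A small sketch (plain string handling, not a Portage tool) that checks whether a given flag is enabled in such a line:

```python
# Parse the USE="..." portion of an `emerge -pv` line and report whether a
# given flag is enabled. Flags prefixed with "-" are disabled.
import re

def use_flag_enabled(emerge_line: str, flag: str) -> bool:
    match = re.search(r'USE="([^"]*)"', emerge_line)
    if not match:
        return False
    flags = match.group(1).split()
    return flag in flags and f"-{flag}" not in flags

line = '[ebuild  N    ] gnome-base/gnome-session-2.20.3  USE="branding esd ipv6 tcpd -debug" 0 kB'
print(use_flag_enabled(line, "tcpd"))   # tcpd is enabled, so libwrap is needed
print(use_flag_enabled(line, "debug"))  # listed as -debug, so disabled
```

With `tcpd` on, configure looks for libwrap; building tcp-wrappers with the right flags (or disabling `tcpd` for gnome-session) is what resolves the error above.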
s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189534.68/warc/CC-MAIN-20170322212949-00319-ip-10-233-31-227.ec2.internal.warc.gz
CC-MAIN-2017-13
2,698
27
https://customercare.igloosoftware.com/playbook/bp_center/bp_evolve_stage/best_practice_how_to_inform_your_organization
code
Communicating to your organization effectively (BP)

When communicating key information to your employees, consider the different methods available to you within your digital workplace. Depending on the information you're pushing out and its impact, you'll likely be considering the broadcast functionality, a channel all employees are subscribed to, or a quick post on a microblog.

Recommendations

Here are some considerations to help you decide how to communicate key information to your organization:
- If this is a one-time item that everyone at the organization should know, broadcasting is a great method. What broadcasting gives you is an email notification as well as a message alert in the user bar, so if users have a spam filter set up or are not actively checking their email, they can still see an indication of something new by browsing their digital workplace. This is a great option when there is an immediate action item and you want the widest possible reach to inform staff. An example may be a big announcement about the company.
- If there is information that periodically comes out that the organization should be aware of, subscribing all members to the channel is the better option. You can do this by clicking on the channel actions and subscriptions. To prevent being overloaded with email notifications if this is an active area, try setting a daily or weekly cadence instead of instant notifications. You can also do this on a specific article if you want your staff to see all comments on that content. An example could be new hires in the company.
- If this is a quick alert that will expire, then a microblog is the tool of choice. An example is that company t-shirts are available at reception. Within a few days those shirts would probably be unavailable, or at least no longer at reception.
- If you have a few pieces of content that have already been created and only now have been asked to be shared more broadly, again, microblogs are the way to go, sharing from within the user bar (the paper-and-pencil icon). Browse to the content in question and click that icon. When you click that button there is a checkbox called "Reference current page". By using this you don't need to remember a URL, and anyone who views the microblog can tell what type of content you are sharing and probably your expected action. An example would be a mandatory meeting that you now need RSVPs for to know the number of salads to order.
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038460648.48/warc/CC-MAIN-20210417132441-20210417162441-00366.warc.gz
CC-MAIN-2021-17
2,480
9
https://answers.sap.com/questions/9470282/blocking-creation-of-po-without-material-number.html
code
I want to block the creation of purchase orders without material numbers for certain users, but they should be able to do everything else. Please let me know if this would be possible using something in the authorization part. I was also looking at the configuration for function authorization for buyers, but I am not able to understand it completely. Looking at the configuration, does it mean that I need to create a new function authorization which has all the checkboxes checked except for "W/o Material" and assign that function authorization to the user masters of those particular users?
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304876.16/warc/CC-MAIN-20220125220353-20220126010353-00393.warc.gz
CC-MAIN-2022-05
591
2
https://www.learnxhosa.co.za/contact/
code
Thanks for your interest, and patience. We will still make every effort to get back to you ASAP.

Office hours: We are generally on our laptops in office hours and often far beyond, but sometimes we are in rural villages or on mountain tops without network for a few days, sometimes even weeks, depending on the time of year. So please be patient. Sicela ube nomonde!

Do please stay SUBSCRIBED at either or preferably both of these: We communicate the latest info, events, and opportunities to those interested via our mailing list and newsletter service. Join this admins-only posting Learn Xhosa News Group:

GET A QUOTE: For an official quote, please email:
- which levels you would prefer to do (more info here)
- probable number of delegates / learners
- preferred days of the week and times
- preferred starting dates
- and your organisation's address

Connect With Us: Message UBuntu Bridge on WhatsApp: https://wa.me/message/MEX6UNEEF34QB1

Physical Address: Cape Town, South Africa

You can also keep up-to-date with us via: Siyabulela / We are grateful!
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100540.62/warc/CC-MAIN-20231205010358-20231205040358-00671.warc.gz
CC-MAIN-2023-50
1,048
18
https://www.pocketbikeplanet.com/threads/exhaust-pipe-leak-fix.158/
code
I have an X3 with a tuned pipe. It leaks oil between the manifold (not the gasket) and the pipe, getting it all over my shoes. My first guess is that I'm using too much oil (25:1, which is "normal"), but if oil leaks then air leaks out also, maybe losing back pressure? Anyway, can someone suggest a way to seal it? Maybe JB Weld it?
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00430.warc.gz
CC-MAIN-2022-40
335
1
https://hyperchange.com/products/hyperchange-dad-hat
code
The iconic HyperChange dad hat. Comes in all black, with an embroidered hot pink HyperChange logo. - Adds a flair of HyperChange swag to any fit - Improves scheming by 69% - Increases chance of an Elon Musk twitter reply by 420% 2 custom HyperChange pins are included in every dad hat order :) 🎁 Limited edition. Only 420 made. #artisanal
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511364.23/warc/CC-MAIN-20231004084230-20231004114230-00009.warc.gz
CC-MAIN-2023-40
341
6
https://brigade.codeforamerica.org/about/2021-dei-report/
code
Diversity, Equity, and Inclusion in the Brigades: 2021 Report

At the beginning of 2019, the Brigade Network embarked on our first-ever Brigade census to gather demographic data to establish a benchmark for the diversity of the Brigade Network. We deployed the census in the first quarter of 2019 and over 1,000 people responded in the first three months. Our analysis resulted in a few takeaways:
- In 2019, our Network was 37% people of color. The racial diversity of our Brigade Network was roughly on par with that of the population of the country as a whole.
- 27% of our Network did not work in tech at the time, representing more than one quarter of our Network. The fact that a significant portion of our Network works in other fields represents a diversity of industry that adds strength to the talents and capabilities represented in our ranks.
- At the time of the census, 37% of our Network were over 40 years old. Statistics show that the tech field often skews towards a younger demographic. This showed us that our Network is more inclusive of people of all ages relative to the industry.

You can view the full results of the census here. While our analysis showed that the Network was diverse in race, skills, and age, it also showed areas that needed serious work:
- We significantly lacked representation from underrepresented groups in tech such as Black, Latinx, and Indigenous folks
- Our gender representation heavily skewed male
- Our income statistics showed vast overrepresentation of higher incomes

Because of these findings, we focused on three main goals for ourselves: increasing racial diversity, increasing gender diversity, and increasing income diversity within our Network. To take action, we outlined a path forward for reaching the above goals (it should be noted that many of the strategies we employed were focused on increasing racial and gender diversity).
Below are the strategies we committed to using, and the successes we had over the last two years executing them:

Create concrete goals and action plans behind said goals to make progress
- We set a goal to increase racial diversity in our National Advisory Council (NAC), and increased our racial and gender representation with a NAC that is now 33% BIPOC and 66% women/non-binary.

Provide resources to Brigade leaders for intentional recruitment and ways to foster a welcoming environment for all community members to participate
- At each of our in-person trainings and events, we recruited guest speakers to address the conference at large and provide resources to help Brigade leaders increase diversity within local Brigades
- We invited Brigade leaders to join the Racial Privilege Accountability and Learning cohort, started in 2020 and led by Code for America's DEI Committee
- We shared anti-racism, accountability, and support resources with the Network as part of the racial reckoning that came to prominence in the summer of 2020
- The Colors of America group was created to foster community among Brigade members who identify as people of color through regular meetings and a dedicated Slack channel
- As part of our Rapid Response Priority Action Area in 2020, Local Transparency, Accountability, and the Fight for Racial Justice project canvasses were created to help Brigades be a part of further racial justice initiatives in their communities

Ensuring a diverse presence at our in-person events (such as Code for America Summit and Brigade Congress) to foster a diverse and inclusive environment when we meet on a national stage, in addition to the presence we have in our communities
- At the 2019 Brigade Congress (the only in-person event we have been able to hold since the release of our 2019 Brigade Census results), we increased our racial diversity to 50% of attendees identifying as non-white, a 25% increase from 2018

We reviewed data collected through the Brigade
Census since its initial deployment in 2019 to measure our progress and identify areas that need continued growth. Our review revealed:
- Age skewed younger: it appears that there are slightly more people in our Network who are under 40 than there were in 2019. (Census Respondents by Age)
- There are significantly more women and non-binary/third-gender folks than in 2019. (Census Respondents by Gender)
- There is a slight increase in folks in a lower income bracket than in 2019. (Census Respondents by Income)
- There is an increase in folks who identify as BIPOC. (Census Respondents by Race/Ethnicity)

Given these results, and our commitment to continuing this work to strengthen our Network, we've identified three areas on which we will focus our efforts in 2021 and 2022, as well as strategies to execute these focus areas and continue to work towards our goals.
- Partner with organizations serving BIPOC individuals and women and non-binary folks in tech. If you have connections to any organizations, or suggestions on who we can work with, please let us know!
- Enable Brigade leaders and members to table and present at conferences that serve underrepresented individuals.
- Support underrepresented individuals to attend our national events (Summit, National Day of Civic Hacking, Brigade Congress) through digital marketing, promotion, direct outreach, and financial support.
- Provide training resources for Brigade leaders on DEI, including but not limited to encouraging folks to join Code for America's Racial Privilege Accountability and Learning cohort.

Create opportunities for community and increase retention
- We will continue to hold Colors of America meetings and experiment with meeting formats that increase connection and reach wider audiences.
- Project focus: highlight projects that serve communities we are looking to build membership from.
We fully embrace that this journey will never be complete, and that equity, fairness, and justice are never stagnant or fully achieved. We know creating a diverse and inclusive Network is an ongoing journey and we are committed to doing the work to make our Network stronger and build diverse teams of volunteers doing important work in our communities.
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224652116.60/warc/CC-MAIN-20230605121635-20230605151635-00709.warc.gz
CC-MAIN-2023-23
6,120
40
https://forums.adobe.com/thread/2154354
code
My opinion: hard disk drives are fine for backup and archiving; get modern and go SSD.

Yes, Bill's advice is good. Only have the OS and programs on your "boot drive", which should be a quality SATA III SSD, like a 500 GB or larger Samsung 850 Pro. Then ALL ELSE should go on a new Samsung 950 Pro PCIe SSD, which runs at over 2 GB/sec read and 1.5 GB/sec write. You may need an adapter card in order to plug this new drive into an available PCIe slot. Then a large, spinning HDD can be used to back up and archive completed files and valuable media.

Or, you can just install two or even three quality SSDs in a fast RAID 0 off the motherboard to give you a higher-capacity single volume for your "media drive" or "project drive". ALL files not on the boot drive would go here for best performance, and again, you would have a large, spinning HDD also installed to serve as an archive and backup drive, preferably an "enterprise level" 7200 RPM hard drive with a large capacity, at least as big as the RAID 0 volume.

A RAID 0 of three Samsung 850 Pro SSDs should give at least 1.5 GB/sec read and 950 MB/sec write speed. Spinning HDDs are just too slow in comparison for use in current video editing, and are often the main cause of performance "bottlenecks". They are good for backing up and archiving.
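The RAID 0 figures quoted above follow from simple scaling: striped reads approach the per-drive speed times the number of drives, minus some overhead. A back-of-the-envelope sketch (the ~550 MB/s per-drive figure is the usual SATA III SSD ballpark, and the 0.9 efficiency factor is an assumption, not a measured value):

```python
# Rough RAID 0 throughput estimate: n striped drives scale close to
# linearly, discounted by an assumed efficiency factor for overhead.
def raid0_throughput(per_drive_mb_s: float, drives: int, efficiency: float = 0.9) -> float:
    return per_drive_mb_s * drives * efficiency

# Three SATA III SSDs at ~550 MB/s each land near the ~1.5 GB/s
# read speed the post mentions.
print(raid0_throughput(550, 3))
```

The same arithmetic explains why a single spinning HDD at roughly 150-200 MB/s can't keep up with multi-stream video editing, while it remains perfectly adequate as a backup target.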
s3://commoncrawl/crawl-data/CC-MAIN-2018-34/segments/1534221212910.25/warc/CC-MAIN-20180817202237-20180817222237-00680.warc.gz
CC-MAIN-2018-34
1,295
5
https://meta.stackexchange.com/questions/337971/change-the-community-wiki-thingy-so-that-it-wont-deadname-users
code
I recently realized that, even though I changed my username a while ago (more than six months), some of my old community wiki answers are still displaying the wrong username. We are now 9 years later and I do believe it's time for SE Inc to re-think this. I know SE Inc is trying to be more inclusive and that the new Code of Conduct is part of it. That's why I believe not deadnaming users should be something they would like to do too. I am (mostly) fine with old comments deadnaming me. However, seeing the "wiki thing" doing the same things isn't as nice. Especially since old answers of mine do use my updated username. To be clear, I don't like seeing my old username in comments. But I can understand that changing that would cost a lot. And, anyway, since comments are supposed to be temporary, I can always flag to delete them. Also, there is a reason for the fact I changed my username and I don't want to attract unnecessary attention about this several months later. So, just "editing all my old wiki answers" isn't really an option for me (not one I like, anyway).
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510903.85/warc/CC-MAIN-20231001141548-20231001171548-00707.warc.gz
CC-MAIN-2023-40
1,077
6
https://highvolsubs-amazon.icims.com/jobs/606287/software-engineer%2C-robotics/job
code
• Masters in Computer Science or related field
• Experience investigating, designing, prototyping, and delivering new and innovative system solutions
• Excellent judgment, organizational, and problem-solving skills
• Comfortable taking initiative and working across teams
• Experience with Robot Operating System (ROS)
• Knowledge of real-time software development
• Strong Python scripting skills
• Can thrive in a dynamic environment with multiple, changing priorities
• Excellent communication skills, including verbal, written, and listening

Lab126 is part of the Amazon.com, Inc. group of companies and is an Equal Opportunity-Affirmative Action Employer – Minority / Female / Disability / Veteran / Gender Identity / Sexual Orientation
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125945604.91/warc/CC-MAIN-20180422135010-20180422155010-00528.warc.gz
CC-MAIN-2018-17
762
10
https://community.infiniteflight.com/t/spicy-infinite-flight-movie/552002?page=2
code
Oh ok, I followed you on Omlet and texted you. Can we discuss it on Omlet Arcade? I just texted you there. Can we text each other on Omlet Arcade to discuss it? I'll download it again ;) My username on Omlet Arcade is Itsablacklineyt (neat movie btw) - Thx so much - I saw people doing this… so I thought it was allowed. No worries! Take my word for future reference 😉 Same here lol nice vid!!!
s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038074941.13/warc/CC-MAIN-20210413183055-20210413213055-00087.warc.gz
CC-MAIN-2021-17
400
9
https://postmansmtp.com/forums/topic/fatal-error-call-to-undefined-function-guzzlehttpchoose_handler/
code
The O365 extension worked well; it's nothing to do with tokens or the connection with the Outlook API. In fact, the plugin is sending mails correctly! But if, for instance, I edit an order (change status), it gives error 500. When I enable debug and repeat the same "quick order edit" on the orders page, it shows the following error (I don't even know why this error is showing up here, on the orders page, as this plugin has nothing to do with this case...).

Error (first domain folder omitted from URLs for privacy):

Fatal error: Uncaught Error: Call to undefined function GuzzleHttp\choose_handler() in wp-content/plugins/post-smtp-extension-office365/vendor/guzzlehttp/guzzle/src/HandlerStack.php:40
Stack trace:
#0 wp-content/plugins/post-smtp-extension-office365/vendor/guzzlehttp/guzzle/src/Client.php(65): GuzzleHttp\HandlerStack::create()
#1 wp-content/plugins/post-smtp-extension-office365/vendor/league/oauth2-client/src/Provider/AbstractProvider.php(132): GuzzleHttp\Client->__construct(Array)
#2 wp-content/plugins/post-smtp-extension-office365/vendor/league/oauth2-client/src/Provider/GenericProvider.php(99): League\OAuth2\Client\Provider\AbstractProvider->__construct(Array, Array)
#3 wp-content/plugins/post-smtp-extension-office365/inc/TokenCache.php(64): League\OAuth2\Client\Provider\GenericProvider->__construct(Array)
#4 wp-content/plugins/post-smtp-extension-office365/inc/Engine.php(88): Office365PostSMTP\ in wp-content/plugins/post-smtp-extension-office365/vendor/guzzlehttp/guzzle/src/HandlerStack.php on line 40

It seems to be some problem with the Guzzle library; I have seen reports of other plugins having the same error. I also tried to find a new version of the O365 extension, just in case you updated it, but I found the extension is retired ¿?
s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882572286.44/warc/CC-MAIN-20220816090541-20220816120541-00696.warc.gz
CC-MAIN-2022-33
1,763
7
https://forum.rclone.org/t/new-server-for-rclone-with-plex-which-cpu/7195
code
I’ve been using rclone with Plex for maybe 18 months now, and it’s been working really great. Got API bans earlier and switched to an rclone/Plexdrive setup (still encrypted with rclone) but recently switched to fully rclone with the cache. No bans so far. I’m planning to upgrade to a new server, and was wondering if you could help me choose. They are basically identical, but one is a Xeon and one is an i7 CPU. Which one would you pick? I have maybe 3-4 concurrent streams going at most, and usually it’s just 1 stream. Quick start of playback is a priority, and of course it should be able to handle at least 4 streams. What are your devices? It really depends on whether you are direct streaming/direct playing or transcoding. The servers are the same price, so I’m only after the performance difference. I would say I often need transcoding. I stream to a combination of the devices below; the top 2 are used every day: Samsung smart TV (mostly direct play) Mobile (Android and iPhone) Chromecast (not as often) Older smart TV (transcoding) This server is also the same price, so it’s between the i7-4770 and the Xeon E3-1246V3. I’d do the i7, as that’s what I have, and I use hardware transcoding for almost everything if it needs it. It’s a very low CPU hit. Hardware transcoding, are you referring to the Plex setting? Yes, you have to tick the box, and if you play something that transcodes, you’ll see in Tautulli a (hw) at the end of the line to show it’s hardware transcoding. You can also turn up the Plex logs to debug and you’ll see it in the logs as well (the first way is much easier). Thanks for your help! Edit: found a new server just a tad more expensive (0.5 more € per month). ECC RAM and with a datacenter SSD (?) and Intel iNIC. Slightly less SSD space though (240 vs 256). Can a Xeon CPU not do hardware transcoding, while an i7 can? Because of built-in graphics. What do you think? I went with the Intel® Xeon® CPU E5-2680 v3; it can handle multiple streams like a champ. 
Have you considered re-encoding media as it comes in to minimize transcoding and maximize direct play? What I found was that using H.264 with a high profile at 4.1 and adding an AAC stereo track alongside the existing DTS track enabled 80% of devices to direct play. For older devices, you might want to use a main profile instead of high. A project like https://github.com/mdhiggins/sickbeard_mp4_automator is very useful for normalizing your library.
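The normalization pass described above can be sketched as an ffmpeg command. This is an illustrative sketch, not a tested pipeline: input.mkv and output.mkv are placeholders, and the flags assume a stock ffmpeg build with libx264. The block only assembles and prints the command rather than running it:

```shell
# H.264 High profile, level 4.1; keep the original DTS track (stream 0:a:0)
# and append an AAC stereo copy of it, so most devices can direct play.
CMD="ffmpeg -i input.mkv \
-map 0:v:0 -map 0:a:0 -map 0:a:0 \
-c:v libx264 -profile:v high -level:v 4.1 \
-c:a:0 copy \
-c:a:1 aac -ac:a:1 2 \
output.mkv"
printf '%s\n' "$CMD"
```

For older devices, swap `-profile:v high` for `-profile:v main` as noted above; a tool like sickbeard_mp4_automator automates this kind of normalization across a whole library.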
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103877410.46/warc/CC-MAIN-20220630183616-20220630213616-00785.warc.gz
CC-MAIN-2022-27
2,443
32
https://www.webnoob.dev/
code
Webnoob, Vue.js and other related stuff. 10 tips about what you can do, use, and avoid, which help you develop more efficient and readable code. An overview of almost every basic concept of Vue.js. How you can turn your Vue web app into a PWA, the easy way. These tips help you develop a more efficient project that's easier to maintain and share, for small, mid-sized, and large projects. This tutorial is all about what Nuxt.js is in detail, which features you get, and how you can create your first project. My new blog, which I've built and deployed on Netlify.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816893.9/warc/CC-MAIN-20240414192536-20240414222536-00359.warc.gz
CC-MAIN-2024-18
576
7
https://www.afralisp.net/archive/Tips/code81.htm
code
Would you like to use AutoCAD commands from Visual Basic or VBA? Use the SendCommand method on the Document object. This example sends commands to the AutoCAD command line of a particular drawing for evaluation. We will create a circle in the active drawing and will zoom to display the entire circle.
'CODING STARTS HERE
ThisDrawing.SendCommand "_circle" & vbCr & "2,2,0" & vbCr & "4" & vbCr
ThisDrawing.SendCommand "_zoom" & vbCr & "a" & vbCr
MsgBox "A circle command has been sent to the command line of the current drawing"
'CODING ENDS HERE
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224645089.3/warc/CC-MAIN-20230530032334-20230530062334-00184.warc.gz
CC-MAIN-2023-23
497
11
https://applieddesigngroup8.wordpress.com/
code
With 4 weeks left before the project deadline, evaluation of our workflow and documentation strategy was key. Having a robust system for workflow is important: in industry, if another team were to pick up the development of a project delivered by a previous team, they would need to know how the old team worked in order to best continue developing it. This includes having complete access to all original assets used in the project, along with all notes, documents, and blueprints, in order to build a comprehensive picture of the original development strategy and be able to build upon it. Achieving this state of overall project comprehension is a key reason why documentation is so important – if every step of the development process is documented, for example why a decision was made, why a feature was not implemented, or how a feature is to be built, then a new team can easily build an overview of the process and save time by knowing what a previous team has already tried and explored. Additionally, documentation of prior development helps when planning further work on a project, as additions can be built to integrate easily with the old ones. This also includes information like program versions, as there are differences between older and newer versions of Swift, for example (cite?), file formats, and other format specifications – such as the resolution of images and videos. Finally, the systems used for workflow and documentation need to be accessible and translatable in order for a new team to make use of them. A proprietary workflow system may have benefits in terms of flexibility and privacy, but it needs to be documented well enough for the new team to use; it could also be argued that such a system has disadvantages, as it could be difficult to migrate into a new system. 
Documentation needs to be carried out in a systematic manner that is easy to read and access, through a type of hierarchy or signposting. We had originally opted to use our MediaWiki to host files, as it was adaptable, permission-controlled, and also hosted the rest of our main content for the project. However, we eventually switched to Google Drive (cite) to host the asset files for our project, as we found that using the MediaWiki as a main channel for quickly sharing files did not work so well: it was slow to access, supported a limited range of formats, and had a poor system for indexing and accessing uploaded content. Arguably this could have been remedied for longer-term usage with custom modifications, plug-ins, and other changes; however, this was not appropriate for the situation at hand. Changing to Google Drive allowed easy upload and access of files thanks to its well-designed interface; it worked natively on the variety of devices held by the team, allowing ubiquitous access to files and assets as needed; and its position as a standard service meant the team was already familiar with using it for sharing files, unlike the wiki. We also found ourselves employing other methods such as USB sticks, email, and direct file transfers for more acute, situational uses where appropriate. Primarily we communicated in person through arranged working sessions; outside of these we used a private group session on Facebook. For reasons similar to Google Drive, we opted for this because it was, again, permission-controlled, offered a lot of very useful features, and, most importantly, was already ubiquitous. 
We all had Facebook integrated with our phones, laptops, tablets, and various other devices already, being frequent users, and we feel this perpetual contact offered significant benefits for the project in terms of frequent, short progress updates and the sharing of minute details, which helped greatly in minimising gaps in communication. Due to the relatively un-indexed nature of communications on Facebook, however, we also made good use of a page on the Wiki to post important updates and synthesise meeting notes, providing a steadfast central position from which the most important issues could be made clear to the team. We conducted the vast majority of project-defining decisions through once-weekly meetings and group working sessions held multiple times a week. We continued to use Trello as an online SCRUM-board equivalent for the duration of the project to keep an overview of tasks in general, and used various pages on our Wiki to discuss specifics, such as what was happening on a day-by-day basis. We designated tasks to ourselves and each other, which worked well as it introduced an element of responsibility and accountability, meaning tasks were completed on time as expected. This also helped us know what tasks were in progress, and how and when targets would come together. We documented meetings with the project manager using OneNote (cite) to quickly note what was said throughout each agenda. This allowed indexed, hierarchical, and accessible notes to be produced, which were then synthesised onto the Wiki and distributed to any other necessary channels, with changes made to layout and content in order to improve accessibility for other team members. Tasks for the week were distributed to a page on the Wiki and to each team member, and client feedback was delivered to the relevant areas (dependent on where the feedback was directed) in order for changes to be made. 
We used a system of making notes individually, on the task at hand, which would then be discussed at team meetings if need be, or developed into fuller documentation on the Wiki. The Project Manager would collate any notes and thoughts into relevant areas on the Wiki for easier access throughout the duration of the project. Notes were also developed into fuller documentation to keep a development log of blog posts regarding the project on WordPress. Working documents were developed using software such as Microsoft Word, Calligra Words, Word Online, OneNote, Google Docs, and similar packages for their superior editing and proofing tools compared to the plain text entry used by the MediaWiki software, and then transferred to the Wiki when finished in order to make them easily readable and accessible for all members of the team, as a central part of the project. The Wiki became an invaluable cornerstone for documentation of the project as a structure soon developed listing all initial resources and assets from the client, the University, and individual contributions by team members including working documents from which features would be built and assessed, as well as documents showing content for the app, contact details for team members, roles, documentation for the use of the Wiki itself and to-do lists.
s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439735867.93/warc/CC-MAIN-20200804102630-20200804132630-00453.warc.gz
CC-MAIN-2020-34
6,960
9
http://theleafsnation.com/users/7367
code
Fixing tapes with pencils since '82 July 20 2012 10:00AM LAST GAME - APR. 9TH, 2016 July 1st - FREE AGENCY - 12 PM Copyright © The Nation Network 2016 | Glyphicons Free licensed under CC BY 3.0 | Advertise with us!
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783391766.5/warc/CC-MAIN-20160624154951-00194-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
215
7
https://dougs.livejournal.com/706806.html
code
His PC was equipped with Windows 2000. It doesn't work with versions of iTunes later than 7.3.2. Also, his PC had a dinky little 8GB disk, about 95% full. So yesterday I stuck a spare 250GB disk in there instead, and installed XP -- and then went on to install iTunes 7.4.3. Also AVG, Spybot S&D, AdAware, Firefox, MS Office, Acrobat Reader, Nero, ...
s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585450.39/warc/CC-MAIN-20211022021705-20211022051705-00543.warc.gz
CC-MAIN-2021-43
351
4
https://www.csun.edu/science-mathematics/biology/biology-careers
code
There are many excellent websites devoted to career exploration and preparation, as well as job searching in biological fields. Links to many relevant websites are listed below. This is not an exhaustive list, however, so keep in mind that many of these websites may also have relevant links. There’s lots of information out there; go for it! You may also want to see what the CSUN Career Center has to offer.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00050.warc.gz
CC-MAIN-2024-10
411
1
https://top.gg/bot/491769129318088714
code
Statbot: Server stats bot ★ Dashboard & graphs ★ Channel counters w/ members, online, goals, clocks, statistics ★ Roles using serverstats > levels. Statbot is the best tool when it comes to keeping a pulse on your community, its health, and its growth! Some things you will love about Statbot: - ★ The most advanced, completely customizable channel counters, like member counts, goals, and clocks! - ★ Tracking of individual members' message, voice, and game activity! - ★ Automagically give and remove roles based on activity over time! - ★ Responsive web dashboard for your server insights, fit for devices of all shapes and sizes! (Try it out below) - ★ Top-of-the-line support team and high-frequency updates! - ★ An experience you can customize to best suit your needs! DEMO THE DASHBOARD AT https://statbot.net. - Historical data kept for as long as you need it. No more worrying about losing your stats after X amount of days. - No setup for message and voice tracking. Minimal for gaming/application tracking. - Tracking of individual members' message, voice, and even game and other application activity with little to no effort. - Give roles based on this data with Statroles: like leveling, but better in every way! Maintain constant activity or find inactive members by having the bot automatically give and remove roles based on their activity within a period of time. Averaging and top-based options coming soon! - Easily see and track your member growth with counts and dates. - Responsive dashboard that delivers your stats to you, from your desktop to the smallest smartphone. - In-depth "Drilldown" dashboards to see pinpoint data on members and channels. - Many configuration options to get only the stats you want, with many more in the pipeline. - See online, DND, idle, and offline counts. - Always exciting, with a fantabulous support team and more frequent updates than any other bot of this size. - Choose your own upgrade. 
No need to worry about getting roped into things you don't want. - Storing all this data isn't cheap, but Statbot offers a one-of-a-kind build-your-own premium experience. Never pay for what you don't want. Here are just a few of the commands you'll use often! s?help: Straightforward, no-fuss help menu! s?stats server: Overall server statistics. s?top: Top text/voice activity for members and channels. s?user: A specific user's overall server stats. s?channel: An overview of the server's channel stats or a specific channel's stats. s?messages: Stats and a graph of overall server text activity, or of a specific user's. s?voice: Stats and a graph of overall server voice activity, or of a specific user's. s?members: Stats and a graph of the server's memberflow (growth). - And much more!
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178389472.95/warc/CC-MAIN-20210309061538-20210309091538-00104.warc.gz
CC-MAIN-2021-10
2,738
32
https://community.atlassian.com/t5/Jira-Core-questions/Issue-detail-view-not-showing-any-fields/qaq-p/90811
code
When I click on an issue from the kanban board, structure, or the issue search page, it opens up the detailed view of the issue. It used to show all relevant information (description, fix versions, relates to, etc.). Now it only shows the structure part, with the agile board link and HipChat button on the right-hand side. Everything else isn't showing up at all. This is happening on all projects. Screenshot of what I mean: http://screencast.com/t/TBqPLVJnBc What am I missing? I must have changed some setting but can't find anything anywhere. Thanks. EDIT: also thought I would mention that clicking the edit button for the issue shows all fields and I can change description, versions, labels, etc. They're just not showing in the detail view. This actually somehow magically solved itself and I have no idea why. View my Stack Overflow question, where I actually got some answers, for some more insight and some ideas on what the problem could have been (none seemed to work for me, but they could for anyone else reading this!)
s3://commoncrawl/crawl-data/CC-MAIN-2018-09/segments/1518891816370.72/warc/CC-MAIN-20180225110552-20180225130552-00479.warc.gz
CC-MAIN-2018-09
1,539
9
https://rd-alliance.org/pastevents.html?page=16
code
3 takeaways (and an interesting blog post) about the "Open Science 2020: Harmonizing Current Open Access practices with H2020 Guidelines" event in Pisa The Research Data Alliance invites you to join the 3rd RDA Plenary Meeting, in Dublin, Ireland. The 3rd Plenary concentrates on the theme ‘The Data Sharing Community: Playing YOUR part’. The program contains a mixture of keynotes, panels, networking, Working and Interest Groups as well as ‘BoF’ sessions on topics ranging from agriculture to particle physics, and from humanities to bioinformatics. Big Data Interoperability Framework Workshop: Building Robust Big Data Ecosystem 5-6 February 2014, New Delhi, India The Centre of Excellence for Digital Preservation, C-DAC, India and Alliance for Permanent Access (APA), are pleased to announce the APA International Conference on Digital Preservation and Development of Trusted Digital Repositories to be held at New Delhi, India. Supporting researchers in exploring and examining digitised artefacts presents many challenges in terms of understanding each researcher's needs, performing appropriate manipulation of and uplift from content, and in presenting a suite of useful research tools to facilitate exploration. This workshop will delve into these Digital Humanities challenges by examining the approaches taken in the CULTURA project (cultura-project.eu) to tackle the issues of: A town hall meeting at the American Geophysical Union Conference in San Francisco explores how RDA can both benefit and learn from the Earth science community.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583662124.0/warc/CC-MAIN-20190119034320-20190119060320-00043.warc.gz
CC-MAIN-2019-04
1,559
9
https://papierlogik.com/content/human-interface-devices
code
This section provides application examples of the previous courses, detailing what can be built using paper-based electronics and open-source software. Paper-based sensors can be used, for instance, to detect, quantify, and sometimes even measure with some precision a given event, contact, or movement. Check out these guides for more details. One can also build controllers or Human Interface Devices that use sensors to communicate with smart devices, such as Digital Music Instruments that send control events to generate or control computer music. This is the most inspiring application to start with, isn't it?
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573184.25/warc/CC-MAIN-20190918044831-20190918070831-00070.warc.gz
CC-MAIN-2019-39
620
3
https://github.com/http4s/http4s/pull/2597
code
Optimize mutable iterations while encoding data #2597
rossabaker left a comment
I added a crude benchmark to convince myself. Would be better if it used different sizes, but I was in a hurry. But it looks like you're right: cleaner and a little faster. Nice work. If you have an urgent need for the extra performance, I can cherry-pick this back to series/0.20. There's also a bugfix PR open, so we'll be doing a maintenance release soon. If it's just nice to have in the long run, it'll be released in the 0.21 series, and can be tried now with 0.21.0-SNAPSHOT.
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027316718.64/warc/CC-MAIN-20190822022401-20190822044401-00130.warc.gz
CC-MAIN-2019-35
720
6
https://wakkatu.github.io/dhero/en/hero/ESMERALDA.htm
code
Esmeralda and her trusted goat, Djali, take a stand with their allies on the battlefield, fighting back against their enemies. "Then it appears we've crowned the wrong fool. The only fool I see is you!" Esmeralda bangs her tambourine and throws her fist in the air, rallying allies. Rallied allies are Energized for 10 seconds and their attack speeds are increased by 140% for 12 seconds. Allies Energized by Esmeralda gain 150 Energy each time they Basic Attack. Esmeralda can't use this skill if she has no allies. The Energize has a chance to fail on allies above level <level>. After the first 3 seconds of each wave, Esmeralda disappears in a puff of smoke, teleports to the back line, and switches to a ranged Basic Attack where she deals 500k + 5sp bonus damage with her Basic Attacks. Djali appears in Esmeralda's place with 6 stacks of Hardy and attacks enemies until his HP reaches 0. Djali has 1000k + 10sp HP and 1000k Basic Damage. Esmeralda sprays fire at enemies in front of her, dealing 1200k + 12sp damage initially and 800k + 8sp damage per second for 15 seconds to enemies hit. Esmeralda gains 250 Energy each time an ally is Silenced. When allies are Silenced, Esmeralda heals them for 700k + 10sp HP. While Esmeralda or her allies are Silenced and if they are healed by Esmeralda, they also receive 95% of the heal as a Shield that lasts 10 seconds. After Esmeralda uses "Smoke Switch", Djali Distracts enemies for 10 seconds, causing all enemies to target him. The Distract has a chance to fail against enemies above level <level>. Additional Stat Boosts: "Breathing Fire" Flames Blind Enemies Attack Faster When Djali in Fight
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224654097.42/warc/CC-MAIN-20230608035801-20230608065801-00323.warc.gz
CC-MAIN-2023-23
1,695
15
https://www.unity-studios.com/cases/cartoon-network/
code
Cartoon Network are the creators of popular children's shows such as "Adventure Time", "Gumball", and "Regular Show". The universes they create are wacky and colorful. It is their trademark, and millions of people love it. To keep their fans engaged in more ways, they commissioned the creation of "Fusion Fall", a massively multiplayer online game (MMO) based on the characters from Cartoon Network and played by millions of kids all over the world, developed by Grigon Entertainment. Until the time of Unity Studios' involvement, "Fusion Fall" was a downloadable client application. This is the typical choice for such games; however, it does limit the user base, as not all potential users may be able to download a game onto the PC they are using, for instance if the machine is in a library. Grigon Entertainment therefore decided to evolve the game from a downloadable client application into a browser-based game, using Unity 3D. Unity Studios' dedication to delivering a good product meant sending a team of specialists to work alongside the game development team in South Korea for over six months. To allow the Korean development team to continue development using their existing technology and production pipeline, a setup of fully automated project-conversion build servers and asset-conversion code was developed, while the Unity-based project was developed in parallel. After many months of on-site work, Unity Studios continued development and support remotely from their office in Aarhus, Denmark. After the launch of the game, Unity Studios remained involved with the development of additional game functions and features.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890092.28/warc/CC-MAIN-20200706011013-20200706041013-00242.warc.gz
CC-MAIN-2020-29
1,682
4