url: string (13 – 4.35k chars)
tag: string (1 value)
text: string (109 – 628k chars)
file_path: string (109 – 155 chars)
dump: string (96 values)
file_size_in_byte: int64 (112 – 630k)
line_count: int64 (1 – 3.76k)
https://www.freelancer.co.id/projects/PHP-ASP/Code-for-Payment-processing-using/
code
I am looking for someone to write code to perform payment processing using First Data Global Gateway (aka LinkPoint). The code must be written in VB.Net and will be used on an ASP.Net website. The code should use their Web Services based API. I need functions to charge a credit card, process a check, and perform refunds. Sensible use of OOP principles appreciated. You can read all the details in their Code Wrappers and Manuals @ [url removed, login to view] (See FD Global Gateway Web Service API v1.8 for details). Signing up for a test/staging account is free and should make testing relatively easy.
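The kind of OOP structure the poster asks for can be sketched roughly as below, in Python rather than VB.Net for brevity. The `PaymentGateway` interface and `FakeGateway` test double are hypothetical illustrations of the separation between site code and gateway, not First Data's actual API (see their Web Service API manual for the real calls):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class ChargeResult:
    approved: bool
    transaction_id: str
    message: str = ""

class PaymentGateway(ABC):
    """Abstract gateway, so site code never depends on one vendor's API."""

    @abstractmethod
    def charge_card(self, card_number: str, expiry: str, amount_cents: int) -> ChargeResult: ...

    @abstractmethod
    def refund(self, transaction_id: str, amount_cents: int) -> ChargeResult: ...

class FakeGateway(PaymentGateway):
    """Test double standing in for the real web-service client."""

    def __init__(self):
        self._next_id = 0
        self.charges = {}  # transaction id -> remaining refundable amount

    def charge_card(self, card_number, expiry, amount_cents):
        self._next_id += 1
        txn = f"TXN-{self._next_id}"
        self.charges[txn] = amount_cents
        return ChargeResult(True, txn)

    def refund(self, transaction_id, amount_cents):
        # Refuse refunds that exceed what is left of the original charge.
        if self.charges.get(transaction_id, 0) < amount_cents:
            return ChargeResult(False, transaction_id, "refund exceeds charge")
        self.charges[transaction_id] -= amount_cents
        return ChargeResult(True, transaction_id)
```

A test double like this also makes the "free test/staging account" step easier: the website code is written against `PaymentGateway` and the vendor-specific client is swapped in later.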
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864364.38/warc/CC-MAIN-20180622065204-20180622085204-00319.warc.gz
CC-MAIN-2018-26
601
4
https://mail.haskell.org/pipermail/haskell-cafe/2008-September/048135.html
code
[Haskell-cafe] pure Haskell database manlio_perillo at libero.it Thu Sep 25 17:09:11 EDT 2008 Graham Fawcett wrote: > On Wed, Sep 24, 2008 at 5:17 PM, Manlio Perillo > <manlio_perillo at libero.it> wrote: >> I need a simple, concurrency-safe database, written in Haskell. >> A database with the interface of Data.Map would be great, since what I need >> to do is atomically increment some integer values, and I would like to avoid >> using SQLite. > If that's the entire requirement, and you're looking for something > really fast, perhaps you could use a shared-memory region between the > processes (your keys would map to addresses in shared memory), and use > a compare-and-set algorithm to handle the atomic increments. > If you're on Intel/Itanium, I believe there's a CMPXCHG instruction > that will do atomic compare-and-set on a memory address, and I'm not > sure you could get much faster than that. :-) I have an early draft of this type of database (written in D). Operations on integers use CMPXCHG, and for other operations a simple spin lock (implemented following the implementation in Nginx) is used. The problem is that it is a simple shared array! This means that you need to know the data index in the array in advance; for some types of application this is true. I was trying to implement that database for use in my HTTP Digest authentication, in order to improve security, using the "not so restful" support in RFC 2617.
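The compare-and-set retry loop Graham describes can be sketched in Python. The `CASCell` class below only simulates the atomicity of a hardware CMPXCHG (using a lock as a stand-in for the instruction); given that primitive, the increment itself follows the classic lock-free retry pattern:

```python
import threading

class CASCell:
    """Simulates one hardware compare-and-set word (like CMPXCHG).
    The internal lock models only the instruction's atomicity."""

    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_set(self, expected, new):
        # Atomically: if the cell still holds `expected`, store `new`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def atomic_increment(cell, delta=1):
    # Classic CAS loop: read, compute, try to swap; retry on contention.
    while True:
        old = cell.load()
        if cell.compare_and_set(old, old + delta):
            return old + delta

cell = CASCell(0)
threads = [threading.Thread(target=lambda: [atomic_increment(cell) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(cell.load())  # 4000: no increments lost despite contention
```

This mirrors the shared-array design from the thread: each key maps to a fixed cell, which is exactly why the index must be known in advance.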
s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889473.61/warc/CC-MAIN-20180120063253-20180120083253-00206.warc.gz
CC-MAIN-2018-05
1,490
26
https://support.travelpayouts.com/hc/en-us/articles/360003418472-How-to-open-the-JavaScript-console
code
This post describes how to open the console in different browsers and take a screenshot of the information it shows. In Chrome: - Use Ctrl+Shift+J (for Windows/Linux) or Cmd+Opt+J (for Mac) - If DevTools is already open, select the Console tab In Firefox, the console is called the "web console" and is part of the developer tools. To open the web console: - Use Ctrl+Shift+K (for Windows/Linux) or Cmd+Opt+K (for Mac) - Or select Web development > Web console from the browser menu In Safari, the console is called the "web inspector". To open the web inspector after enabling it: - Select Development menu > Show Web Inspector - Use Ctrl+Alt+I (for Windows) - Use Cmd+Opt+C (for Mac) If you do not have this menu item or the web inspector doesn't start, go to the browser settings, select the Advanced panel and tick the "Show Develop menu in menu bar" box. Depending on your computer, the developer tools can also be opened by pressing F12. To open developer tools from the menu: - Open the menu (three-dot icon) - Select Developer tools
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816586.79/warc/CC-MAIN-20240413051941-20240413081941-00331.warc.gz
CC-MAIN-2024-18
1,010
16
http://insideoutsource.blogspot.com/2011/06/its-major-award.html
code
But... silence seems to be a useful strategy for blogging. Even with a dearth of new posts, I just won a major award. (Props to you if you get the movie reference...) I was recently named one of the top bloggers in the shared services and outsourcing industry, as selected by the fine folks at The Shared Services and Outsourcing Network. (I posted about them some time ago, here.) That confirms what I hoped, which was that this pile of information that I wrote up when I was working on my (stalled and probably abandoned) book would be useful to people... Mission accomplished. At any rate, Inside Outsource has been honored as one of the top 15 blogs in the industry. And, for your reading pleasure, there is now a great list containing a link back to me, as well as to 14 other great blogs (many with content written, you know, recently) here, at SSON. Read and enjoy. Also, Major Award:
s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267859923.59/warc/CC-MAIN-20180618012148-20180618032148-00179.warc.gz
CC-MAIN-2018-26
891
7
http://www.nwhikers.net/forums/viewtopic.php?t=17899&view=previous
code
Joined: 04 Feb 2015 Posts: 14 | TRs |I'm thinking of heading up there for an overnight. Has anyone been up there lately? What are the snow conditions like? Are we talking trail buried in snow, difficult routefinding, patches of snow, ice axe/crampons, camping on snow, etc? It looks like it tops out around 6000 feet, so I imagine there's at least some snow.
s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583659654.11/warc/CC-MAIN-20190118005216-20190118031216-00573.warc.gz
CC-MAIN-2019-04
358
4
https://husting.com/2009/10/23/sharepoint-2009-conference-day-1/
code
I’m enjoying being in Las Vegas – attending the Microsoft SharePoint 2009 conference. I’m going to present my personal view of this event, not in any way associated with Microsoft, its partners or employees. 🙂 I’ll try to keep up with events here; however, due to my plan to attend as many sessions and events as I can, I may miss some posts. Before coming: It was very surprising to see that this event was sold out despite the current economic climate. I can imagine Microsoft was surprised as well. 1st day at the conference: As expected, the topic of this conference is SharePoint 2010. There was NO official announcement on release dates for SharePoint, Visual Studio, and Office. Interesting announcements: Visual Studio 2010 Beta 2 is available on MSDN. Microsoft Office 2010 beta was shown during presentations. The question remains whether VS 2010 Tools for Office 2010 are finally available. Microsoft Office SharePoint service has been rebranded as Microsoft SharePoint Foundation to keep in line with WCF (Communication) and WPF (Presentation). Overall feedback: Beginning with the keynote (presented by Steve B), I have a pleasant feeling that this event was targeted at developers first, IT personnel second and business users third. The keynote was technical enough to grab my attention. SharePoint 2010 seems like a step forward in the Microsoft Office family. An especially interesting aspect was the introduction of the Microsoft Ribbon Bar to SharePoint. It is an AJAX-driven user control in SharePoint 2010. Finally, Microsoft introduced development tools for SharePoint that are actually close to industry standards. First, developers will be able to host MOSS 2010 on their client machines: MOSS 2010 is supported on Vista and Win 7 64-bit only. Visual Studio 2010 will be able to deploy/debug and step through the code on developer machines the same way it does for Win Apps. Functionality similar to ASP.NET trace, with additional features, will be available for SharePoint 2010.
Extended Results – Dev Manager
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400220495.39/warc/CC-MAIN-20200924194925-20200924224925-00040.warc.gz
CC-MAIN-2020-40
2,023
11
https://freeonlinecourse.us/complete-guide-for-learning-english-from-hindipart-1/
code
Complete Guide for learning English from Hindi (Part 1) Get the Complete Guide for learning English from Hindi (Part 1) course for free. Learn English from Hindi. Learn the fundamentals of the language: grammar, vocabulary, words, letters, and sentences. All in one. This course is absolutely free. No coupon is required to avail this course. What you’ll learn - English – writing, reading and speaking skills (with a deep understanding of the language and its grammar) - Basics of language
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057882.56/warc/CC-MAIN-20210926144658-20210926174658-00569.warc.gz
CC-MAIN-2021-39
477
5
http://www.shiffman.net/2005/11/
code
I’m working on a command line emulator for Processing. . . This is somewhat of a useless experiment, and there’s a lot to be done to make it into a fully functional library. . . but at some point, i’ll finish it. . . Went to see Mark Napier’s show, Empire, at bitforms gallery yesterday. Mark is teaching a new class at ITP next semester using LWJGL (“lightweight java game library”) as means for exploring OpenGL as an artistic medium. Should be a great extension of the programming courses taught with processing at ITP. . . I was interested in Mark’s use of a “springy” architectural form as a paintbrush and started messing around with a “pendulum-y” paintbrush. . . Anyway, the show is great, go check it out! So, I’m working on a project at ITP that involves viewer comments on video that are tied to specific points of time in the video. It needs a new title, something less, i dunno, boring. . . I hope to post some videos and use this system at this site at some point. . . I hope a lot of things, though. . . Here is a video taken at Siggraph of some of my work presented there in 2004. . . . it’s old, sigh, someday i’ll have some time to show something new. . . I’ve been getting some e-mails asking about the ol’ ShakespeareMonkey@Home project inquiring about sample source code, etc. My Nature of Code course covers a brief introduction to genetic algorithms and includes a sample applet that evolves the phrase “To be or not to be.” This was made with Processing — I’ve been toying with the idea of generating a series of “how-to-program-with-processing” videos and vlogging them from here, we’ll see if I can find the time. . . Ok, so I’ve discovered the magic of wordpress and redone my site. . . who knows if it’s any better than before. But here it is. . . enjoy!
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368707773051/warc/CC-MAIN-20130516123613-00069-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
1,832
10
https://forum.openwrt.org/t/lede-adblock-noob/9360
code
I believe they do update at boot, although I'm not positive about that. If you'd like to update with a daily cron job instead of rebooting, you can just go to System > Scheduled Tasks in LuCI and enter something like this - 0 3 * * * /etc/init.d/adblock reload The developer for adblock, @dibdot, is really good about helping and answering questions; here's the thread for adblock help - Adblock support thread Most of the house loves this adblock feature, it's great! Thanks for your efforts, it's the best upgrade I've done so far! Believe it or not, my wife likes doing Swagbucks, earning vouchers for watching adverts and doing surveys. Turns out the adblock actually gets her account suspended for 4 hours each time she tries it. It's been a subject of some hilarity, but unhappy wife = unhappy life, so... I've made her use a VM for Swagbucks, which gets re-cloned on a regular basis to prevent malware infesting her PC. This VM has a static IP address reservation on my router. I would like to allow ads to/from just this VM, but continue to block ads to all our other devices. I'm hopeful this is possible; has anyone got a simple guide to help me configure it? Edit... I've forced the VM to use 18.104.22.168 for its DNS. DHCP gives out OpenDNS. What I did was create a new LAN2 interface, on a separate subnet with a new VLAN (vlan2). Then I created 2x new wifi networks on my router, one for each frequency (I now have 4 wifi networks, 2 adblocked and 2 not). Then, under the adblock section, I've applied the adblock only to the original LAN port: services|adblock|Restrict interface trigger to certain interface(s): br_lan Now only br_lan and its associated wifi networks get adblocked; br_lan2 is not. The adblock trigger definition controls only the restart behaviour - nothing more. The crucial thing for adblocking is the DNS instance used; probably you now use 2 instances, one with and the other without adblock. So to clarify then: I now have 4 wifi networks, 2 on each frequency. 
Wifi network_1 on both frequencies (radio0 and radio1) is associated with "br_lan". To this VLAN (eth0.5) I've allocated a DHCP address space of 22.214.171.124/24 and DNS IP addresses of 126.96.36.199 and 188.8.131.52. The adblock has "br_lan" in the LuCI service "Restrict interface trigger to certain interface(s)" box as its only target. Wifi network_2 on both frequencies is associated with "br_lan2". To this VLAN (eth0.2) I've allocated a DHCP address space of 192.168.2.1/24 and DNS IP addresses of 184.108.40.206 and 220.127.116.11. Seems to be working OK... I'm still a noob at this, so I'm open to ideas if you think I could have done this better.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104576719.83/warc/CC-MAIN-20220705113756-20220705143756-00507.warc.gz
CC-MAIN-2022-27
2,643
17
https://www.plays.tv/video/5912149b8a9f41f679/the-rakan-dash
code
jrhudy1234 - The Rakan dash jrhudy1234: @leagueofnema_YT Sure. Feel free to use it! League_Of_Fails: @jrhudy1234 Thx Bro! Montage ready leagueofnema_YT: @jrhudy1234 Thank u for sharing with me https://www.youtube.com/watch?v=tLPRwUUKKVc i hope you will enjoy the video.
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187825227.80/warc/CC-MAIN-20171022113105-20171022133105-00620.warc.gz
CC-MAIN-2017-43
410
9
http://proscada.ru/engman.en/3.2.4_primary_port_number.htm
code
This is a TCP port through a firewall. Leave the value at 0 if you are not using a firewall. If using a firewall, three TCP ports are required (you cannot specify the same port number for both Primary and Secondary). The TCP ports used during installation of the Project Node appear here by default. These are not necessarily the TCP ports of your SCADA node. TCP ports are used with a firewall and are optional. If not using a firewall, leave these fields with the default value (0). Note that 0 means the default port numbers are used (4592 and 14592). The TCP ports you enter here MUST MATCH THE TCP PORTS ENTERED DURING SOFTWARE INSTALLATION of the SCADA Node and Project Node. They must also match the TCP ports opened by your System Administrator for your use. You must reinstall the software (or edit the INI file) if you must use different TCP ports than you specified during software installation. The TCP ports are specified when creating a new node. They are edited here only if you are moving a project to another PC or moving the SCADA node to another PC. A firewall restricts the flow of data onto a network; it is a method of network security. Many corporations use firewalls. If your connection is through a firewall, you will need to have your network administrator assign two TCP ports for you to use the DRAW or VIEW features in WebAccess. The Primary Port is used for file transfers and the Secondary Port is used for live data. This only applies if you are connecting through the firewall. If all your WebAccess clients and SCADA nodes are inside the firewall, you can ignore this. Http Port = 80 (used to serve the ASP Web Pages) Primary TCP Port = 4592 (used for file downloads and uploads) Secondary TCP Port = 14592 (used for real time data).
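The 0-means-default convention above (defaults 4592 and 14592, Primary and Secondary must differ) can be sketched in Python; `effective_ports` and `port_reachable` are hypothetical helpers for illustration, not part of WebAccess:

```python
import socket

def effective_ports(primary=0, secondary=0):
    """Apply the manual's convention: 0 means 'use the default port'."""
    primary = primary or 4592       # default Primary TCP Port (file transfers)
    secondary = secondary or 14592  # default Secondary TCP Port (live data)
    if primary == secondary:
        raise ValueError("Primary and Secondary TCP ports must differ")
    return primary, secondary

def port_reachable(host, port, timeout=2.0):
    """Best-effort TCP connect test; a firewall that silently drops
    packets shows up here as a timeout (raised as OSError)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A check like `port_reachable(scada_host, 4592)` from a client machine is a quick way to confirm the administrator actually opened the ports before suspecting the node configuration.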
s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368608.66/warc/CC-MAIN-20210304051942-20210304081942-00247.warc.gz
CC-MAIN-2021-10
1,765
9
http://www.lisettedejongehoekstra.com/research/
code
My research mainly focuses on children’s hand movements, gestures and speech when they learn about Science & Technology tasks. Inspired by complex dynamical systems theory and ecological psychology, I research how children’s hand movements and speech change over time, within a task. Furthermore, I investigate how learning emerges from children’s interactions with their physical and social environment. I research both individual and dyadic learning. I’m an Assistant Professor at the Psychology Department of the University of Groningen. De Jonge-Hoekstra, L., Van der Steen, S., Van Geert, P., & Cox, R. F. A. (2016). Asymmetric Dynamic Attunement of Speech and Gestures in the Construction of Children’s Understanding. Frontiers in Psychology, 7, 1–19. http://doi.org/10.3389/fpsyg.2016.00473 Cox, R. F. A., Van der Steen, S., Guevara, M., De Jonge-Hoekstra, L., & Van Dijk, M. (2016). Chromatic and Anisotropic Cross-Recurrence Quantification Analysis of Interpersonal Behavior. In Recurrence Plots and Their Quantifications: Expanding Horizons (pp. 209–225). Springer International Publishing.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679099281.67/warc/CC-MAIN-20231128083443-20231128113443-00229.warc.gz
CC-MAIN-2023-50
1,114
3
https://hackaday.com/2016/01/22/link-trucker-is-a-tiny-networking-giant/
code
If you’re a networking professional, there are professional tools for verifying that everything’s as it should be on the business end of an Ethernet cable. These professional tools often come along with a professional price tag. If you’re just trying to wire up a single office, the pro gear can be overkill. Unless you make it yourself on the cheap! And now you can. [Kristopher Marciniak] designed and built an inexpensive device that verifies the basics: - Is the link up? Is this cable connected? - Can it get a DHCP address? - Can it perform a DNS lookup? - Can it open a webpage? What’s going on under the hood? A Raspberry Pi, you’d think. A BeagleBoard? Our hearts were warmed to see a throwback to a more civilized age: an ENC28J60 breakout board and an Arduino Uno. That’s right, [Kristopher] replicated a couple-hundred-dollar network tester for the price of a few lattes. And by using a pre-made housing, [Kristopher]’s version looks great too. Watch it work in the video just below the break. Building an embedded network device used to be a lot more work, but it could be done. One of our favorites is still [Ian Lesnet’s] webserver on a business card from way back in 2008, which also used the ENC28J60 Ethernet chip.
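The tester's last two checks (DNS lookup, webpage fetch) are easy to sketch in Python for comparison; link state and DHCP need OS-specific calls, so this sketch covers only the portable steps, and `report` is a hypothetical formatter standing in for the device's display output:

```python
import socket
import urllib.request

def run_dns(name="example.com"):
    """Can we resolve a hostname? (the tester's DNS-lookup step)"""
    try:
        socket.gethostbyname(name)
        return True
    except OSError:
        return False

def run_http(url="http://example.com/"):
    """Can we fetch a web page? (the tester's final step)"""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def report(results):
    """Format results one line per check, like the device might show them."""
    return "\n".join(f"{name.upper():5} {'OK' if ok else 'FAIL'}"
                     for name, ok in results.items())
```

Running `print(report({"dns": run_dns(), "http": run_http()}))` on a laptop gives roughly the same pass/fail readout the Arduino build produces in hardware.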
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474581.68/warc/CC-MAIN-20240225035809-20240225065809-00162.warc.gz
CC-MAIN-2024-10
1,246
8
https://www.strongfirst.com/community/threads/mobility-for-a-man-with-a-desk-job.15376/page-2
code
If you want a simple program, covering all the bases, that is not too taxing, I would recommend the Flexible Steel routines. Hello everybody, This is my first post, but I have been reading this forum for a couple of months now, and thank you all for contributing to this amazing resource. I am still a beginner, having discovered the KB world through a one-day StrongFirst KB course here in France this year. I am progressing slowly towards 24kg bells in S&S, and just doing the warmup has increased my overall mobility, which had been greatly reduced by various desk jobs. Still, I think I could do more in terms of mobility/flexibility exercises. Maybe a "program" I could use daily without too much thinking and with clear explanations (videos would be awesome). I am curious about the Flexible Steel DVD mentioned above. Could someone give us some more details, the website being rather elusive on the "format" (how many programs, how long are they, is there a "programming" of sorts, etc.)? Also, any user feedback would be useful! Anyway, I could just pull the trigger and see the content (as it is rather inexpensive), but the thing is, I'm hesitating with another mobility resource, "Mobility Secrets" from Hector Gutierrez Jr (Mobility Secrets™). I've seen his name quite a few times here, so I figured it was good quality but, once again, hard to tell from the description alone... So if someone could share some "insights" on it I would be very grateful! Also, I have duly noted the book from Kelly Starrett (Deskbound), for which the samples on Amazon seem promising. If someone has other ideas as well, don't hesitate to share. Thanks a lot for your advice.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400250241.72/warc/CC-MAIN-20200927023329-20200927053329-00214.warc.gz
CC-MAIN-2020-40
1,665
7
https://cydomedia.com/careers-in-web-development/
code
Back in the day, the internet was a facility used mainly for sharing and linking research materials. These days, this technology has become versatile, with applications in marketing, medical and health science, blogs, and entertainment, among other fields. With the rise in the use of the internet, the number of web pages being made has increased, which means more people are leaning towards a web development career. In fact, the total number of websites around the globe is constantly rising with every tick of the clock. At the start of 2020, there were around 1.5 billion online websites. As of November 2020, this number has risen to over 1.8 billion, an increase of 0.3 billion in just nine months, or an average rate of around 1 million websites being developed per day. Whether you are a front-end developer or a back-end developer, your skills will continue to be in high demand considering the number of websites that web development services continue to create. This article will explore the three different types of web developers out there - front-end, back-end, and full-stack - and how each of them fares in terms of career growth and industry demand. A front-end developer primarily deals with the user interface (UI) and is the main person responsible for developing the website's UX. This includes the color scheme of the webpage, the content, the tags, the layout, and everything that a user interacts with while visiting a website. The following are the major aspects that a front-end web developer should be mindful of: In short, the duty of a front-end developer is to combine a little bit of coding with content writing and amazing graphics to make a website that is absolutely sensational. Generally, every website is linked to a server with a database that enables the webpage to exist. Without the back-end, there can be no proper front-end of the website.
Webpages with no back-end development are called static sites, but they are suitable only for small businesses. Back-end web development demands a strong knowledge of Python, Java, Ruby, .NET, and PHP. Maintaining the back-end of a website is quite strenuous compared to maintaining the front-end, because of the complexity of the programming languages involved and the larger volume of code. The key duties of a back-end developer are as follows: In a nutshell, back-end development demands contemplation and critical thinking along with technical knowledge. That is also why it pays more than front-end development. Below is a comparison of the two aspects of web development careers to give our readers a clear view of the two sides:

| Front-end web development | Back-end web development |
| --- | --- |
| The front-end deals with the client side. | The back-end manages the server side. |
| This is a comparatively easier job requiring less expertise. | It requires sheer focus and keen consideration to do the task. |
| Front-end developers focus on drawing more and more clients towards the website. | The duty of back-end developers is to work out methods that ensure the efficient performance of a webpage. |
| The market requirement for front-end developers is comparatively lower. | Back-end developers are always on call. |

Full-stack developers know both front-end and back-end development. Diverse knowledge of these two sides opens doors to more opportunities for full-stack developers. It is the job of a full-stack developer to supervise every aspect of web design, from the loading time of a webpage to the overall architecture of the website. Getting the hang of the two fields takes time and experience in order to fully grasp the fundamentals. Full-stack developers are multitaskers with a wide range of job responsibilities.
Some of these duties are as follows: Moreover, the job of a full-stack developer calls for someone proficient in both communication and coding. As mentioned earlier, the duties are a combination of those of front-end and back-end developers. Web development is currently one of the most in-demand careers out there. Whether you are a front-end developer or a full-stack developer, the opportunities are plentiful. The monetary advantage you can get out of these careers depends solely on how well you manage to advance your skillset and make it suit the needs of the currently prevalent market trends.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103271763.15/warc/CC-MAIN-20220626161834-20220626191834-00756.warc.gz
CC-MAIN-2022-27
4,442
23
http://msdn.microsoft.com/en-us/library/microsoft.build.tasks.windows.getwinfxpath
code
This API supports the .NET Framework infrastructure and is not intended to be used directly from your code. Implements the GetWinFXPath task. Use the GetWinFXPath element in your project file to create and execute this task. For usage and parameter information, see GetWinFXPath Task. Assembly: PresentationBuildTasks (in PresentationBuildTasks.dll) The type exposes the following members.

| Member | Description |
| --- | --- |
| BuildEngine | Gets or sets the instance of the IBuildEngine object used by the task. (Inherited from Task.) |
| BuildEngine2 | Gets the instance of the IBuildEngine2 object used by the task. (Inherited from Task.) |
| BuildEngine3 | Gets the instance of the IBuildEngine3 object used by the task. (Inherited from Task.) |
| BuildEngine4 | Gets the instance of the IBuildEngine4 object used by the task. (Inherited from Task.) |
| HostObject | Gets or sets the host object associated with the task. (Inherited from Task.) |
| Log | Gets an instance of a TaskLoggingHelper class containing task logging methods. (Inherited from Task.) |
| WinFXNativePath | Infrastructure. Gets or sets the path for the native WinFX runtime. |
| WinFXPath | Infrastructure. Gets or sets the path for the WinFX runtime. |
| WinFXWowPath | Infrastructure. Gets or sets the path for the WoW WinFX runtime. |
| Equals(Object) | Determines whether the specified object is equal to the current object. (Inherited from Object.) |
| Execute | Infrastructure. Executes a task. (Overrides Task.Execute().) |
| GetHashCode | Serves as a hash function for a particular type. (Inherited from Object.) |
| GetType | Gets the Type of the current instance. (Inherited from Object.) |
| ToString | Returns a string that represents the current object. (Inherited from Object.) |

Supported platforms: Windows 8, Windows Server 2012, Windows 7, Windows Vista SP2, Windows Server 2008 (Server Core Role not supported), Windows Server 2008 R2 (Server Core Role supported with SP1 or later; Itanium not supported). The .NET Framework does not support all versions of every platform.
For a list of the supported versions, see .NET Framework System Requirements.
s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368699273641/warc/CC-MAIN-20130516101433-00020-ip-10-60-113-184.ec2.internal.warc.gz
CC-MAIN-2013-20
2,038
20
https://pathema.jcvi.org/publications/simple-algorithm-infer-gene-duplication-and-speciation-events-gene-tree
code
A simple algorithm to infer gene duplication and speciation events on a gene tree Zmasek CM, Eddy SR When analyzing protein sequences using sequence similarity searches, orthologous sequences (that diverged by speciation) are more reliable predictors of a new protein's function than paralogous sequences (that diverged by gene duplication), because duplication enables functional diversification. The utility of phylogenetic information in high-throughput genome annotation ('phylogenomics') is widely recognized, but existing approaches are either manual or indirect (e.g. not based on phylogenetic trees). Our goal is to automate phylogenomics using explicit phylogenetic inference. A necessary component is an algorithm to infer speciation and duplication events in a given gene tree. This publication is listed for reference purposes only. It may be included to present a more complete view of a JCVI employee's body of work, or as a reference to a JCVI sponsored project.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510575.93/warc/CC-MAIN-20230930014147-20230930044147-00270.warc.gz
CC-MAIN-2023-40
977
4
https://codingprolab.com/product/cs110-fundamentals-of-computer-programming-assignment-1/
code
In this assignment you have to understand and implement the following concepts: • IF-ELSE conditioning • To develop skills for using if-else statements • To understand how to program conditional calculations • Microsoft Visual Studio 2010 or later In this assignment you will create a program that an employee can use to calculate his/her pension. The pension is calculated by applying some calculations to his/her last drawn salary. Details about salary: An employee's salary is divided into three parts. One part is the basic salary, the second part is the house rent and the third part is the old age allowance. These three combine to form an employee's total salary. 5% income tax and 7% provincial tax are deducted from the basic pay. The house rent and old age allowance are calculated after the deduction of tax. An employee who is less than 45 years of age gets no old age allowance. For an employee who is between 45 and 55, the old age allowance is 10%. For employees older than 55 the old age allowance is 15%. (CS110: Fundamentals of Computer Programming, page 3) Unmarried employees are not given any house rent; married employees get 15%. An employee's pension is calculated according to the following rules: 1. Basic pay is doubled and multiplied by the number of months in service. 2. House rent is multiplied by the months in service since marriage. 3. Old age allowance is multiplied by 3. 4. Total pay is multiplied by 2. 5. The totals of the above four are added together to compute the pension. Take the necessary input from a user and calculate her/his pension. You cannot ask the employee about the months in service, or the months in service since marriage. The employee's input should be a date and you need to calculate the months in your program. The output of the program must be very structured and detailed. It must not only show the pension but also show all the calculations under proper headings, step by step, so the employee can understand how the pension is computed.
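One reading of the five pension rules above, sketched in Python rather than the assignment's Visual Studio setting. Whether the percentages apply to basic pay before or after the tax deduction is ambiguous in the brief, so the after-tax reading here is an assumption; input handling and the date-to-months calculation are left out:

```python
def pension(basic, age, married, months_service, months_married):
    """Compute the pension from the last drawn salary, per the brief."""
    # 5% income tax + 7% provincial tax deducted from basic pay.
    net_basic = basic * (1 - 0.05 - 0.07)

    # Old age allowance by age bracket: <45 none, 45-55 10%, >55 15%.
    if age < 45:
        old_age = 0.0
    elif age <= 55:
        old_age = 0.10 * net_basic
    else:
        old_age = 0.15 * net_basic

    # House rent: 15% for married employees, none otherwise (assumed reading).
    house_rent = 0.15 * net_basic if married else 0.0
    total_pay = net_basic + house_rent + old_age

    part1 = 2 * basic * months_service   # rule 1: basic pay doubled x months in service
    part2 = house_rent * months_married  # rule 2: house rent x months since marriage
    part3 = old_age * 3                  # rule 3: old age allowance x 3
    part4 = total_pay * 2                # rule 4: total pay x 2
    return part1 + part2 + part3 + part4 # rule 5: sum of the four
```

For example, `pension(1000, 50, True, 120, 60)` works out to 250,384 under this reading (net basic 880, old age 88, house rent 132, total pay 1100).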
In this part of the assignment you learn to use math.h, and then you will create a console-based calculator. Create a calculator that works on the console. The user can enter the operands and an operator and gets the result back on the console. You can use math.h for scientific computations. For full credit, you should also include a Boolean calculator. For the highest marks you should include as many features in your calculator as you can. At the start of your program you should output all the operators that you have implemented and also the instructions for using your calculator. Your software should be complete in all aspects, to be usable by a non-technical user. Any assumptions that you make must be properly stated. You must do this work individually, but you can ask for help from the Lab Engineer. You cannot share your code with anyone or copy code. Plagiarism will result in zero marks. Submit only 1 zip file on the given LMS link which contains both the programs. You must include the source code files, not an exe or any other kind of file. Your file should be named asg1[YOUR FIRST AND LAST NAME].zip Always submit 1 day before the deadline to avoid any last-minute delays. Marks breakdown: 1. Working of the program: 50% 2. Code readability: 25% 3. Output structure and aesthetics: 25%
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474482.98/warc/CC-MAIN-20240224012912-20240224042912-00038.warc.gz
CC-MAIN-2024-10
3,341
50
http://archive.fabacademy.org/2017/fablabseoul/students/351/index.html
code
Fab Academy 2017 Do you want to know about me? Click here! principles and practices, project management 3D scanning and printing molding and casting networking and communications interface and application programming invention, Intellectual Property and Business models This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00313.warc.gz
CC-MAIN-2019-22
380
9
https://archive.nyafuu.org/bant/search/image/L7O6Rt8gAuNeML50uMLVfw/
code
All I do other than work is gym. And that argument started as I got to the gym yesterday. It really made my mood go downhill, and I left after only 35 minutes. I'm thinking of going for a walk up a hill Saturday morning, as it's been a while since I've explored nature, and nature does good stuff to my mind. Right now my stomach is grumbling but I just cannot eat because of all of this. I'm just gonna have water. Do you have snapchat so I can talk to you when I need advice or just to express myself, since I have huge difficulty doing that with people I know.
s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590348513321.91/warc/CC-MAIN-20200606124655-20200606154655-00494.warc.gz
CC-MAIN-2020-24
561
5
http://phillydotnet.org/stages/wynnewood/
code
In today's world of fast-moving customer needs and prioritization, organizations need to be able to adapt and change direction quickly and easily. For application development teams, having an agile process is key for keeping up in this rapidly changing environment. One of the most popular agile methodologies is Scrum. The first half of the day will focus on Scrum theory and how to use it in the real world with Visual Studio Online. The second half of the day will be an introduction to Git, how to work using a distributed source control system and how it differs from a centralized version control system such as Team Foundation Version Control. We will look at how to set up and use a local Git repository and how to integrate with GitHub and Visual Studio Online for code management, collaboration and code reviews. NOTE: Part I of DOUBLE LENGTH SESSION NOTE: Part II of DOUBLE LENGTH SESSION

We'll discuss improvements to the language providing developer productivity gains and better coding practices. In addition, we'll review the tools and libraries allowing the use of some of these features before support is available in current browser versions.

Come with your questions, concepts, issues and quandaries on mobile development, making your app successful, maximizing code reuse, getting, keeping, supporting and engaging millions of users with your apps from someone who has done it and is doing it every day.

This talk aims to cover all facets of developing, publishing and monetizing mobile apps for the Android platform, starting from developing your application using the Xamarin.Android platform and publishing apps in Google Play / Amazon App Store. From there we will dive into monetizing apps by implementing mobile ads with Google AdSense, then take a look at getting analytics to support your application using Xamarin Insights, and then at expanding your application's functionality with an integrated database as well as integrated back-end Web Services.
And finally we will discuss app integration with social networks like Facebook / Twitter – one of the most cost-effective ways to spread the word about your app. If you're an app developer that wants to make money, you will find many tools and ideas here to help you to push your app into the black on your balance sheet. Let one of the most consistently successful app producers on the planet show you how he makes his living.

You like baseball, you like MongoDB and you like .NET. Why not put them all together and achieve a zen-like state of being completely in touch with your data! This session will walk through some familiar and not-so-familiar baseball statistics and how you can crunch them using MongoDB's aggregation pipeline. We'll talk about MongoDB's aggregation pipeline, the different components of the pipeline, and how they can be used together to calculate some SABR metric statistics. This talk will be mostly code and will alternate between the MongoDB shell and code using the C# driver. This talk is updated from last year's with the latest MongoDB 2.0 C# driver along with updated stats and some new calculations. If time permits, we'll also look at some offline processing in order to calculate some of the more complicated statistics.

RabbitMQ is a great implementation of AMQP and can be used for many situations. In this session we'll explore some classic messaging patterns and how they can be expressed via RabbitMQ. We'll also delve into some advanced message routing and management. Keeping our broker alive and happy is important, too, so we'll talk about clustering and high availability and all that jazz.

This workshop will provide an introduction to Big Data Analytics using Apache Spark, using HDInsight on Azure (SaaS) and/or an HDP deployment on Azure (PaaS). There will be a short lecture that includes an introduction to Spark and the Spark components. Spark is a unified framework for big data analytics.
Spark provides one integrated API for use by developers, data scientists, and analysts to perform diverse tasks that would have previously required separate processing engines such as batch analytics, stream processing and statistical modeling. Spark supports a wide range of popular languages including Python, R, Scala, SQL, and Java. Spark can read from diverse data sources and scale to thousands of nodes. The lecture will be followed by a demo. There will be a short lecture on Hadoop and how Spark and Hadoop interact and complement each other. You will learn how to move data into HDFS using Spark APIs, create a Hive table, explore the data with Spark and SQL, transform the data and then issue some SQL queries. We will be using Scala and/or PySpark for labs. Users have 2 options to follow along with the demo labs. You can use:
* Hortonworks Sandbox on a VM. No data center, no cloud service and no internet connection needed! Full control of the environment. http://hortonworks.com/products/hortonworks-sandbox/#install
* HDP 2.3.2 on Azure with Hortonworks Sandbox. Try Hortonworks Sandbox on Windows Azure. It's free for the first month, and there's no need to download the VM!

Taking a tour of designing a production-scale Data Science Platform using Azure Cloud-based Microsoft big-data and Machine-learning technologies. This session will present the industry's best practices and patterns (with real-life examples) in designing and developing a scalable and fault-tolerant data platform. We will discuss multiple design choices (options) and the rationale behind choosing one over the others. I will also provide a high-level overview of the current state (which has changed a lot since last code-camp) of various big-data technologies such as Hadoop, Spark, HBase and Hive running on top of Azure, along with the web-based Machine Learning Studio running in Azure to design and architect Data Science applications. You don't need to be familiar with it to attend this session.
One of the challenges of developing a large-scale data operation is the reliable and rapid storage of large quantities of data. Couchbase Server is a distributed NoSQL database designed for performance, scalability, and availability. Unlike other NoSQL solutions, Couchbase supports the SQL query language N1QL. In this session, I will demonstrate the features of Couchbase Server and demonstrate how to use the Couchbase SDK with ASP. In addition I will discuss lessons learned during the implementation of these tools. Attendees will gain the knowledge necessary for assessing Couchbase as a solution to their large data needs.

Many are surprised, when coming from consumer mobility, at just how different the operating environment and requirements can be in enterprise-focused mobility. This session will cover enterprise mobility from the typical business requirements through platform architecture, engineering and implementation, leading finally to developing applications on this often fragmented technology foundation. Enterprise mobility can be very challenging, with mistakes resulting in anything from lost productivity up through total loss of your proprietary data and, often, the client's trust. An understanding of the enterprise mobility landscape is essential to crafting and implementing a successful mobile strategy upon which to build your business. The topic areas will be applicable for any sized organization including large, regulated and governmental organizations. Once the groundwork is laid, we will focus on what this all means for application development so you can build, and actually implement, the next killer enterprise app.

Swift was introduced by Apple during WWDC 2014. Taking cues from modern languages such as Rust and F#, Swift is a refreshing replacement for the aging Objective-C language. Apple open sourced Swift in version 2. This means it will soon power web servers on the open web!
There are already great companies like IBM contributing heavily to the Swift open source scene. Swift's playground environment makes it easy to quickly play with the language and is the tool of choice for experimentation. Swift 3 was just released in September. This session will bring you up to speed with the Swift language through live code demos inside of a Swift playground. I'll highlight new features from Swift 3 and how they impact your development with the language. You'll leave with the knowledge needed to tackle iOS, Mac or web applications using a new language.

There are thousands of apps out there; find out how to make yours stand out with some great polish through animations. You'll learn how to give your Windows apps that style and finish without writing tons of code by creating reusable animation libraries that you will use again and again.

Taking a tour of the Microsoft Azure-based Big-Data platform (one of the most promising technologies in the industry, at least for the next few years) that unlocks the potential of creating new types of business applications that were not possible before. This session will take a look into the new features of Azure/HDInsight (which supports Hadoop, Spark, Hive, HBase and many other big-data technologies that run on top of cloud-based virtualized infrastructure), discuss the possible design scenarios in support of writing cross-domain data analytics applications and finally write a few (more than one) different real-world applications. Most of these technologies are portable and run on all major technological platforms (i.e. beyond the Microsoft platform) seamlessly. I will also walk you through the Azure-based Machine Learning Studio and Microsoft Cognitive Services to design and architect Data Science applications. You don't need to be familiar with it to attend this session.

Don't know where to start with your Hadoop journey? Is your company considering Hadoop and you want to get up to speed quickly?
Just want to modernize your skills? If you answered YES to any of these then this session is for you. Hadoop is a hot skill in the data space but it's challenging to learn both the new technologies (like Spark and Hive) as well as the modern concepts (like Lambda/Kappa and "streams"). We'll break down the most important concepts that you need to know and can start using in your job TODAY, even if you don't have a Hadoop cluster. We'll do an overview of the important tooling and show you how to spin up a sandbox in minutes.

Sold Out. This is an all-day workshop showing the ins and outs of Xamarin.Forms, a wonderful framework from Microsoft for building cross-platform applications. We will be building an Android app together. The basic topics that will be covered include:
– Xamarin.Forms Controls and XAML
– Authentication using both JWT and also Xamarin.Auth
– Navigation in Xamarin.Forms
– Local databases with SQLite
To code along, please bring a laptop with Visual Studio installed (either Windows 10 or Mac OSX). To test the application locally, it's recommended to install Android Studio as well for the SDK tools and emulators. You can also use an Android phone connected via USB in developer mode. I will be providing a RESTful API we will use for our application. If time permits, these advanced topics can be covered:
– XAML design using Live Player or Gorilla Player
– iOS development (with Unified API)
– Animations (using AirBnb's Lottie library)
– Continuous integration using Jenkins
– Incorporating an Android widget into the app

DevOps is the secret sauce behind today's most successful development teams and companies. Join Microsoft Cloud Solutions Architect Louis Berman as he shows you how to speed your race into the cloud; in many cases by as much as 10x within a single year. In this demo-heavy session Mr.
Berman will demonstrate how very easy it is for every organization to adopt DevOps, but just as importantly he'll also focus on the soft skills needed to "sell" DevOps to your clients and peers. The session will conclude with Mr. Berman's "Top 10 Tips for DevOps Success!"

The Server is Dead! Going Serverless to Create a Highly Scalable Application You Can Manage. Do you do cloud development? Are you adhering to the 12 factors (https://12factor.net/)? Cloud applications are ubiquitous now, and it is easy to miss the factors that make them truly cloud native. In this session, let's revisit the 12 factors with some practical examples using a CloudFoundry application.

So you have been tasked with making that query faster. You know that indexes often can help with query performance, but how do you even start going about it? Join Sebastian Meine, Ph.D. for this truly interactive session and discover how indexes in SQL Server work. After attending, you'll be able to answer common index questions, like:
– Which columns should I add to my index?
– How many indexes should I add to my new table?
– Does the column order in my index have to match that in my query?
– Does it hurt to have too many indexes?
– When should I consolidate indexes?
– Are there queries that get slower after I create an index?
But even more important, you'll be able to explain how indexes are organized in SQL Server and what mechanism is responsible for the amazing performance gains you can achieve with them. Don't miss this unique session. Attend, and you might just turn into an indexing superstar.

Life would be so much easier if everything was in a database or pulled via API. But that is not the case. All too often we get data files (or have to send them) in various formats. This session discusses some of the tools available to help you figure out what the file looks like so you can pull it apart using those tools or your tool-of-preference.
While the GNU version of these tools will be the focus, the skills learned apply to many different platforms. Being available on so many platforms gives you lots of choices. Your choices include Microsoft's Bash under Windows 10, Cygwin under many Microsoft Windows versions, Mac OS X, the Linux core of Android, commercial Linux — like Red Hat Enterprise, and commercial UNIX — like IBM's AIX or Sun/Oracle's Solaris. Of particular interest are 'head', 'tail', 'wc', 'awk', 'dd conv', and shells. A few of the differences between UNIX/Linux and Windows will also be discussed to ease your shifts in our heterogeneous environments. This knowledge also comes in handy if you need to migrate code from an existing UNIX/Linux-based application.
s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107876768.45/warc/CC-MAIN-20201021151342-20201021181342-00429.warc.gz
CC-MAIN-2020-45
14,483
54
https://community.graylog.org/t/graylog-does-not-log/15290
code
i'm stuck way back at the very beginning i'm afraid. i can't get graylog to log anything. i'm following along here: i've used the most basic docker-compose example provided (although i've repeated the more basic steps without using docker-compose and had the same results), but i've added access to the 5555 port. it all seems to work, i can log into the web console, but when i issue the command (as per the document):
echo 'First log message' | nc localhost 5555
nothing gets added to the messages in the web console. if i run that nc command with the -v argument, it says:
Connection to localhost 5555 port [tcp/*] succeeded!
i have tried this on a macOS host, and a linux host, and the messages don't appear in the web console on either. any tips? i'm reasonably experienced with docker if that makes a difference. this is my docker-compose.yaml:
version: '3'
services:
  # MongoDB: https://hub.docker.com/_/mongo/
  mongo:
    image: mongo:3
    networks:
      - graylog
  # Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/6.x/docker.html
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.5
    environment:
      - http.host=0.0.0.0
      - transport.host=localhost
      - network.host=0.0.0.0
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    deploy:
      resources:
        limits:
          memory: 1g
    networks:
      - graylog
  # Graylog: https://hub.docker.com/r/graylog/graylog/
  graylog:
    image: graylog/graylog:3.2
    environment:
      # CHANGE ME (must be at least 16 characters)!
      - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
      # Password: admin
      - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
      - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
    networks:
      - graylog
    depends_on:
      - mongo
      - elasticsearch
    ports:
      # Graylog web interface and REST API
      - 9000:9000
      # Syslog TCP
      - 1514:1514
      # Syslog UDP
      - 1514:1514/udp
      # GELF TCP
      - 12201:12201
      # GELF UDP
      - 12201:12201/udp
      - 5555:5555
networks:
  graylog:
    driver: bridge
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335424.32/warc/CC-MAIN-20220930020521-20220930050521-00576.warc.gz
CC-MAIN-2022-40
1,981
10
https://www.mumsnet.com/Talk/general_advice_tips/73961-question-for-those-with-kids-with-us-passports
code
Would your son not be able to enter and leave the US on his UK passport? My dd has both as she was born there and we always went on the same passports as she would have had to queue in a different line otherwise... No experience of US here, but I have a non-UK EU passport, my kids have British passports, and have dh's surname (I have kept my own). I have found it useful to carry the long birth certificates with me when I travel, as some countries (Germany and Switzerland particularly) get a bit arsey about believing the kids are actually mine. (The fact that they all look like little clones of me is, apparently, neither here nor there). DD1 (now 23) has a US passport and we are both British but we only travelled all together when she was tiny, so prob. not v helpful. However, IIRC we did use her US passport whenever we left or re-entered, even though we only had GB ones and went through the aliens [weird eyes emoticon] channel. (They never removed her for her own protection anyway.)
s3://commoncrawl/crawl-data/CC-MAIN-2018-43/segments/1539583511744.53/warc/CC-MAIN-20181018063902-20181018085402-00416.warc.gz
CC-MAIN-2018-43
995
4
https://ro.pokernews.com/tours/wsop/2013-world-series-of-poker/event-33-2-500-seven-card-razz/chips.33524.htm
code
We caught a hand between Andy Bloch and Scott Bohlman, where Bloch kept catching monster cards on every street and kept firing. Bohlman completed on third street to 600, and Bloch (seated to his immediate left) raised to 1,200. Bohlman called. On fourth street, Bloch raised and Bohlman called. Same as on fifth street, where Bloch bet and Bohlman called. On sixth street, Bloch bet again, and noticing that he was far behind, Bohlman folded.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506528.3/warc/CC-MAIN-20230923194908-20230923224908-00192.warc.gz
CC-MAIN-2023-40
442
3
https://movabletype.org/documentation/developer/apps/alt-templates.html
code
Developers can override the templates used by the application to display its user interface, without overwriting the templates the application ships with, by placing alternative versions of those templates in the /path/to/mt/alt-tmpl directory. Files placed there should have the same file name as the template they wish to override. For example, let's suppose we would like to provide an alternative to the Movable Type dashboard. Here are some steps you can follow to make those changes safely without altering the original files:

1. Make a copy of the dashboard template and place it in the alt-tmpl directory:
cp /path/to/mt/tmpl/cms/dashboard.tmpl \
   /path/to/mt/alt-tmpl/cms/
2. Edit the file at its new location.

Movable Type will immediately begin using your customized version of the template as opposed to the version that came with Movable Type by default.

Pros and Cons

While alternate templates are by far the simplest and most straightforward way to customize the Movable Type user interface, one major limitation remains: only one plugin can override a template in this manner at a time. Plus, keeping alternate templates up to date with the templates they are derived from can be cumbersome and error-prone. Therefore, alternate templates are typically best used by users to customize their own installation, rather than as a mechanism for plugin developers to make slight alterations to a page's contents.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817095.3/warc/CC-MAIN-20240416124708-20240416154708-00294.warc.gz
CC-MAIN-2024-18
1,378
9
http://freshermart.in/vacancies/big-data-qa-engineer-bangalore/
code
Duties and Responsibilities
- Validate data pipelines/workflows delivered by Data Engineers and dashboards by the BI Engineers against user story requirements
- Ensure principles of privacy and security are assessed and designed into the data solutions
- Write high quality, generalised test datasets and test methods for verifying and validating new functional and non-functional data pipelines/workflows and dashboards, enabling automated testing wherever possible
- Generate golden datasets to efficiently stress-test software and assure version control
- Report QA outcomes to Data Engineers, BI Engineers and Business Owners
- Develop standard QA rules that would be applied across all data pipelines/workflows and dashboards
- Maintain a high-quality audit trail of your data pipeline/workflow and dashboard testing
- Manage and mentor junior members of the team
- Create standards, conventions, processes and SLAs to reduce time and increase quality of delivery of the QA team
- Evaluate different tools and technologies to be used by the team.
What are the requirements of the role?
- Understanding of Quality Assurance data practices
- Hands-on experience writing unit tests and mocking the data
- Hands-on experience analysing regression reports, automated builds and KPIs
- Strong experience of writing acceptance criteria or scenarios collaboratively with other stakeholders
- Participation in the full data development lifecycle
- Experience with SQL, Python, data modeling, ETL development, and data warehousing
- Experience with AWS technologies including Redshift, Redshift Spectrum, Athena, Hive etc.
s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154042.23/warc/CC-MAIN-20210731011529-20210731041529-00149.warc.gz
CC-MAIN-2021-31
1,612
19
http://aplebessite.com/2015/01/06/an-idle-thought-of-my-own/
code
…on the subject of language, triggered by John H. McWhorter's piece in a recent Wall Street Journal. He wrote about the likelihood of English being the sole planetary language in 100 years. He suggests it's very unlikely that English—or any other language—will become the sole language any time soon, and for a variety of reasons. He never got into why that might be a good thing, though. He did mention: "If all humans had always spoken a single language, would anyone wish we were instead separated now by thousands of different ones?" Which is what triggered my idle thought. It's good that we've never had a single language; it's good that we developed a variety of civilizations from a plethora of languages. Language is thought. Language is both how we express our ideas and how we develop ideas in the first place: the one feeds back into and informs the other. Differing modes of thinking produce differing modes of problem recognition and of solution. There's no doubt in my pea brain that we've made, as a species, the technological, political, and social progress that we have because of our varying languages and their varying problem handling sets. A single language would have slowed us down a lot more, and it would have left us with blind spots and problems unrecognized and recognized problems unsolved that we have, in fact, worked through or are in the process of working through. We need lots of unrelated languages, because we need the broad range of thinking. We'll stagnate as a species when we are reduced to a single language.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233510707.90/warc/CC-MAIN-20230930181852-20230930211852-00539.warc.gz
CC-MAIN-2023-40
1,570
7
http://forums.larian.com/ubbthreads.php?ubb=showflat&Number=606450
code
NOTE: This has been figured out. No need to read except for EDIT 2 (which, by itself, should have been in the general section). Maybe I should just delete most of this but, well, maybe someone will find it instructive or interesting or, at the very least, make them absolutely certain that I am totally insane. I found the Act II Level 1 and 2 keys easy enough but I couldn't pick up the level 1 key no matter what I tried. It said I had access when I clicked on it and indeed it was on the secondary weapons list but - after I grabbed the second key and went to the BF, three of the four are locked. I can go back I suppose but likely still won't be able to pick it up. Suggestions? EDIT - I did notice in the strategy guide that it needn't be picked up. EDIT 2 - also read something in the guide that it's possible to turn the DK pink - just wondering, if I do so, does he say anything? If so, it may be worth doing just to hear it. EDIT 3 - OK, this may be getting weirder or, perhaps it is the way it was meant to be but there are six entrances to dungeons? The problem is actually fixed now since two are unlocked but I haven't read about more than four in the second act. Good thing I decided to explore the other places that were still in the fog of war. EDIT 4 - Gaaaah! The guide stopped at four keys - until the next page! All of this for naught (well, if I get an answer, perhaps not for the pink DF).
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670559.66/warc/CC-MAIN-20191120134617-20191120162617-00536.warc.gz
CC-MAIN-2019-47
1,412
7
http://www.lulus.com/products/restricted-blackout-whiskey-tapestry-oxford-flats/106714.html
code
Jersey Devil Grey Crop Tee
City Classified Sadler Dark Beige Patent Pointed Flats
Two-Piece Suits Me Natural Beige D'Orsay Flats
J Slides Dibbie Black and White Slip-On Sneakers
Training Days Black Sneakers
Kensie Gardenia White Cutout Slip-On Sneakers
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098987.83/warc/CC-MAIN-20150627031818-00168-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
604
14
https://learning10.com/artificial-intelligence/dgx-station-vs-diy/
code
As a developer, researcher, or data scientist, you want to bring the power of AI to your work. You could go to IT to get a server, but you want the control and flexibility to be productive, while still working with GPUs for deep learning training and inference. It might seem cost-effective to try to build your own AI workstation. But doing that might leave you configuring, troubleshooting, and optimizing software for weeks or even months. Introducing NVIDIA DGX Station, the world's first personal supercomputer, purpose-built for AI. Learn more: http://nvda.ws/2yPDrqu
s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514572289.5/warc/CC-MAIN-20190915195146-20190915221146-00222.warc.gz
CC-MAIN-2019-39
630
3
https://www.wordpresspluginfinder.com/rss-linked-list/
code
Can be seen in use at:
Top Categories
- put the top_categories.php file in your plugins directory
- activate the plugin
- modify your template to include the following PHP:
You can pass in an optional numeric value like:
to change the number of categories displayed. By default it shows the top 10 categories (by number of posts).
s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522556.18/warc/CC-MAIN-20220518215138-20220519005138-00553.warc.gz
CC-MAIN-2022-21
326
6
https://blenderartists.org/t/making-parts-of-a-model-emit-light-in-unity-and-be-able-to-turn-on-off-and-up-down-of-brightness/1185129
code
I am new here and new to 3D drawing; I hope I am posting in the right forum. I have (well not yet) a model where I want parts of the model to be able to emit light on command and it must also be possible to turn up and down the brightness of that light. I have no idea how to approach this…? Do I have to make those parts as separate objects on my model, or even export separate models and assemble them in Unity and make them light sources in Unity? Or can I do it all in Blender somehow?
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496667260.46/warc/CC-MAIN-20191113113242-20191113141242-00465.warc.gz
CC-MAIN-2019-47
492
2
https://oracle-patches.com/en/databases/services-applicable-to-dbaas
code
We now have a solid foundational understanding of what cloud computing is and how it applies to DBaaS. In this article, we outline the specifics around services as they relate to DBaaS and what they mean to end users as well as to the provider in a cloud computing environment. The services offered by the DBaaS provider to the end user fall into three main categories: provisioning, administrative, and reporting. Some of these services are optional, and others are mandatory.
Provisioning services provided to end users include some or all of the following:
- The ability to requisition new databases.
- The ability to choose database options as needed (partitioning, advanced security, High Availability with Real Application Cluster, etc.).
- The ability to add resources (storage, CPU, network bandwidth, etc.) to existing databases. This includes the ability to scale up as well as to scale down.
- Database backup capability using provided backup resources.
Administrative services include some or all of the following:
- The ability to perform on-demand database restores and recoveries.
- The ability to perform database clones using existing database backups.
- Database monitoring capabilities, including basic 24/7 incident reporting management capabilities.
Reporting services include some or all of the following:
- Performance management, which is the ability to look at a database from a performance and tuning standpoint, whether in the form of reporting or in the form of application and GUI database restore and recovery capability.
- Resource consumption and usage reports, which let end users compare the resources provisioned and the actual usage so they can fine-tune resource needs to accommodate workload.
- The ability to view resource chargeback based on resource allocation and consumption.
- The ability to track provider compliance to the SLAs.
The ability to track provider compliance to the SLAs is an especially critical point to understand.
To ascertain whether or not the requested services are being provided at an appropriate level, end users must first define what "appropriate level" means. For each service, there may be more than one SLA. The higher the SLA, the more technology and resources are needed to satisfy it. Pricing is also affected by the level of service detailed in the SLA. For example, for I/O performance guarantees, the SLA would specify the input/output operations per second (IOPS) and megabytes per second (MBps), or would specify I/O service times. Based on the SLA, the provider determines the actual storage layer provided to the end user. It is the provider's responsibility to ensure that the service delivered to the end user is within the accepted limits.

If we look at I/O performance as an example, the SLAs could be structured as follows:

- Bronze standard: Small block average I/O service times equal to or under 15 ms.
- Silver standard: Small block average I/O service times equal to or under 10 ms.
- Gold standard: Small block average I/O service times equal to or under 5 ms.
- Platinum standard: Small block average I/O service times equal to or under 1 ms.

Based on these SLAs, the provider may choose to

- Place bronze customers on low-end storage arrays using primarily serial advanced technology attachment (SATA) disks.
- Place silver customers on high-performance storage arrays using serial-attached SCSI (SAS) drives.
- Place gold customers on high-end storage arrays with a combination of SAS and solid-state drives (SSD).
- Place platinum customers on a high-end storage array based entirely on SSD or flash memory.

The key is that, once end users make their choice, the provider has to

- Define the exact key performance indicators (KPIs) required to meet the service level expectation.
- Ensure that the KPIs required for the SLA are measured and monitored.
- Plan for expansion to continue to be able to meet and provide the expected KPI metrics both now and in the long term.
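As a sketch of how a provider might encode and track the tiered I/O SLAs above (the tier names and millisecond limits come from the Bronze-to-Platinum example; the Python function and its name are our own illustration, not part of any Oracle tooling):

```python
# Small-block average I/O service-time limits (ms) per SLA tier,
# taken from the Bronze-Platinum example above.
SLA_LIMIT_MS = {"bronze": 15, "silver": 10, "gold": 5, "platinum": 1}

def sla_compliance(tier, samples_ms):
    """Return the fraction of measured I/O service times that fall
    within the tier's limit -- the kind of KPI a provider must measure,
    monitor, and report to demonstrate SLA compliance."""
    limit = SLA_LIMIT_MS[tier]
    return sum(1 for s in samples_ms if s <= limit) / len(samples_ms)
```

A gold-tier customer whose measured samples were [3, 4, 6, 2] ms, for instance, would see 75% compliance, since one sample exceeds the 5 ms limit.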
- Provide end users with reports that support or, if necessary, justify the provider's service performance capabilities.

Architecture of an Oracle-Based DBaaS Implementation

DBaaS started primarily as a consolidation exercise for reducing capital expenditures (CAPEX), but as it evolved, organizations started looking into other key drivers, such as self-service, showback, and chargeback. Before we look at the details of how to implement DBaaS, we need to have some understanding of the underlying consolidation models and deployment issues that are common to all DBaaS flavors and some of the terminology that we use when defining DBaaS.

The various consolidation models that can be used to provide DBaaS are shown in Figure 1. The simplest and most prevalent form of consolidation exists around server virtualization. Server virtualization offers a simple way of running multiple operating system instances on the same hardware. A better model, platform consolidation, consolidates multiple databases on the same operating system, or a cluster. However, in both cases, database sprawl is still an issue that invariably leads to larger administrative overheads and compliance challenges. An even better consolidation model is the capability to host multiple schemas from different tenants within the same database, using Oracle Database 12c's multitenant architecture.

Figure 1. Consolidation models

Before we describe such methodologies, however, it is important to have a common understanding of the components that make up the underlying architecture.

Architecture and Components

In Oracle terminology, hosts containing monitored and managed targets are grouped into logical pools. These pools are collections of one or more Oracle database homes (used for database requests) or databases (used for schema requests).
A pool contains database homes or databases of the same version and platform; for example, a pool may contain a group of Oracle Database 12c container databases on Linux x86_64. Pools can in turn be grouped into zones. In the DBaaS world, a zone typically comprises a host, an operating system, and an Oracle database. In a similar vein, when defining middleware as a service (MWaaS) zones, a zone consists of a host, an operating system, and an Oracle WebLogic application server. Collectively, these MWaaS and DBaaS zones are called platform as a service (PaaS) zones.

Users can perform a few administrative tasks at the zone level, including starting and stopping, backup and recovery, and running chargeback reports for the different components making up a PaaS zone. In the DBaaS view of a PaaS zone, self-service users may request new databases, or new schemas can be created in an existing database. The databases can be either single instance or Real Application Cluster (RAC) environments, depending on the zones and service catalog templates that a user can access. Diagrammatically, these components and their relationships are shown in Figure 2.

Figure 2. Components of a PaaS zone
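The constraint described above, that a pool only contains database homes of the same version and platform, can be sketched as a tiny data model (the class and field names here are our own illustration, not Oracle's API):

```python
from dataclasses import dataclass, field

@dataclass
class DatabaseHome:
    version: str   # e.g. "12.1.0.2"
    platform: str  # e.g. "Linux x86_64"

@dataclass
class Pool:
    homes: list = field(default_factory=list)

    def add(self, home):
        # A pool may only contain homes of the same version and platform.
        if self.homes and (home.version != self.homes[0].version
                           or home.platform != self.homes[0].platform):
            raise ValueError("pool members must share version and platform")
        self.homes.append(home)

@dataclass
class PaaSZone:
    # Zones group pools; a DBaaS zone also implies a host, an OS,
    # and an Oracle database around these pools.
    pools: list = field(default_factory=list)
```

Attempting to add an 11g home to a pool of 12c homes would raise an error, mirroring the same-version, same-platform rule.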
s3://commoncrawl/crawl-data/CC-MAIN-2023-23/segments/1685224655244.74/warc/CC-MAIN-20230609000217-20230609030217-00250.warc.gz
CC-MAIN-2023-23
7,020
44
https://db0nus869y26v.cloudfront.net/en/List_of_foodborne_illness_outbreaks
code
This is a list of foodborne illness outbreaks. A foodborne illness may be from an infectious disease, heavy metals, chemical contamination, or from natural toxins, such as those found in poisonous mushrooms. Main article: List of foodborne illness outbreaks in the United States In 1999, an estimated 5,000 deaths, 325,000 hospitalizations, and 76 million illnesses were caused by foodborne illnesses within the US. Illness outbreaks lead to food recalls.
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233511075.63/warc/CC-MAIN-20231003092549-20231003122549-00449.warc.gz
CC-MAIN-2023-40
455
3
https://community.airtable.com/t5/show-tell/have-you-had-frustrations-with-interfaces-try-amplify-instead/td-p/69748
code
Interfaces is a cool new Airtable feature! But, you've probably run into some limitations like adding new records or setting internal permissions. It might be time to try the On2Air Amplify app in the marketplace.

"This is what INTERFACES should have been." - Chris

Amplify is a record dashboard app that lets you customize how you view records, linked records, linked tables, and more. You can:

- Create new records or linked records from the dashboard
- Set Permissions for records and layouts. You can choose to allow or disallow a specific role type or a specific user to create, clone, edit, or delete records. (Please note, a user can still view the data in the underlying base. This is meant to help a user focus on data specific to their needs.)
- Create Default Values for any field automatically. Add your default values in the settings, then each time you create a new record, it's automatically pre-filled with your field data
s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499468.22/warc/CC-MAIN-20230127231443-20230128021443-00395.warc.gz
CC-MAIN-2023-06
936
6
https://www.ancestry.com/boards/topics.software.general/1783.5/mb.ashx
code
I don't know if you found any information but none the less. You can import your data into Access and Excel, but the basic functions only allow importing text-coded records. To do anything more accurate, you need to create custom queries and macros in Access or Excel. The images can be relinked, but you'll probably need to link them manually unless your GED file has those links. You can also create a macro which changes the links in your GED file prior to importing it. The Access or Excel queries would need to read the GED contents, and the macros could then format the links. It's not impossible to do but it involves work on your part. The product owners don't expect people to move away from their products, so they don't provide anything but simple backup and import tools. The good news is that once you create your new database and set up all the macros and queries, you can also include functionality to do full backups including any scans. Please note though that Excel and Access are still limited, and any good-quality database management software won't be free. Hope this helps.
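As an illustration of the kind of pre-import macro described above, here is a sketch in Python (our own example, not from the original post) that parses GEDCOM lines and rewrites the FILE links pointing at image scans before the file is imported:

```python
def parse_gedcom_line(line):
    """Split a GEDCOM line into (level, xref, tag, value).
    Lines look like "0 @I1@ INDI" or "2 FILE C:/old/photo.jpg"."""
    parts = line.strip().split(" ", 2)
    level = int(parts[0])
    if len(parts) > 1 and parts[1].startswith("@") and parts[1].endswith("@"):
        xref = parts[1]
        rest = parts[2] if len(parts) > 2 else ""
        tag, _, value = rest.partition(" ")
    else:
        xref = None
        tag = parts[1] if len(parts) > 1 else ""
        value = parts[2] if len(parts) > 2 else ""
    return level, xref, tag, value

def relink_images(lines, old_dir, new_dir):
    """Rewrite FILE values (image links) that start with old_dir so they
    point at new_dir instead -- the link-fixing step done before import."""
    out = []
    for line in lines:
        level, xref, tag, value = parse_gedcom_line(line)
        if tag == "FILE" and value.startswith(old_dir):
            line = f"{level} FILE {new_dir}{value[len(old_dir):]}"
        out.append(line)
    return out
```

The same pass could be extended to reformat other tags, which is roughly what an Access or Excel macro would do over the raw GED text.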
s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934806066.5/warc/CC-MAIN-20171120130647-20171120150647-00188.warc.gz
CC-MAIN-2017-47
1,096
6
https://github.com/omegahat/Rlibstree
code
This is a quickly written interface to a suffix tree library to explore different aspects of external data and the use of suffix trees in R for text manipulation.

Currently, we use the libstree (http://www.cl.cam.ac.uk/~cpk25/libstree/, download from http://www.cl.cam.ac.uk/~cpk25/downloads/libstree-0.4.0.tar.gz or more recently from http://www.icir.org/christian/libstree/) source code by Christian Kreibich. Installation of that library is relatively straightforward as it is stand-alone, not depending on other libraries. Unfortunately, to use it easily with another program, the libstree code should be installed. To put it someplace for which you do not need special permissions, use

  cd libstree-0.4-0   # or whatever the relevant directory is.
  ./configure --prefix=$HOME/local
  make install

Then use

  R CMD INSTALL --configure-args=--with-libstree=$HOME/local Rlibstree

and that should find the relevant header and library files. Specifically, the argument for --prefix when building and installing libstree should be the same as the value for --with-libstree in the R CMD INSTALL.

The package provides access to StringSet and SuffixTree classes which are external pointers, i.e. references to the C-level data structures. The package provides an interface to the getLongestSubstring() facilities in libstree for finding the longest common and repeated substring of a given length. These are the algorithms that are currently available in the libstree code.

In this R package, one can iterate over the elements in a StringSet using lapply/sapply. The operation can be given as either an R function or a C routine (an object of class "NativeSymbol").
The C routine can be obtained using getNativeSymbolInfo(symbolName). We want the address of that, but methods for lapply will coerce a NativeSymbolInfo appropriately. The ability to use a C routine is intended to illustrate this facility in the R-C interface and also for efficiency. We can add the same for traversing the tree.

The longest substring algorithms are suboptimal in this library. So why are we using it? Primarily because we hope the interface design from R to the library will carry over to different implementations. We have used this to provide an illustration of different aspects of S4 classes, external pointers, and using C routines in lapply.

We will probably move to a different implementation of suffix trees if there is sufficient motivation. Other possible libraries include Shlomo Yona's at http://yeda.cs.technion.ac.il/~yona/suffix_tree/ and Stefan Kurtz's, which is part of MUMmer from TIGR (The Institute for Genomic Research) available at sourceforge.net (see the file src/kurtz/streesrc/streeproto.h for the potentially relevant C routines), or in a different form the wotdsrc.new from http://bibiserv.techfak.uni-bielefeld.de/download/tools/wotd.html associated with the paper http://www.zbh.uni-hamburg.de/staff/kurtz/papers/GieKurSto2003.pdf

I have no experience with any of these yet, so take the pointers as merely places to explore.
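To illustrate what getLongestSubstring() computes in the common-substring case, here is a plain dynamic-programming sketch in Python (quadratic time; a generalized suffix tree, which is what libstree builds, answers the same query in linear time). The function name is our own, not the package's:

```python
def longest_common_substring(a, b):
    """Longest common substring of a and b via dynamic programming.
    cur[j] holds the length of the common suffix of a[:i] and b[:j];
    the maximum over all (i, j) marks the longest common substring."""
    best_len, best_end = 0, 0
    prev = [0] * (len(b) + 1)
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1
                if cur[j] > best_len:
                    best_len, best_end = cur[j], i
        prev = cur
    return a[best_end - best_len:best_end]
```

For example, longest_common_substring("suffixtree", "prefixtree") returns "fixtree". The suffix-tree approach wins precisely when the inputs get large, which is the motivation for wrapping a C library rather than doing this in R.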
s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886102757.45/warc/CC-MAIN-20170816231829-20170817011829-00338.warc.gz
CC-MAIN-2017-34
3,306
7
http://wiki.fractalaudio.com/axefx2/index.php?title=Scene_controller
code
- 1 Available on which Fractal Audio products
- 2 About scenes
- 3 Scenes and MIDI
- 4 Switching scenes can cause an audio gap when the Amp block changes
- 5 Scene names
- 6 Copying and pasting scenes
- 7 Default scene upon preset loading
- 8 Switching scenes
- 9 Initial block Bypass states
- 10 Scenes, X/Y and channels
- 11 Modifiers, controllers and Global Blocks
- 12 Scene Revert
- 13 Scene Controllers
- 14 More control with scenes

Available on which Fractal Audio products

- Axe-Fx III: yes
- Axe-Fx II: yes
- AX8: yes
- FX8: yes

About scenes

Read this: Mini Manual (PDF). And consult the Owner's Manual.

Scenes represent a single preset in 8 different variations. The routing (grid) is always the same in all scenes (remember: it's a single preset). The parameter values in all blocks are also the same (with the exception of Scene Controllers, read below). But the Bypass states ("engaged/bypassed" aka "on/off") of the effect blocks can vary per scene. Also, the X/Y state or active Channels can vary per scene. Finally, each scene can have its own output level setting.

All in all, scenes are similar to an advanced switching system for a pedalboard or a 19" rack. Switching between sounds is faster with scenes than with presets, when configured correctly. Also, spillover of delay and reverb trails is preserved better when switching between scenes than when switching presets.

There's no way to create, enable or disable scenes. They are always there. Remember, scenes are just variations of a single preset. The indicator on the hardware display and the switch LED on compatible foot controllers show the currently active scene.

Scenes and MIDI

Scenes offer MIDI functionality, depending on the hardware.

- FX8 – Scenes can switch relay states and send a MIDI Program Change and/or Control Change.
- AX8 – Scenes can send a MIDI Program Change message.
- Axe-Fx III – Scenes can send up to 8 MIDI Program Changes and/or Control Changes, through the Scene MIDI block.
Switching scenes can cause an audio gap when the Amp block changes

When an Amp block is switched between X/Y or changes channels when switching scenes, there will be a short gap in the sound. The gap is caused by the necessity to briefly mute and unmute the sound. See Amp block and X/Y switching. To avoid this, switch between two Amp blocks (AX8: n/a), or use Scene Controllers to change amp settings instead of X/Y switching, or use one of Bakerman's switching tricks.

Scene names

The Axe-Fx III features customizable scene names. The names are displayed on the FC controllers and can be edited on the hardware and in Axe-Edit III. The Axe-Fx II, AX8 and FX8 do not support scene names.

Copying and pasting scenes

To copy/paste scenes on the hardware: use Layout > Tools. This does not copy the scene's name. "Scene copy doesn't copy the name, just the states." source For more possibilities, use the editor.

Default scene upon preset loading

Axe-Fx III: this is a global option (Global menu). When set to "As Saved", the scene selected when recalling a preset is the scene that was active when the preset was saved. When set to a particular scene value, that scene will always be selected when a preset is recalled. Note: this applies to switching presets on the hardware only, not to loading presets in the editor.

Axe-Fx II: the default scene is always 1. This can't be changed.

FX8 and AX8: you can specify the default scene in the Global menu, or per preset.

Switching scenes

- Use NAV up/down buttons on the Home screen.
- Use soft knob "A" in certain screens.
- Use a foot controller or directly connected switch.
- Use MIDI (assign CCs in Setup > MIDI/Remote).
- Use the editor.
- Use MIDI PC Mapping.

To switch scenes via SysEx, check this document.

- Use Quick Control knob A to select a scene within the current preset in the Recall screen.
- I/O > Mapping on the Axe-Fx II provides a Map To Scene parameter. This makes it possible to send a MIDI PC message to select a scene within a preset.
After configuring the mapping, don't forget to set Mapping to Custom to activate it.

- Pedal jack: connect a momentary switch to the rear of the unit. In I/O > CTRL set Scene Increment to Pedal. In I/O > Pedal set Pedal Type to Latching.
- MFC-101: the MFC-101 lets you assign switches to scenes. You can assign a switch to each scene, increment or decrement scenes, or toggle between scenes 1 and 2. To turn the bottom row of the MFC-101 into scene switches, set Bank Size to 0.
- FX8: press the assigned Scene switch, or press the assigned Single or Sticky Scene switch and press the scene number switch.
- AX8: press the assigned Scene switch, or press the assigned Single or Sticky Scene switch and press the scene number switch. Or turn the "C" knob.
- MIDI controller: assign a switch to the MIDI CC for Scene Select with values 0 to 7 to select scene 1 to 8 within the current preset. Values higher than 7 also select scene 8. Values higher than 63 will step through the scenes, wrapping at the limits. MIDI CCs can also be used to increment or decrement the current scene. If you don't specify a value, the switch will switch between scenes 1 and 8. The default MIDI CC for scene selection is 34 (the Axe-Fx III lets you specify the CC). This can be changed.

Initial block Bypass states

Scenes 2 to 7 may have all effect blocks engaged initially. This is by design. Watch out for loud bursts. Use the editor to perform bulk operations, such as setting an effect's Bypass state in all scenes in one go.

Scenes, X/Y and channels

An effect's X/Y state or Channel is set per scene. For example, Delay in scene 1 can be set to X or Channel A, while the same Delay in scene 2 is set to Y or Channel B. Note that using different types of Delay or Reverb in each state may impact spillover. Of course, you can still use two instances of effect blocks (instead of or in addition to X/Y states or Channels) in your presets. This lets you bypass/engage each instance per scene (at the expense of CPU).
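The MIDI CC Scene Select rules described earlier can be sketched as pseudo-logic (this is our own illustration, not Fractal Audio firmware; in particular we assume "step through the scenes, wrapping at the limits" means incrementing):

```python
def select_scene(current_scene, cc_value):
    """Map a Scene Select CC value (0-127) to a scene number (1-8):
    values 0-7 pick scenes 1-8 directly, values 8-63 also select
    scene 8, and values above 63 step through the scenes, wrapping
    (assumed here to mean increment, with 8 wrapping back to 1)."""
    if cc_value <= 7:
        return cc_value + 1
    if cc_value <= 63:
        return 8
    return current_scene % 8 + 1  # 8 wraps back to 1
```

So sending CC value 2 always lands on scene 3, while repeatedly sending value 127 cycles 1, 2, ..., 8, 1, ...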
"Channels can be thought of as a preset for an individual block. For example, you can think of the Delay block as being a stand-alone delay pedal (or rackmount processor) with four presets. Scenes store the bypass state and channel for each block. By using scenes and channels you can use a single preset for an entire song, an entire set or even the entire show. Since the routing doesn't need to change, things switch fast and smooth. When switching presets the processor has to assume the routing might have changed and therefore has to clear all the buffers, mute the audio, etc., which takes time and interrupts the audio." source

Use the editor to perform bulk operations, such as changing a block's channel in all scenes of a preset in one go.

Modifiers, controllers and Global Blocks

Modifier settings, controller settings (except for Scene Controllers) and Global Blocks are the same in every scene.

Scene Revert

You can't carry over the current engaged/bypassed state of an effect from the current scene to another one. What you can do: instruct the device the way you want it to handle effect states when leaving and returning to a scene. By default the Axe-Fx remembers which block states are manually bypassed/engaged after having selected a scene. When you select another scene and return to the previous scene (without changing presets), the device will recall those effect block states. If you prefer always recalling a scene in its initial stored state (keeping blocks to their saved states), turn on the parameter SCENE REVERT in I/O > MIDI. Note that SCENE REVERT applies only to scene switching via MIDI or FC controllers; it doesn't kick in when switching scenes on the hardware. Furthermore, SCENE REVERT doesn't work with PC Mapping.

MFC-101: if you want to retain effect block states in a switching scenario, stick to preset switching and use "global" IA switches on your MFC-101.

Scene Controllers

Parameter values in effect blocks are the same across all scenes.
However, there are Scene Controllers available which allow you to set (change) parameter values per scene. Just like regular controllers, you can assign a Scene Controller to a modifiable parameter. The controller values are set per scene in Control > Scene.

Note that the values in Control > Scene always relate to the parameter they control. For example, when attaching a scene controller to Delay Feedback, be aware that this parameter ranges from -100 to 100. Setting the controller at 0% sets feedback at -100, not at 0. source If you use Min and Max in the modifier menu, the Scene Controller percentages will be proportional to that specified range.

- Attach Scene Controller 1 to Reverb Mix. It might have a value of 10% in scene 1, and 20% in scene 2. This would change the Reverb mix per scene. A popular application is to attach a Scene Controller to Input Drive or Input Trim in the Amp block, enabling you to vary the amount of amp gain per scene.
- Make a Scene Controller change a note's pitch in the Synth or Pitch block per scene.
- Crossfade sounds through Scene Controllers.
- More tips and examples

The Axe-Fx III Owner's Manual has a tutorial on Scene Controllers.

Number of scene controllers:

- Axe-Fx II, AX8, FX8: 2
- Axe-Fx III: 4

More control with scenes

- Combine multiple existing presets into a single preset with scenes.
- Use scenes instead of the MFC-101's Song Mode, to provide all sounds for a song in a single preset.
- Use two Amp blocks and if necessary two Cab blocks for flexibility (Axe-Fx II and III only).
- Configure different X/Y states or channels for effect blocks and set these per scene.
- Decrease Bank Size on the MFC-101 to have more IA switches available for scene switching.
- Use the Alternate Preset functionality on the MFC-101 to get access to multiple presets (with scenes) through a single preset switch.
- Create a "Lead" scene for each preset by adding a delay and drive block, and increasing the output level of the scene (IN/GTE) and saving the scene with the blocks engaged.
- Create a scene where only the Amp and Cab blocks in the routing are engaged and dedicate an IA switch to it. This switch lets you return to your basic tone at all times. Think of it as a "Panic" switch.
- When using various guitars, the use of scenes enables you to optimize the preset's output level for each guitar. Also, you can bypass/engage stuff like a PEQ in each scene etc. for even more control.
s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578613603.65/warc/CC-MAIN-20190423194825-20190423220825-00474.warc.gz
CC-MAIN-2019-18
10,355
90
https://community.intel.com/t5/Intel-Quartus-Prime-Software/Quartus-Prime-University-Program-VWF-Functional-Simulation-Error/td-p/1212751
code
I am trying to simulate a 1-bit 2-to-1 MUX waveform in the Simulation Waveform Editor, but while running the simulation I am getting the following error. Expecting any help urgently. Thank you.

Could you provide the .qar design file to me and the steps taken to reproduce the error? You can attach it here or private message me if it is confidential.

This issue is probably due to opening the specified file in read mode. You have to validate that the path exists and that you have the correct permissions in the directory. I can see you have a space in your path "simulation singlebit_MUX2.vho". Change it to another name, put in a "_", or combine the words into simulationSinglebit_MUX2.vho.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337446.8/warc/CC-MAIN-20221003231906-20221004021906-00042.warc.gz
CC-MAIN-2022-40
659
5
https://www.techgroups.com/opportunities/opportunity/60539/
code
The Cybersecurity & Technology Controls group at JPMorgan Chase aligns the firm's cybersecurity, access management, controls and resiliency teams. The group proactively and strategically partners with all lines of business and functions to enable them to design, adopt and integrate appropriate controls; deliver processes and solutions efficiently and consistently; and drive automation of controls. The group's number one priority is to enable the business by keeping the firm safe, stable and resilient.

Our Cyber Security Technology Controls, Global Identity & Access Management (GIAM) team provides identity and access management solutions for the firm's infrastructure and applications. The team ensures that appropriate access controls are in place and applied effectively and continuously. The GIAM Data Services team is responsible for defining and executing the multiyear strategy and roadmap, along with working with our customers on a daily basis to meet their data needs. As a product owner on this team you will be responsible for defining data requirements, ensuring quality throughout the agile delivery lifecycle, and delivering customer communication (i.e. Release Notes).
We are actively seeking an experienced Data Services Product Manager who will:

* Serve as liaison between business and development teams to translate business and design requirements into technical requirements, providing strong leadership to both Product Owners and Development Teams in support of the customers
* Work with clients to identify use cases with clear business value and secure adoption commitments and partnerships on the work model
* Establish and author product documentation
* Analyze business requirements to understand the business needs and to determine how their applications can best functionally fulfill those needs
* Combine knowledge of what the business wants with knowledge of how the systems are built and used to create functional designs across applications
* Facilitate the production of key documents including business requirements, master story list, detailed user stories, sprint planning and design documents
* Facilitate project planning sessions with project managers, business analysts and team members
* Interface with members of your scrum team, line of business product teams, and business analysts from other teams to deliver solutions
* Work with the scrum team and customers to test and certify applications before they are deployed to production
* Work with technology and business partners to develop and refine our software delivery process
* Perform critical analysis on information consolidated from multiple sources, identify and resolve conflicts, and break down high-level information into detailed workable requirements
* Work with Developers and Subject Matter Experts to understand the work effort for a requirement, and, if needed, facilitate making adjustments that satisfy all parties
* Actively participate in adding to Confluence that is living documentation of the latest business and technical requirements
* Leverage an analytic mindset while working with data and business intelligence tools to deliver actionable insight
* Present analysis and recommendations to technology leaders and colleagues
* Work as a data strategist to identify and integrate datasets, data components, and attributes from various information systems that can be leveraged to advance our access management agenda
* Establish data management and data analytics principles, guidelines and best practices for the broader organization to follow

This role requires a wide variety of strengths and capabilities, including:

* 10+ years of hands-on experience as a Data Product Manager, with heavy emphasis on technology and implementation
* Must have demonstrated background in Data Management technologies and with business analysis
* Must have experience with Cloud Data solutions, with a preference to AWS
* Excellent verbal and written skills are critical since this job primarily entails delivering technical information to both technical and non-technical audiences
* Demonstrated work on breaking down data needs and translation to vision, strategy and roadmap
* Experience working with business intelligence and data science teams to deliver data pipelines
* Strong analytical and troubleshooting skills
* Ability to leverage SQL to query databases and develop data models
* Experience with QA and Test Driven Development in support of ensured product quality; establishing and providing testable data/functional requirements
* Ability to monitor analytics in support of ensured product performance and feature enhancements
* Strong interpersonal skills to manage relationships with a variety of partners and stakeholders
* Strong experience with Agile development methodologies
* Experience writing technical stories in JIRA/Confluence using industry standard notations
* Experience working with a diversified multi-location team
* Self-starter that is capable of tackling complex / loosely defined problems and structuring a well-organized and actionable solution
* Passion for data and deriving insights that can be applied to client's decision making
* Familiarity with AI solutions - including data management, machine learning and stream data architecture
* Background in Identity and Access Management
* Knowledge of financial services concepts and products
* Bachelor's Degree or equivalent enterprise-level hands-on work experience

JPMorgan Chase & Co., one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world's most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. In accordance with applicable law, we make reasonable accommodations for applicants' and employees' religious practices and beliefs, as well as any mental health or physical disability needs. Equal Opportunity Employer/Disability/Veterans
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00371.warc.gz
CC-MAIN-2021-25
6,786
46
http://squigloo.com.au/general/mobile-friendly-contact-details/
code
Have you ever visited a website on your phone and tried to call a phone number, but because of the formatting it doesn't work? In this tutorial I will show you how to make your contact details mobile-phone friendly, including the address, phone number and email.

There are a few different options available here, and after testing on various devices I have found using a Google Maps link to be the best option, as both iPhone and Android will intercept this link and prompt the user to choose to view it in Google Maps or the internet (or another map app). The other options I trialled included embedding a Google Map and using the geo prefix in a link. The issue with embedding the Google Map is that you're not giving the user the freedom to use the map app of their choice, and it also takes up screen real estate which you may not want to use. The issue with the geo prefix is that it is currently not well supported. It worked on Android but not iPhone, making it fairly useless for mobile.

Linking the address to Google Maps

Simply go to maps.google.com, find the address, press the link icon in the top right and copy the Google Maps link. Add this link to your address or a "view on map" link like so:

<a href="http://maps.google.com.au/maps?q=melbourne+gpo&ll=-37.813547,144.963613&spn=0.0099,0.026157&oe=utf-8&client=firefox-a&fb=1&gl=au&hq=gpo&hnear=0x6ad642af0f11fd81:0x5045675218ce7e0,Melbourne+VIC&cid=0,0,16491477996227855296&t=m&z=16&vpsrc=0&iwloc=A">Melbourne GPO</a>

Embedding a Google Map

To embed a Google Map you need to go to maps.google.com, find the address and then press the embed icon in the top right. This time select the iframe code and paste it into your website.

Use the geo prefix in your link

Simply add the 'geo:' prefix followed by the latitude and longitude, like so:

<a href="geo:-37.813547,144.963613">Melbourne GPO</a>

Making a phone number callable

To make a phone number or text call, simply include the 'tel:' prefix in your link, making sure you remove any spaces, + signs or special characters.
<a href="tel:0395075301">Call us</a> Note: Using this style link will not work on non-mobile browsers. We’ve all been doing this on for a long time and is common practice but here it is again to remind us all. <a href="mailto:firstname.lastname@example.org">Email us</a> Why not help the user out a little and add the subject line in for them too, like so:
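The tel: rule above (strip spaces, + signs and special characters) can be sketched as a tiny helper; this is illustrative Python, not code from the tutorial:

```python
import re

def tel_href(raw: str) -> str:
    """Turn a human-formatted phone number into a tel: link target,
    stripping spaces, + signs, brackets and any other non-digits."""
    return "tel:" + re.sub(r"\D", "", raw)
```

For example, tel_href("(03) 9507 5301") yields "tel:0395075301", the href used in the tutorial's link.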
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320303717.35/warc/CC-MAIN-20220121222643-20220122012643-00032.warc.gz
CC-MAIN-2022-05
2,306
15
https://gist.github.com/dannguyen/305c118f155bd1887da0
code
This message was sent via the stanfordgis mailing list: I am very pleased to announce that all Stanford University faculty, students and staff now have access to CartoDB.com through our own Enterprise level account. This provides Stanford researchers access to the full CartoDB platform, including 250mb of data storage per account, unlimited map views, sharing of private datasets within the Stanford Enterprise Organization, syncing of CartoDB datasets to Google forms and DropBox datasets, publishing of public maps from private datasets, and more! Soon to come will be migration tools for moving from one account to another, as well as Groups, for creating working groups and managing Labs, Classes and projects. To create your profile on the Stanford CartoDB Enterprise account, you must use your stanford.edu email address. If you currently have a personal CartoDB account associated with your stanford.edu email address, you will need to change the email associated with that account, or use a wildcard with your email address (see below). The steps to sign up for a Stanford CartoDB Account are: Go to http://stanford.cartodb.com/signup Use your stanford.edu email address to sign up for a profile. If you already have a CartoDB profile using your stanford.edu email address, try appending a wildcard to your email, like so: email@example.com becomes… firstname.lastname@example.org with a wildcard of '+mapninja' If you are new to CartoDB, you may find the following links helpful in getting up to speed on its very intuitive user interface: CartoDB Academy: http://academy.cartodb.com/ An Introduction to CartoDB through Humanities Lens https://gist.github.com/makella/7747631a51473403d8cb Introduction to SQL and PostGIS in CartoDB: https://gist.github.com/ohasselblad/721c7de3cff591635257 As always, watch the StanfordGIS listserv for announcements about CartoDB related workshops, events and news, and never hesitate to contact me if you have any questions about our resources.
Please feel free to forward this announcement to anyone you feel might be interested in using CartoDB.
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487621699.22/warc/CC-MAIN-20210616001810-20210616031810-00329.warc.gz
CC-MAIN-2021-25
2,096
8
https://help.precisely.com/r/Precisely-EnterWorks/10.5/en-US/EnterWorks-Process-Exchange-EPX/EPX-Workflow-Activities-and-Callout-BIC-Framework/EPX-Core-Product-Activities/Automatic-Activity/Command-Line-BIC-Automatic-Activity/Using-the-Command-Line-BIC-Editor/Expiration-Tab
code
The Expiration tab is used to specify how long a work item can remain at the Command Line BIC activity and what will happen to the work item if that period is exceeded. Setting an expiration period is optional, but should an error occur with the BIC Manager, an expired Command Line BIC activity will indicate that the BIC Manager was unavailable for a time longer than the expiration period.

To enable work item expiration:
Click the Expiration Period checkbox.
Specify when the expiration should occur by clicking any of these options:
Specific Period – If the expiration is set to Specific Period, set the amount of time until work item expiration by entering values in the Days, Hours, Minutes, and Seconds fields. You can manually enter the values or click the up and down arrows to fill out the fields. Make sure that at least one of the four fields is filled out. The Seconds value must be at least 15 seconds. Setting an expiration period of less than 60 seconds will cause a delay in the actual work item expiration because, by default, the Control Manager checks for expired work items every 60 seconds. The expiration polling interval can be reduced by modifying the value of the control.expiredWorkItem.interval setting in <EPX>\bin\config.properties (in the Control Manager Properties section), but before doing so you should consider the impact that this change will have on the performance of your system.
Dynamic Date – If the expiration is set to Dynamic Date, you will need to provide the following:
- Work Item Key – The key you will use to get the actual send date and/or time during runtime. For example, type in "DynamicExpiration.Date" if the expiration date is designated by the Date field of the DynamicExpiration work item type in the work item.
- Date Format – The default format is MM dd yyyy HH:mm. Select your desired date format from the Date Format dropdown list or enter values in the combo box.
- Format Test – Contains the current date and/or time formatted using the format selected in the Date Format field. Each time a new format is selected, this value is updated to reflect the newly selected format. Refer to "Appendix," on page 16 for the available date formats and time zones.
Instead of setting the expiration date by specifying a work item key, a send action class can be used to set that date. Click the Custom Action tab and enter the fully qualified class name for the custom code in the Send Action Class field. After filling out the necessary fields, proceed to the next step.
- Click the Send Work Item checkbox to automatically send the work item to the next activity in the flow upon expiration. Note: The work item cannot go to another point in a flow that needs manual intervention to select a participant. For example, the work item can go to an All split to specify that all participants receive the work item following the split. If it is a Some or a One split, however, manual intervention is required to choose a participant.
- Click the Send E-mail checkbox to automatically notify one or more participants, groups, or roles should the activity expire. When you select the Send E-mail checkbox, the E-mail table is enabled. Use the E-mail table to add email recipients. To add email recipients, select one or more recipients from the E-mail address selection dialog. For more information about the Expiration tab, see the Process Modeling guide.
- Save the data entered and proceed to another tab by clicking Apply. Clicking OK will also save the data entered and exit the Command Line BIC editor. To cancel saving the data entered, click Cancel.
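The Specific Period rules above (at least one of the four fields filled out, a Seconds value of at least 15) can be sketched as a small validation helper — a hypothetical illustration, not actual EPX code:

```python
from datetime import datetime, timedelta

def expiration_time(start, days=0, hours=0, minutes=0, seconds=0):
    """Compute when a work item would expire under the Specific Period
    option, enforcing the two rules the editor imposes."""
    if not any((days, hours, minutes, seconds)):
        raise ValueError("at least one of the four fields must be filled out")
    if seconds and seconds < 15:
        raise ValueError("the Seconds value must be at least 15")
    return start + timedelta(days=days, hours=hours,
                             minutes=minutes, seconds=seconds)
```

Keep in mind that, as noted above, the Control Manager only polls for expired work items every 60 seconds by default, so the actual expiration can lag the computed time.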
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296819273.90/warc/CC-MAIN-20240424112049-20240424142049-00606.warc.gz
CC-MAIN-2024-18
3,602
18
https://jeffhoogland.blogspot.com/2014/04/
code
here. All of these images are built directly on top of the latest Ubuntu 14.04 packages. The 32bit and 64bit images utilize the 3.13 Linux kernel, while the Chromebook image utilizes a 3.11 kernel due to hardware compatibility issues. The Chromebook image is tested/designed to work with the Acer C720 and HP 14" Chromebooks. It could very well work with other Chromebooks, but they have not been tested. For more information on installing Bodhi on your Chromebook follow the directions here.

Updated Release Schedule
Some folks made note that when I first posted the Bodhi 3.0.0 release schedule we were set for a stable release at the end of June. After some discussion on our user forums it was decided that we would all be more comfortable waiting till after Ubuntu releases their first major update to 14.04 before we call 3.0.0 our "stable" Bodhi release. With this in mind the Bodhi 3.0.0 stable release has been moved from a June 27th target date to an August 2nd target date. This makes our release cycle heading towards 3.0.0 stable look like:
- May 30th - Release Candidate
- June 27th - Release Candidate 2
- August 2nd - Stable Release

While I linked a change log above, a picture is worth a thousand words as they say! Below is pictured the new Radiance Enlightenment theme that is nearing completion (Thanks Duma!), which is now the default look for Bodhi 3.0.0. Also shown in the screenshot is the ePad text editor (replaces Leafpad) and the eepDater system updater. As always please, please, please do not post issues in a comment on this post. Instead open a thread in the 3.0.0 testing section of our user forums. Also keep in mind this is a testing release not intended for production machines.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100276.12/warc/CC-MAIN-20231201053039-20231201083039-00780.warc.gz
CC-MAIN-2023-50
1,713
9
https://hydroculture.global/aquaculture/feeding-and-monitoring-scylla-serrata-mud-crab-farming-in-ras-vertical-farm/
code
There are some differences when culturing mud crabs in boxes versus the traditional mud crab ponds. The first one is the feeding and monitoring operations. With individual boxes, you can keep a close tab on each individual crab and implement any corrective actions! PS: We conduct monthly courses for those interested in mud crab aquaculture; should you be interested, do drop us a PM with your email, full name and mobile number!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947475897.53/warc/CC-MAIN-20240302184020-20240302214020-00339.warc.gz
CC-MAIN-2024-10
549
4
https://listman.redhat.com/archives/anaconda-devel-list/2008-November/msg00247.html
code
I am having a problem skipping the network settings when kickstarting a Linux rescue mode boot. I am trying to write a kickstart file for Linux rescue mode boot. My kickstart file is as shown below:

#kick start is run in text mode
#language used for installation
network --device eth0 --bootproto dhcp --onboot yes
# install from cd-rom
#don't automatically mount any installed linux partition in rescue mode

Even though I have given the network settings options in the kickstart file, at boot up it prompts for network settings:

Do you want to start the network interfaces on this system? | Yes | | No |

My question is: is my kickstart file correct or not?
Thanks for any help,
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00387.warc.gz
CC-MAIN-2022-05
686
14
https://ifootpath.kayako.com/article/10-how-do-i-delete-my-account
code
We are able to delete your account for you (we will be sorry to see you go...) Just contact us providing your email address and name and we will confirm the deletion. It would also be great if you could let us know why you would like your account deleted. Just to say though that there are no membership or subscription costs. The iFootpath website is free to use and the App is a one-off fee of £1.99 - you get all current walks and all future walks with nothing more to pay. If you have the iFootpath App you have bought this via your App Store (they hold all the payment details, etc.). The App will always be available to you even if you delete it from your device via your App Store account. iFootpath only holds a Name, Email address, Username and Password.
s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496670268.0/warc/CC-MAIN-20191119222644-20191120010644-00322.warc.gz
CC-MAIN-2019-47
764
3
https://miramuseai.net/blog/ComfyUI-IPadapter-V2-update-fix-old-workflows-comfyui-controlnet-faceswap-reactor-13263
code
ComfyUI IPadapter V2 update fix old workflows #comfyui #controlnet #faceswap #reactor

TLDR: The video tutorial guides users through updating to the new version of the IP adapter V2, thanking Mato for his creation. It covers the process of updating the IP adapter node through the manager or manually via GitHub. The tutorial then demonstrates how to organize models and adapt previous workflows to the new version. A practical example is given, showcasing a technique that integrates a character's face into a new image using the IP adapter and reactor, proving the updated tool's effectiveness and ease of use.

- 🚀 Introduction of the new version IP adapter V2 and gratitude towards Mato, the creator.
- 🔄 Importance of updating Comfy and the IP adapter node for the latest features and improvements.
- 🔗 Directing users to Mato's GitHub page for manual updates and accessing required models.
- 📂 Highlighting the new folder structure for models in the updated version, specifically 'ComfyUI/models/ipadapter'.
- 🔄 Demonstration of how to transition from the old IP adapter to the new one within a workflow.
- 🔍 Explanation of the two main nodes available in the new IP adapter: 'IP adapter Advanced' and 'IP adapter Tiled'.
- 🖼️ Clarification on the use of 'IP adapter Tiled' for panoramic images and its image preparation capabilities.
- 📌 Step-by-step guide on fixing common errors when updating workflows and removing old nodes.
- 🎨 Showcasing a practical example of integrating character faces into new images using the IP adapter and Reactor.
- 📈 Presentation of a simple workflow adapted for LCM (Latent Consistency Model).
- 🎉 Final demonstration of the updated workflow's effectiveness in producing results close to the reference image.

Q & A
What is the main topic of the video?
- The main topic of the video is how to use the new version IP adapter V2.
Who is Mato and why is he thanked at the beginning of the video?
- Mato is the creator of the IP adapter. He is thanked for his contribution to the community and for making the tools that enable many of the processes demonstrated in the video.
What is the first step in updating to the new version of the IP adapter?
- The first step is to update Comfy and enter the manager, then click on 'update ComfyUI'.
What happens if the automatic update through the manager doesn't work?
- If the automatic update doesn't work, it's possible to update the IP adapter node manually by visiting the GitHub page provided in the video description.
What is the significance of the 'ComfyUI/models/ipadapter' folder in the new version?
- In the new version, the models should be placed in the 'ComfyUI/models/ipadapter' folder, which is different from the previous version's location.
What are the two main nodes available for the IP adapter in the new version?
- The two main nodes are 'IP adapter Advanced' and 'IP adapter Tiled'.
What is the difference between 'IP adapter Advanced' and 'IP adapter Tiled'?
- The main difference is that 'IP adapter Tiled' is used for more panoramic, non-square images, and it prepares the images for the IP adapter, eliminating the need for a 'prepare image to clip Vision' node.
How does one resolve the error encountered when transitioning from the old to the new IP adapter?
- To resolve the error, one should delete the old IP adapter node and the system should then work as usual.
What technique was demonstrated in the video for integrating a character's face into a new image?
- The technique involves a combination of the IP adapter and the reactor, which was previously demonstrated in some of the last videos.
How does the final result of the demonstration compare to the reference image?
- The final result is very close to the reference image, showcasing the effectiveness of the new version of the IP adapter.
What does the video creator promise to do with the workflows?
- The video creator promises to update all the workflows that were uploaded in the descriptions of the videos for compatibility with the new version of the IP adapter.

🔧 Introduction to IP Adapter V2 and Updating Process
The paragraph introduces the new version of the IP adapter, expressing gratitude to Mato, its creator. It emphasizes the importance of Mato's work and encourages viewers to check out his channel. The speaker outlines the process of updating the IP adapter through the manager and provides a link to the GitHub page for manual updates. The paragraph also highlights the changes in the file structure for the models in the new version, explaining the need to move them to a new folder or create a link using the extra_model_paths.yaml file.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296817128.7/warc/CC-MAIN-20240417013540-20240417043540-00143.warc.gz
CC-MAIN-2024-18
5,104
46
https://funcshun.com/blog/aws-cloudhsm-update-cost-effective-hardware-key-management-at-cloud-scale-for-sensitive-regulated-workloads/
code
Our customers run an incredible variety of mission-critical workloads on AWS, many of which process and store sensitive data. As detailed in our Overview of Security Processes document, AWS customers have access to an ever-growing set of options for encrypting and protecting this data. For example, Amazon Relational Database Service (RDS) supports encryption of data at rest and in transit, with options tailored for each supported database engine (MySQL, SQL Server, Oracle, MariaDB, PostgreSQL, and Aurora). Many customers use AWS Key Management Service (KMS) to centralize their key management, with others taking advantage of the hardware-based key management, encryption, and decryption provided by AWS CloudHSM to meet stringent security and compliance requirements for their most sensitive data and regulated workloads (you can read my post, AWS CloudHSM – Secure Key Storage and Cryptographic Operations, to learn more about Hardware Security Modules, also known as HSMs).

Major CloudHSM Update
Today, building on what we have learned from our first-generation product, we are making a major update to CloudHSM, with a set of improvements designed to make the benefits of hardware-based key management available to a much wider audience while reducing the need for specialized operating expertise. Here's a summary of the improvements:

Pay As You Go – CloudHSM is now offered under a pay-as-you-go model that is simpler and more cost-effective, with no up-front fees.

Fully Managed – CloudHSM is now a scalable managed service; provisioning, patching, high availability, and backups are all built-in and taken care of for you. Scheduled backups extract an encrypted image of your HSM from the hardware (using keys that only the HSM hardware itself knows) that can be restored only to identical HSM hardware owned by AWS. For durability, those backups are stored in Amazon Simple Storage Service (S3), and for an additional layer of security, encrypted again with server-side S3 encryption using an AWS KMS master key.

Open & Compatible – CloudHSM is open and standards-compliant, with support for multiple APIs, programming languages, and cryptography extensions such as PKCS #11, Java Cryptography Extension (JCE), and Microsoft CryptoNG (CNG). The open nature of CloudHSM gives you more control and simplifies the process of moving keys (in encrypted form) from one CloudHSM to another, and also allows migration to and from other commercially available HSMs.

More Secure – CloudHSM Classic (the original model) supports the generation and use of keys that comply with FIPS 140-2 Level 2. We're stepping that up a notch today with support for FIPS 140-2 Level 3, with security mechanisms that are designed to detect and respond to physical attempts to access or modify the HSM. Your keys are protected with exclusive, single-tenant access to tamper-resistant HSMs that appear within your Virtual Private Clouds (VPCs). CloudHSM supports quorum authentication for critical administrative and key management functions. This feature allows you to define a list of N possible identities that can access the functions, and then require at least M of them to authorize the action. It also supports multi-factor authentication using tokens that you provide.

AWS-Native – The updated CloudHSM is an integral part of AWS and plays well with other tools and services. You can create and manage a cluster of HSMs using the AWS Management Console, AWS Command Line Interface (CLI), or API calls.

You can create CloudHSM clusters that contain 1 to 32 HSMs, each in a separate Availability Zone in a particular AWS Region. Spreading HSMs across AZs gives you high availability (including built-in load balancing); adding more HSMs gives you additional throughput. The HSMs within a cluster are kept in sync: performing a task or operation on one HSM in a cluster automatically updates the others. Each HSM in a cluster has its own Elastic Network Interface (ENI). All interaction with an HSM takes place via the AWS CloudHSM client. It runs on an EC2 instance and uses certificate-based mutual authentication to create secure (TLS) connections to the HSMs. At the hardware level, each HSM includes hardware-enforced isolation of crypto operations and key storage. Each customer HSM runs on dedicated processor cores.

Setting Up a Cluster
Let's set up a cluster using the CloudHSM Console. I click on Create cluster to get started, then select my desired VPC and the subnets within it (I can also create a new VPC and/or subnets if needed). Then I review my settings and click on Create. After a few minutes, my cluster exists, but is uninitialized. Initialization simply means retrieving a certificate signing request (the Cluster CSR) and then creating a private key and using it to sign the request (these commands were copied from the Initialize Cluster docs and I have omitted the output; note that ID identifies the cluster). The next step is to apply the signed certificate to the cluster using the console or the CLI. After this has been done, the cluster can be activated by changing the password for the HSM's administrative user, otherwise known as the Crypto Officer (CO).

Once the cluster has been created, initialized and activated, it can be used to protect data. Applications can use the APIs in AWS CloudHSM SDKs to manage keys, encrypt & decrypt objects, and more. The SDKs provide access to the CloudHSM client (running on the same instance as the application). The client, in turn, connects to the cluster across an encrypted connection.

The new HSM is available today in the US East (Northern Virginia), US West (Oregon), US East (Ohio), and EU (Ireland) Regions, with more in the works. Pricing starts at $1.45 per HSM per hour.
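The M-of-N quorum authentication described above boils down to a simple rule, sketched here as a toy Python check (this is a conceptual illustration, not the CloudHSM API):

```python
def quorum_approved(approvals, authorized, m):
    """Return True when at least m distinct identities from the
    authorized list of N identities have approved the action."""
    return len(set(approvals) & set(authorized)) >= m
```

With authorized identities {'alice', 'bob', 'carol'} and m = 2, approvals from alice and bob pass, while alice plus an unknown identity do not.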
s3://commoncrawl/crawl-data/CC-MAIN-2023-40/segments/1695233506686.80/warc/CC-MAIN-20230925051501-20230925081501-00061.warc.gz
CC-MAIN-2023-40
5,763
22
https://cutshort.io/jobs/hibernate-java-jobs-in-pune
code
Beauto Systems is looking for a Java Developer to join our growing team, as we develop projects for small businesses across South Africa and beyond. We are seeking someone with a passion and depth of experience in solving problems that customers love and can rely on.

Job Responsibilities:
- Design, implement and maintain java application phases
- Take part in software and architectural development activities
- Develop technical designs for application development
- Develop application code for java programs
- Conduct software analysis, programming, testing and debugging
- Identify production and non-production application issues
- Transform requirements into code
- Develop, test, implement and maintain application software
- Recommend changes to improve established java application processes

Experience Required:
- 3 to 7 years hands-on software development experience
- Proven working experience in Java development
- Hands-on experience in designing and developing applications using Java EE platforms
- Object oriented analysis and design using common design patterns
- Profound insight into Java and JEE internals (class loading, memory management, transaction management etc.)
- Excellent knowledge of MySql, Hibernate, Spring, Junit, Git
- Experience in the Spring Framework
- Contribute in all phases of the development lifecycle
- Write well designed, testable, efficient code
- Ensure designs are in compliance with specifications
- Prepare and produce releases of software components
- Support continuous improvement by investigating alternatives and technologies and presenting these for architectural review

Looking for a versatile engineer with the following skill set.
Primary: Java, J2EE, Spring, Spring Boot, Hibernate, Python based ORM frameworks, unit testing frameworks, web technology and database knowledge
Good to have: Ruby on Rails, Python, jQuery, XML, HTML5 and other UI technology, shell scripting

Java, J2EE
About Company: WebShar India Private Limited.
Culture: Startup culture, conditional work from home allowed. Equipment: A MacBook Pro will be provided by the company.

Eligibility Criteria:
Working knowledge of Java, Spring, Hibernate, MySQL, Mongo DB; AWS cloud services exposure; 1+ years in development for enterprise applications and experience of working on the full stack. Strong programming skills; hands-on experience in developing modern web applications. Must have experience in developing cloud-based web applications. Good analytical and problem-solving skills. Must be familiar with managing and maintaining a code repository like Git. Strong command of Java, Hibernate, Spring Boot, and developing and deploying microservices. Knowledge of Japanese culture/language will be an added advantage. Experience with Agile/Scrum development methodologies.

Job Description:
Designs, develops, and implements web-based Java applications to support business requirements. Follows approved life cycle methodologies, creates design documents, and performs program coding and testing. Resolves technical issues through debugging, research, and investigation.
- Own & develop the web solutions based on Java microservice architecture, Hibernate, Spring.
- Stay updated with new technologies and changes in technologies that affect back-end and front-end web development.
- Java / JavaScript code development using frameworks like Spring, Hibernate, REST web services, AWS services.
- DevOps activities using Maven, Eclipse and other tools.
- Contribute to providing estimates for implementation of new requirements and work diligently towards delivering within the estimates.
- Ensure quality of the releases by adhering to SDLC best practices such as unit tests, continuous integration, system testing.
- Re-use code. Ensure delivered code is modular and extensible.
- Ensure adequate coverage of functional and non-functional requirements in test plans.

Position Requirements:
- 3-5+ years of hands-on experience in Java/J2EE development. Experience with the implementation of a distributed, enterprise Java/J2EE solution is strongly preferred.
- Experience in Java/J2EE frameworks such as Spring and Hibernate is strongly preferred.
- Experience in SOA and SOA-related technologies, solutions and products, as well as experience with implementation of RESTful web services, is preferred.
- Experience with data modelling and experience with relational databases like MySQL, SQL Server or Oracle is desirable.
- Mobile application development experience on Android or iOS is desirable.
- Good to have: experience in machine learning and AI.

Seeking an experienced Java / J2EE senior developer / technical lead to join a highly skilled team of senior developers within the NLP Automation & Machine Learning Technology Group and help us continue to build our Cognitive Automation product. The role is not just about software development; it is also about the design and architecture of our proprietary product and its implementation across the finest financial firms globally. This is a senior position reporting to the Vice President of the company. As the lead engineer you will be directly responsible for the design and architecture of all software development. The codebase is less than 36 months old; there's lots of new development and improvements to be made. Working across the full development life cycle from stakeholder liaison through to delivery, you will make the technical decisions, set the standards and drive quality. The candidate should be self-motivated, energetic, driven and looking to build a career in a fast-paced market environment at one of the leading "domain-tech" firms. We embrace diversity, ideas and intellect and above all aim to be fair, honest, open and transparent.
We embrace tough technical and intellectual challenges, we solve the hard problems and bring incredible value to our customers, employees and shareholders.

Role:
- Lead software engineering projects and drive the development and delivery of enhanced software solutions
- Develop the overall technical plan and create architecture proposals based on identified solution gaps. As a recognised subject matter expert, lead planning, design and implementation of technical solutions
- Create solution definition and solution architecture. Assist management in business case development and scenario planning, leading to an effective decision-making process
- Engage with key stakeholders, internal and external, to understand user requirements
- Take ownership and accountability for the deliverables in all phases of the development life cycle
- Build a future-ready product and team
- Become redundant!

If your background resonates with the below, then do reach out to us!
- 6+ years of development / technical expertise (experience is indicative only)
- Hands-on experience with advanced core Java technologies incl. multithreading, distributed caching, & fault-tolerant logic
- Strong experience with real-time, low-latency, high-throughput, distributed and scalable systems
- Understanding and experience using continuous build tools like Maven / Jenkins / Git
- Experience with web technologies like Servlets, Spring and Struts
- Experience using the latest frameworks like Spring MVC, Spring Boot, Spring REST
- Experience with SQL on any of the RDBMSs - Oracle, PostgreSQL, MySQL
- Experience with any of the ORM frameworks - Hibernate/iBatis, JDBC, JPA
- Experience with web services development - SOAP, REST
- Exposure to JMS - IBM MQ or ActiveMQ is good to have
- Exposure to performance testing using JMeter is good to have
- Use of code repository tools like SVN, Git
- Exposure to any of the build and deployment tools - ant, gradle, maven
- Understanding of coding practices, code quality and code coverage
- Experience with Agile practices

Regards,
Pooja
s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540544696.93/warc/CC-MAIN-20191212153724-20191212181724-00185.warc.gz
CC-MAIN-2019-51
7,694
5
https://historicaldocuments.co/drupal-python/
code
- Drupal Basics Tutorial: this Drupal course will take you from beginner to being skilled in all aspects of Drupal. Diverse industries like art & culture, IT, consultancy, banking & insurance, media, and travel & tourism use Drupal, and if you are interested in working in any of these industries, joining Drupal online training is advisable. Python Drupal download module: this Python 3 module will help you extract data from a Drupal 7 or 8 web site. The data can later be used for archiving or for migration to another CMS, such as Wagtail. Drupal is an open source CMS, written in PHP. It has a long history and many followers. A large number of web sites run on Drupal and there is a substantial development community. The product is extendable; plenty of useful modules are available to enhance the functionality of Drupal web sites. I am creating a Drupal 7 custom module and would like to call a Python script from the PHP and receive back some output from the script. I am running on a Linux OS and have the following so far. In this chapter, we will study how to create pages in Drupal. It is very easy to create pages in Drupal. The following simple steps are used to create pages in Drupal. Step 1 − Click Content in the top menu. Step 2 − Click on Add content as shown in the following screen. Step 3 − Click the Basic page option. Step 4 − Create Basic page will get displayed, where you need to fill in all the required details as shown in the following screen. Following are the details of the fields present on the Create Basic page. Title − Specifies the title for the new page. Body − Specifies the description of the page. Text format − Specifies the text format for your page, such as Filtered HTML, Full HTML, or Plain text. Menu settings − Clicking the Provide a menu link checkbox shows menu details such as Menu link title, Description, Parent item, and Weight.
Revision information − Specifies revision information, if any changes are made to the page. URL path settings − Specifies a URL alias through which users can access the page content. Comment settings − Selecting Open or Closed controls whether a comment box is displayed for the page. Authoring information − Specifies the author's name and the date the page was authored. Publishing options − Specifies whether the page should be Published, Promoted to front page, and Sticky at top of lists. Once you have finished adding content to the page, click the Save button to create it. Before saving, you can also preview the filled page using the Preview button. Download data from Drupal using Python: download the file for your platform. If you're not sure which to choose, learn more about installing packages. Available files: drupal_download-0.0.3-py3-none-any.whl (7.1 kB) − File type: Wheel − Python version: py3; drupal_download-0.0.3.tar.gz (6.7 kB) − File type: Source − Python version: None.
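For the Drupal 7 question above (calling a Python script from PHP and reading back some output), one common pattern is to have the script print JSON to stdout so the PHP side can capture it with shell_exec() and json_decode(). A minimal sketch; the script name and payload fields are illustrative, not part of the original module:

```python
import json
import sys

def build_response(args):
    """Build the payload the Drupal module will parse from the script's stdout."""
    return {"status": "ok", "args": args, "count": len(args)}

if __name__ == "__main__":
    # PHP side (names illustrative):
    #   $out  = shell_exec('python3 myscript.py foo bar');
    #   $data = json_decode($out, TRUE);
    print(json.dumps(build_response(sys.argv[1:])))
```

Keeping the interchange format to plain JSON on stdout avoids any PHP/Python binding layer.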
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304835.96/warc/CC-MAIN-20220125130117-20220125160117-00413.warc.gz
CC-MAIN-2022-05
3,595
35
https://heds.nz/tags/gis/
code
Posts with the Gis tag 28 May, 2021 2 December, 2020 (updated 9 April, 2021) – We can create Leaflet maps with simple location data, using only the default Django models and a home-made geoJSON serialiser. Let's avoid bloated geographical libraries and see how lightweight we can get our maps to be. 12 July, 2020 – An easy way to add further interactivity to Leaflet maps rendered in R Shiny apps is to enable zoom-to-point functionality for your polygons. There currently isn't an out-of-the-box solution for this, but it's pretty easy ...
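The home-made geoJSON serialiser mentioned in the second post can be very small indeed. A sketch of the idea without Django itself; the input rows stand in for a queryset's .values('name', 'lat', 'lng'), and those field names are assumptions, not from the original post:

```python
import json

def to_geojson(rows):
    """Turn plain location dicts into a GeoJSON FeatureCollection string."""
    features = [
        {
            "type": "Feature",
            # GeoJSON coordinate order is [longitude, latitude], not [lat, lng].
            "geometry": {"type": "Point", "coordinates": [r["lng"], r["lat"]]},
            "properties": {"name": r["name"]},
        }
        for r in rows
    ]
    return json.dumps({"type": "FeatureCollection", "features": features})
```

The resulting string can be returned from a plain Django view and handed straight to L.geoJSON() on the Leaflet side, with no geographic libraries involved.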
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334620.49/warc/CC-MAIN-20220925225000-20220926015000-00515.warc.gz
CC-MAIN-2022-40
545
6
https://www.synergex.com/docs/tk/tkChap2PAINT.htm
code
W N: Supported in Synergy .NET on Windows. U: Supported on UNIX. V: Supported on OpenVMS. A single, printable paint character. The .PAINT command determines which character “paints” an empty field to indicate where the user types input. Initially the paint character is a blank. The .PAINT command resets the default paint character. Note that the PAINT qualifier of .FIELD will override the input window’s .PAINT character currently in effect for that field. Fields drawn from a repository always override .PAINT. Paint characters specified by .FIELD PAINT or .PAINT have no effect in a Windows environment. See the PAINT, NOPAINT qualifier for the .FIELD command for information about an alternative way to specify the paint character for an empty field. In the following example, the underscore character indicates where the user should type input.
s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158011.18/warc/CC-MAIN-20180922024918-20180922045318-00222.warc.gz
CC-MAIN-2018-39
865
10
http://www.freecode.com/tags/texlatex?page=3&sort=updated_at&with=&without=
code
The TeXlipse plugin adds LaTeX editing support to the Eclipse IDE. It provides both LaTeX and BibTeX editors, a project creation wizard, and a complete user manual of the editor functions. Additional features include syntax highlighting, document outline, section folding, command completion, cite and ref completion, templates, builder integration, viewer integration with inverse search, and more. The plugin makes it possible for LaTeX documents to be edited and built like normal projects in an IDE, and the viewer support makes it easy to check the outcome. prerex is an interactive (command-line) editor and a LaTeX macro support package that can be used to create very attractive and readable prerequisite charts. A graphical front-end for the editor also provides a prerex-enabled PDF viewer. A prerequisite chart is a network of course boxes linked by prerequisite and co-requisite arrows. Pandoc is a Haskell library for converting from one markup format to another, and a command-line tool that uses this library. It can read markdown and (subsets of) reStructuredText, HTML, and LaTeX, and it can write markdown, reStructuredText, HTML, LaTeX, DocBook, OpenDocument XML, RTF, ODT, GNU Texinfo, MediaWiki markup, and S5 HTML slide shows. Pandoc extends standard markdown syntax with footnotes, embedded LaTeX, and more. A compatibility mode is provided for those who need a drop-in replacement for Markdown.pl. Included wrapper scripts make it easy to convert markdown to PDFs and Web pages to markdown documents. It has a modular design where the addition of a new input or output format requires only the addition of a reader or writer module. dvipng makes PNG or GIF graphics from DVI files obtained from TeX and its relatives. Its benefits include speed; it uses very fast bitmap-rendering code for DVI files. Furthermore, it does not read the postamble, so it can be started before TeX finishes. 
It supports PK, VF, PostScript Type1 (via FreeType or t1lib), and TrueType fonts (via FreeType), color specials, can render CJK fonts, and more. LaTeX::Table is a Perl module that provides functionality for an intuitive generation of LaTeX tables. It ships with some predefined, good-looking table styles. This module supports multi-page tables via the xtab and longtable LaTeX packages. For publication quality tables, it utilizes the booktabs package. It also supports the tabularx and tabulary packages for nicer fixed-width tables. Furthermore, it supports the colortbl package for colored tables optimized for presentations. The ltpretty program makes it easy to use this module from within a text editor such as Vim or emacs.
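The “wrapper scripts” idea from the Pandoc entry is easy to reproduce: build the argument list once and shell out. A sketch, not Pandoc's own tooling; the file names are illustrative, and pandoc itself must be on PATH for the conversion step to run:

```python
import shutil
import subprocess

def pandoc_cmd(src, dest, from_fmt="markdown", to_fmt="html"):
    """Build the pandoc argument list (a pure function, easy to test)."""
    return ["pandoc", "-f", from_fmt, "-t", to_fmt, "-o", dest, src]

def convert(src, dest, **kwargs):
    """Run the conversion, failing early if pandoc is not installed."""
    if shutil.which("pandoc") is None:
        raise RuntimeError("pandoc not found on PATH")
    subprocess.run(pandoc_cmd(src, dest, **kwargs), check=True)
```

Changing to_fmt to "latex" or dest to a .pdf target covers the markdown-to-PDF case the description mentions.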
s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171004.85/warc/CC-MAIN-20170219104611-00176-ip-10-171-10-108.ec2.internal.warc.gz
CC-MAIN-2017-09
2,643
5
http://www.tfw2005.com/boards/threads/i-was-just-thinking-about-a-tf-problem.136274/
code
I don't know if this is in the right forum, but since it is TF related amongst others I will put it here. I don't know if this has ever happened or has been brought up before, but here goes, and it's not a personal problem. What if you had a big loose/boxed collection of TF toys (or any other brand: SW, GI Joe, etc.) and you started to date a girl/boy or whatever floats your boat who also collected TFs and had a big loose/boxed collection. Time goes by and you buy stuff for your own collections. The time comes to move in together and the first problem comes up: 1. What will happen to the collections? Will the couple compare and sell off the dupes, or keep both? After some time the second problem comes up: 2. The relationship dies; who gets the TFs? Would you start buying again? I know it's a long shot, but it could happen, and I bet it probably has. What would you people do?
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823839.40/warc/CC-MAIN-20171020063725-20171020083725-00774.warc.gz
CC-MAIN-2017-43
868
1
http://dartden.com/viewtopic.php?f=12&t=8946&sid=42fb97c54882881904bd2b73ea984c90
code
For the last few weeks I've been short on fruit flies, and instead of sending off for some I've been buying them from PetCo. I happened to notice about three weeks ago that they had an unusually large number of cultures, more than they normally have. Most of the time they generally have about three or four, but now all three of the stores have at least 12-15 cultures at various stages of development. (At this point I'll assume you know what I'm about to tell you...) Well, tonight I bought a few things there along with a culture, and while I was passing by the reptile cages something caught my attention. It only flashed by in my peripheral vision but somehow triggered something familiar in the deep recesses of my mind. I looked again and sure enough, there in the water dish of that enclosure was a 3-4 month old D. tinctorius "Cobalt". I'm not some elitist or arrogant dart frog keeper who thinks only us anointed can keep them, but the average schlock pet customer does not have the experience or knowledge base to deal with these animals. That, and the fact that they aren't going to be willing to spend the kind of money to raise them properly, nor are they willing to spend the time learning about them before they make a purchase. I should have known this was going to happen when I saw Green Tree Pythons there a couple of months ago.
s3://commoncrawl/crawl-data/CC-MAIN-2018-17/segments/1524125947957.81/warc/CC-MAIN-20180425193720-20180425213720-00424.warc.gz
CC-MAIN-2018-17
1,485
8
https://www.ecoviewwindows.com/locator/30080.18789.ecoview-windows-doors-of-houston/r-2516978522934671292/
code
Windows were installed just before Xmas. My wife and I are very happy with the quality and looks of the new windows. Kudos to the installation team led by Marco. Very professional and pleasant to work with. I had an issue with some windows, and this has been resolved to full satisfaction by Jenny Seales. Special mention for the competence she has shown in understanding and resolving the issue. I would unconditionally recommend Ecoview for window replacement.
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474661.10/warc/CC-MAIN-20240226162136-20240226192136-00073.warc.gz
CC-MAIN-2024-10
466
1
https://krantz.dev/projects/skew/
code
Skew is a chrome extension that lets people see the “skew”, or political bias using a sliding scale. After navigating to an article, users can click on the Skew chrome extension. Skew then parses the article and uses Google Cloud’s Natural Language Processing API to find biased wording. It categorizes those words as either “right”, “left”, or “neutral” then shows the extent either “minimal”, “moderate”, “strong”, or “extreme”. The analysis is displayed to the user within the extension window by showing a point on a line where the side denotes left or right, and the distance from the end denotes the extent of the bias. Skew was built for IvyHacks 2020. Due to the time constraints placed upon us and access to limited data, the detection is not particularly accurate, but it can still get the general sense of the article.
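The categorize-then-grade pipeline described above can be sketched without the Google Cloud NLP dependency. The word lists and thresholds below are invented purely for illustration; the real extension derives the left/right/neutral labels from the NLP API's output:

```python
# Toy stand-in for Skew's analysis step; the tiny word lists and the
# share thresholds are assumptions, not the extension's actual data.
LEFT = {"progressive", "collectivist"}
RIGHT = {"patriot", "traditionalist"}

def skew(text):
    """Return (side, extent) for a piece of text."""
    words = text.lower().split()
    left = sum(w in LEFT for w in words)
    right = sum(w in RIGHT for w in words)
    if left == right:
        side = "neutral"
    else:
        side = "left" if left > right else "right"
    share = (left + right) / max(len(words), 1)  # biased fraction of the text
    if share < 0.02:
        extent = "minimal"
    elif share < 0.05:
        extent = "moderate"
    elif share < 0.10:
        extent = "strong"
    else:
        extent = "extreme"
    return side, extent
```

The (side, extent) pair maps directly onto the point-on-a-line display the extension uses: side picks the half of the line, extent the distance from the centre.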
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335034.61/warc/CC-MAIN-20220927131111-20220927161111-00750.warc.gz
CC-MAIN-2022-40
863
3
https://agenda.unil.ch/display?id=1558618811618
code
Supporting the Design for Technology-Mediated Sharing Practices Tuesday 4 June 2019 (12h00 - 13h00) - Internef - 237 Online social networks have made sharing personal experiences with others - mostly in the form of photos and comments - a common activity. The convergence of social, mobile, cloud and wearable computing expanded the scope of user-generated and shared content on the net from personal media to individual preferences to physiological information (e.g., in the form of daily workouts). Once everyday things become increasingly networked (i.e., the Internet of Things), future online services and connected devices will only expand the set of “things” to share. Given that a new generation of sharing services is about to emerge, it is of crucial importance to provide design practitioners with the right insights to adequately support novel technology-mediated sharing practices. In my talk, I explore these practices within two sharing contexts: (1) outdoor sports and (2) “sharing economy” services. The goal of my research is to understand current practices of sharing personal digital and physical possessions, and to uncover corresponding end-user needs and concerns across novel sharing practices, in order to map the design space for user experience design to support emergent and future sharing needs. Anton is completing his doctorate at the Research Group for Ubiquitous Computing at the Faculty of Informatics at USI Lugano in Switzerland. His research interests lie in understanding user experience around contemporary sharing practices of personal digital information and physical objects. In 2016/17 Anton was a visiting design researcher at the Everyday Design Studio in the School of Interactive Arts + Technology at Simon Fraser University in Vancouver, British Columbia (Canada). Before joining the Ph.D. program at USI Lugano, Anton worked in Sony Mobile Communications Inc. (USA), Metaio GmbH (Germany) and Ricoh Company Co. 
Ltd (Japan), gaining hands-on technology prototyping and UX design expertise in the context of mobile augmented reality, wearable devices, and cross-device interactivity. Anton’s work is published in over 20 peer-reviewed articles at the top HCI-related venues including ACM CHI, DIS, MobileHCI, NordiCHI, and Ubicomp. In addition to that, some of his research projects were presented at the Swiss Pavilion at CeBIT (2016/17), the largest computer expo in Europe, and at the Mobile World Congress 2010, the world’s largest exhibition for the mobile industry.
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593657129517.82/warc/CC-MAIN-20200712015556-20200712045556-00168.warc.gz
CC-MAIN-2020-29
2,529
4
http://help.cashmusic.org/discussions/problems/22946-mailchimp-list-not-syncing
code
Awesome. Let me know, but either way I'll look into it. There's a new mass import API from MailChimp that should make an initial sync better, and I noticed some API stuff is pretty outdated... so I should clean up regardless and want to make sure syncing is all happy. And subs are definitely high up on the roadmap! I'm finishing up a new public version of that, so you should be able to check it out next week sometime... Thanks for responding. Please do keep me posted and give me any prompts for testing here too. I know sync is an issue; if you are asking me to re-import after the API is upgraded, I can do so for testing purposes.
s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251799918.97/warc/CC-MAIN-20200129133601-20200129163601-00481.warc.gz
CC-MAIN-2020-05
632
7
https://www.flexmonster.com/question/how-to-indicate-specific-format-for-number-values-at-the-pivot-tables-component/
code
We have your pivot tables component integrated into our web product, and we are having some problems with customers whose numbers use Spanish formatting (comma for decimals and dots for thousands). The problem is that the sums generated by the pivot tables component are not accurate. I’m thinking that we might need to indicate to pivot tables how to treat the number formatting? I checked the doc and found this link: https://www.flexmonster.com/doc/data-types-in-csv/, but it doesn’t indicate anything about number formatting. Can you guide me on this? Here is the CSV that we use as a data source: http://datos.energiaabierta.cl/rest/datastreams/244590/data.csv?pArgument0=2017 We are doing the following operation with the pivot tables (see attachment-01.png). I’m also attaching another image with the correct sum using Excel pivot tables (attachment-02.png). Here is the pivot tables integration (click on the “pivotear” icon on the left sidebar): http://datos.energiaabierta.cl/dataviews/241243/generacion-bruta-en-sistemas-medianos/ Thanks in advance. Thank you for the detailed explanation. Yes, you’re right: the component is not ready to receive formatted data in CSV. Please consider the idea of passing unformatted data and then applying the appropriate formatting on the grid. You can do it using the Format object, which is a property of the report object – http://www.flexmonster.com/api/format-object/. Please let us know if everything works fine for you. Thanks for the explanation. I will review our implementation and will contact you if I need more information. One more question: we are thinking of changing to a JSON data source implementation to implement these changes. Will I need new keys for our customers? Your current keys will work with a JSON data source as well. Should you have any other questions – feel free to ask.
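The support reply above covers formatting on the grid via the Format object; the other half of the problem is making sure the numbers in the CSV are machine-readable in the first place. A pre-processing sketch for Spanish-formatted values (it assumes dots only ever appear as thousands separators and the comma as the decimal separator):

```python
def parse_spanish_number(value):
    """Convert '1.234,56' (dots for thousands, comma for decimals) to a float."""
    return float(value.replace(".", "").replace(",", "."))
```

Running the numeric CSV columns through a conversion like this before loading them into the component makes the sums come out the same as in Excel.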
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057584.91/warc/CC-MAIN-20210924231621-20210925021621-00165.warc.gz
CC-MAIN-2021-39
1,878
11
https://omicsplayground.readthedocs.io/en/latest/dataprep/uploadopg/
code
Uploading your data in Omics Playground¶ Users can import their data from the Upload data panel located under the Load Panel module. The platform requires a file with the read counts and one with a description of the samples at the minimum. An optional file with the desired contrasts can also be provided. The format of files must be comma-separated-values (CSV) text. It is important to name and format the files as explained in the previous sections.
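The read-counts file described above is essentially a gene-by-sample table in CSV form. A sketch of producing one with the csv module; the exact header layout here is a guess for illustration, not the platform's documented schema:

```python
import csv
import io

def write_counts_csv(genes, samples, counts):
    """Render a read-counts table: one row per gene, one column per sample."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["gene"] + samples)          # header row: sample names
    for gene, row in zip(genes, counts):
        writer.writerow([gene] + list(row))      # one counts row per gene
    return buf.getvalue()
```

The companion samples file follows the same pattern, with one row per sample describing its group or condition.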
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296818999.68/warc/CC-MAIN-20240424014618-20240424044618-00799.warc.gz
CC-MAIN-2024-18
454
2
https://www.gartner.com/en/documents/4003549-how-financial-services-cios-are-assessing-the-business-value-of-key-technologies
code
Gartner analyzed nearly 900 use cases from our Eye on Innovation Awards to understand what value each initiative contributed. CIOs can use this data to see how IT leaders are generating business value from their technology investments and to incorporate these ideas into their own planning. Strategic Planning Assumption Additional Research Contribution Gartner Recommended Reading Note 1: Most Common Technologies Used to Address Primary Value Clusters
s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00633.warc.gz
CC-MAIN-2021-39
453
5
http://www.oddjack.com/?certs=topics/yelp
code
An open, distributed platform as a service Updated Oct 23, 2017 yelpapi is a pure Python implementation of the Yelp Fusion API (aka Yelp v3 API). Updated Oct 20, 2017 A php client for consuming Yelp API Updated Aug 3, 2017 A Yelp-inspired single-page web app where users can CRUD businesses and reviews Updated Oct 18, 2017 🌆 TouristFriend API lets you query Google Places, Yelp and FourSquare at the same time, with bayesian rankings! Updated Aug 24, 2017 pre-commit hook terraform; pre-commit hook prometheus Updated Oct 7, 2017 An Android app that queries Yelp's API for a random restaurant near you Updated Jun 24, 2017 An Android library for the Yelp Fusion API v3 Updated May 10, 2017 Learn how to find and work with locations in Django, the Yelp API, and Google Maps api. Updated May 13, 2017 DEPRECATED - A php client for consuming Yelp API v3 (Fusion) Experiments in providing business suggestions based on Yelp ratings Updated Nov 4, 2011 Share your appreciation with other CWRU students! Updated Apr 7, 2017 PHP Client wrapper for Yelp's Fusion API Updated Jun 25, 2017 An extensive Swift wrapper for the Yelp Fusion Developers V3 API. Updated Sep 29, 2017 CWRU/Yelp Love client in golang Updated Mar 18, 2017 Yelp integration for Mixmax email client ✉️ Updated Feb 18, 2017 Updated Jun 23, 2017 Yelp Review Dataset Parser Updated Mar 26, 2017 Recommendation System for the Yelp challenge dataset Updated Feb 2, 2017 Lighter version of yelp re-created in a week Updated Apr 14, 2017 A Python wrapper for Yelp API Updated Feb 3, 2017 A yelp chatbot ... Updated Jul 10, 2017 a sample web-app with react , express Updated Jun 13, 2017 Applying the Anna Karenina to Yelp reviews Updated Jun 29, 2017 A WordPress php library for interacting with the Yelp API. Updated Dec 5, 2016 An analysis on the effect of price perception on user ratings based on Yelp commentary Updated May 3, 2017 Does being on a date impact the score on a yelp review? Let's find out! 
Updated Feb 7, 2017 iOS App EatPick Updated Feb 4, 2017 Arrange your night out! Updated Feb 13, 2017 Repository for team UMW-Goats for DE Hack U 5 Updated Feb 21, 2017
s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187826283.88/warc/CC-MAIN-20171023183146-20171023203146-00088.warc.gz
CC-MAIN-2017-43
2,139
58
https://mail.haskell.org/pipermail/haskell-cafe/2011-February/088998.html
code
[Haskell-cafe] Byte Histogram andrewcoppin at btinternet.com Sat Feb 5 16:40:58 CET 2011 On 04/02/2011 07:30 AM, Johan Tibell wrote: > Right. It can still be tricky. I think we can get rid of a large number of strictness issues by using strict data structures more often; this should help beginners in particular. For the rest, better tooling would help. For example, a lint tool that marked up code with the strictness information inferred by the compiler would be useful. If I had time to write one I would make the output look like HPC html reports, with one color for strict function arguments and one color for lazy function arguments. There's the RTS switch that makes it spit out heap profiling information. However, determining what the hell this data actually means is well beyond my powers of comprehension. I keep hoping that the mechanism used by ThreadScope will eventually allow you to compile a program with profiling, run it, and observe absolutely everything about its execution: how many cores it's using, how much RAM is allocated to each generation, etc. Then again, if you could actually single-step through a Haskell program's execution, most strictness issues would become quite shallow. Indeed, when I first learned Haskell, the very concept of laziness ever being "detrimental" was incomprehensible to me. I couldn't imagine why you would ever want to turn it off. But then I built a simple program that single-steps through Haskell(ish) expressions, and suddenly discovered that foldl' needs to exist...
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949598.87/warc/CC-MAIN-20230331082653-20230331112653-00474.warc.gz
CC-MAIN-2023-14
1,590
27
https://iitis.github.io/QSWalk.jl/latest/
code
QSWalk.jl is a package providing an implementation of open continuous-time quantum walks based on the GKSL master equation. In particular, it provides implementations of functions useful for analyzing the local interaction, the global interaction, and the non-moralizing global interaction stochastic quantum walk models. The package repository contains examples presenting most of the functionality of the package. Examples are provided as .ipynb as well as .jl files. A detailed description of the package can also be found in a manuscript available from arXiv. Descriptions of the quantum stochastic models can be found in the following papers: - Quantum stochastic walks: A generalization of classical random walks and quantum walks by Whitfield, Rodríguez-Rosario, and Aspuru-Guzik, - Superdiffusive quantum stochastic walk definable on arbitrary directed graph by Domino, Glos, and Ostaszewski, - Properties of quantum stochastic walks from the asymptotic scaling exponent by Domino, Glos, Ostaszewski, Pawela, and Sadowski.
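For reference, the GKSL (Lindblad) master equation underlying these models takes the standard form, with ħ = 1, Hamiltonian H and Lindblad operators L_k; the specific choice of the L_k is what distinguishes the local, global, and non-moralizing global interaction cases:

```latex
\frac{d\rho}{dt} = -i\,[H, \rho]
  + \sum_k \left( L_k \rho L_k^{\dagger}
  - \frac{1}{2} \left\{ L_k^{\dagger} L_k, \rho \right\} \right)
```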
s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710978.15/warc/CC-MAIN-20221204172438-20221204202438-00080.warc.gz
CC-MAIN-2022-49
1,033
8
https://proxies-free.com/can-i-obtain-my-bitcoin-wallet-files-back-from-my-hdd-from-the-old-os/
code
So I guess now is a good time to give my lost wallet files another attempt. I'm an idiot for doing such a thing, but then I heard perhaps I can unformat my HDD and retrieve my files back. I definitely have an idea of the passwords I used, so that won't be the issue. I was buying BTC around 2014-2015. I then decided to install a new mobo and I believe I wasn't able to use it unless I reinstalled Windows. I used the Windows USB to re-install everything, but I'm not sure; it was a very long time ago. I also don't know if it was the Bitcoin Core or MultiBit app that I was using on the desktop. Either way, I believe the files should be in the same place, i.e. APP DATA > ROAMING > etc etc. Is it possible to use a program like EaseUS to recover these files? Is anybody willing to help me figure out if the files are completely gone forever, or if they're still hidden under the old partition (which hopefully isn't overwritten)? Would it perhaps be under the Windows.old folder? (I'm not sure if I have that folder; I need to double check back at work.) I was using Windows 10 and nothing else, I think. If anyone can help me retrieve them, I will personally send 0.3 BTC to you from that wallet. 🙂 Thanks for reading guys. Hope you can help.
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703547333.68/warc/CC-MAIN-20210124044618-20210124074618-00173.warc.gz
CC-MAIN-2021-04
1,239
12
https://forum.daohaus.club/t/grant-proposal-moloch-digital-moloch-cloudship/10879
code
Project Title: Moloch Cloudship (Team: Moloch Digital) Description: A 3D environment accessible by VR or desktop that will silo work, break down barriers for DAO to DAO coordination and collaboration, and incentivize building on Ethereum through gamified, incentivized problem solving quests. Manifesto/Vision: Provide a rich environment for coordination and collaboration amongst communities in the web3 space, allowing for faster deployment of world changing ecosystems. Problem: Work is scattered amongst all the platforms we use and that number of new platforms seems to be growing. Existing platforms have limitations. Barriers exist that stifle DAO to DAO collaboration and coordination. Work isn’t fun. Solution: Make it all accessible in one place. Destroy limitations we deal with now by moving to a platform that solves those issues or allows the tech to solve those issues. Create a spaceship everyone can literally walk down the corridor and poke our heads in to see what’s up in each others’ daos. Guilds will also help immensely with this and is why we are building the blueprint of the Cloudship to reflecct this direction we see DAO frameworks going. Gamify the problems each DAO is dealing with, attach bounties, allow interaction with a NPC (non-player chaaracter) to walk them through a questing system that promotes creative problem solving. - Token gated access, permissions, and interactions - Dome of DAOs for one location to vote in every DAO you are apart of - The Great Library of open source documents, dApps, assets, and many other shared resources amongst all DAOs - Like real-life social engagement with yours and other communities - Community specific automated greeters and tutorials - A beautiful place to meet and work that your community dreams up - Gamified experience Validation: We have 19 backers from our mirror and giveth crowdfunds so far. There are about 50 members in our discord. Our team is made of MetaGamers including one diamond founder. 
Dozens of communities and DAOs are ready to join the project. Progress: We have a physical world blueprint as well as a map blueprint of one of the dao spaces and our commons areas. Modular assets of interiors are being designed. We have launched a headless server. Roadmap v1 is about to be released. Avatars are almost finished and integrations beginning. World permissions for community members solved. We have published the first DAO world to be onboarded to Moloch Cloudship. Differentiation (from other projects): There is no web3 environment as rich as what we are creating with the real-time collaboration abilities that Neos provides. The richness of the interactions in these workspaces are unfounded anywhere else. Atlantis World comes closest to what we are building but still lacks in its overall experience. On Moloch Cloudship, things are intuitive, seamless, and add to the experience with additional tools not found in other platforms. We are also combining all the major tools used by DAOs into one platform. No more switching back and forth through 30 tabs on 4 desktops. For example, voting happens in one room for all your DAOs, with all active proposals visible to you. Finally, we are gamifying the experience with NPCs (non-player characters) that are programmable with metadata containing issues from a DAO or many DAOs with bounties attached, paid in xp/community token. The NPCs can guide you through an interactive problem solving process through a questing system should you choose to take on that quest. Grant Request: $34,200-37,200 USD What the Funds Are For: To support Moloch Digital in slaying Moloch with coffee, tacos, and web3 tooling and environments for the betterment of humankind. Full budget breakdown found below. Help Requested: Integrate MyMeta and metadata stored within avatars in Neos. Token gating worlds with metadata stored within avatar. Reading the blockchain within Neos. 
Additional Resources, Links, Portfolio: - 1 year Carter-Zimmerman Polis Citizen/Yatima or King Kazma Board member Neos Patreon membership ~ $250/month = $3,000/$6,000 board member - "Carter-Zimmerman Polis Citizen/Yatima or King Kazma" Account - Allows full commercial use of Neos and unlimited access to the physical base universe - 78.125 Neos Credits monthly - 1.17 TB of storage - Submit 8 permanent custom Neos exit messages - All of the Architect level perks - Includes Discord benefits - Moloch Digital team pay for 1 year of development of Moloch Cloudship Commons ~ $250/member/month = $15,000 - 5 team members @$10/hour for 25 hrs of work per month - Hardware ~ $3,000/member = $20,000 - Computing upgrades for stress testing - VR headsets to test limits and compatibility - Monthly operational costs for administration and clerical accounts = $1,200 - Discord boosts = $500 1 year - Domain purchase and website management = $200 - Monthly subcriptions like typeform, miro, hackmd, etc = $500
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679100575.30/warc/CC-MAIN-20231206000253-20231206030253-00710.warc.gz
CC-MAIN-2023-50
4,922
30
http://habitat-draw.blogspot.com/2009/01/low-cost-multi-point-interactive.html
code
The technological platform proposed for construction and exploration in conjunction with VVVV is Low-Cost Multi-point Interactive Whiteboards Using the Wiimote, developed by Johnny Lee of Carnegie Mellon University in Pittsburgh, Pennsylvania: Since the Wiimote can track sources of infrared (IR) light, you can track pens that have an IR LED in the tip. By pointing a Wiimote at a projection screen or LCD display, you can create very low-cost interactive whiteboards or tablet displays. Since the Wiimote can track up to 4 points, up to 4 pens can be used. It also works great with rear-projected displays. Software, Multitouch, Building pens.
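The Wiimote's IR camera reports pen positions on a 1024×768 grid, so mapping them to screen pixels needs a calibration step. Johnny Lee's tool uses a four-point calibration (a full homography); the sketch below simplifies this to two corners, which only holds when the camera view is axis-aligned with the screen:

```python
def calibrate(cam_top_left, cam_bottom_right, screen_w, screen_h):
    """Return a function mapping raw IR camera points to screen pixels.

    Two-corner linear calibration: no rotation or keystone correction,
    unlike the real 4-point homography used by the whiteboard software.
    """
    (x0, y0), (x1, y1) = cam_top_left, cam_bottom_right

    def to_screen(point):
        x, y = point
        return ((x - x0) / (x1 - x0) * screen_w,
                (y - y0) / (y1 - y0) * screen_h)

    return to_screen
```

In practice you would record the camera coordinates of the pen touching two opposite screen corners, then feed every subsequent tracked point through the returned function.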
s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232260658.98/warc/CC-MAIN-20190527025527-20190527051527-00548.warc.gz
CC-MAIN-2019-22
646
2
http://askubuntu.com/questions/427653/broadcom-sta-wireless-driver-wont-install
code
This question already has an answer here: I am running Ubuntu 12.04 and Windows vista on my Acer Extensa 4620Z. Wi-fi works fine in vista. When I try and install the Broadcom STA wireless driver in Ubuntu it says: Sorry, Installation of the driver failed. Please have a look at the log file for details: /var/log/jockey.log. When I connected to the internet via Ethernet cable and tried to install the driver the same thing happened. I am new to Ubuntu and any help would be appreciated. In jockey.log it says WARNING: /sys/module/wl/drivers does not exist, cannot rebind wl driver DEBUG: BroadcomWLHandler enabled(): kmod disabled, bcm43xx: blacklisted, b43: blacklisted, b43legacy: blackliste
s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375095494.6/warc/CC-MAIN-20150627031815-00136-ip-10-179-60-89.ec2.internal.warc.gz
CC-MAIN-2015-27
694
6
https://www.br.freelancer.com/job-search/tell-friend-site/
code
The project is a mobile app that will consist of a virtual "me" that can be customized your way and can interact with your friends; the app can work as a social network of sorts, also letting you share photos and statuses with other friends. Your virtual "me" will live in a village that will change and grow according to the... where is my author part? quick fix im assuming Hi I want to work on a project where people across the world (in groups) greet my friend happy birthday either just the picture or short video wishing her birthday (in English or native language) and I will put it in a single video. I can hire multiple freelancers for this for each picture. Per picture cost would be 10-15$ max. My friend has a birthday party and I want you to draw a funny picture of them. Deliverables: * I want two (2) caricatures of my friend * Both caricatures should be amusing * Caricatures need to be in colour * I want the design files as well as the image file, at the end of the project What you should include in your bid * link to your portfolio * install and run c plus based linux tool and tell how it works user must add name, age, number, and choose card/ next show fortunes - user have account, profile display image age name and social link. if anybody has an app like this pls send screen Hello, I'm making my own social network for a school project and now I'm struggling with this thing about the friends list. I'm trying to add people but it just adds the first row even with the auto-increment. Work with TeamViewer read the document and tell me how you will do the recommendations part [login to view URL]!AmocZLSxjLVWiVrfdoiHCAhP6G1q I need some graphic design. Tell me the best possible video call API with text chat facility working on all platforms and browsers Visit [login to view URL] and tell me how many days you need to fix the site. The site has popups I want to fix them up. Hi. I am looking to hire someone for chatting online and sharing photos.
There will be no nude photo sharing or sexting. The reason I ask for pics is because it helps to connect. I will pay hourly basis. Chatting will be done thru this website.
s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676589557.39/warc/CC-MAIN-20180717031623-20180717051623-00160.warc.gz
CC-MAIN-2018-30
2,165
12
http://aqq-pleinair.info/46620-essay-slideshare/
code
residence hall, your high school, the home you grew up in; just any physical place. This definition was originally written by Shawn Taylor in the book. When you take these away, that means that they are no longer a part of your life (i.e. Under each college or university, you will see a tab called. Either way, have your hands ready to slap a flat surface. Stages of development from a human ovum to a fetus. Post a variety of leadership quotes around a room. If I don't know the answer, I will do my best to find a credible source to answer you. When this happens, you have already captured the reader! At any interval, the group members may discuss or question the rationale for a participant's response. The drinks are identical in every way. Better Learning Through Better Thinking. By drawing upon a striking fact that addresses the inquiry extensively, you can persuasively show your "take" on the answer.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000231.40/warc/CC-MAIN-20190626073946-20190626095946-00202.warc.gz
CC-MAIN-2019-26
944
3
http://www.factory-in-a-day.eu/project-idea/quick-overview/
code
Project objective: Reduce system integration time to one day The objective of this project is to marginalize the system integration cost by reducing the system integration time to one single day. The resultant 50% price reduction of fully integrated systems is not even the main effect. The really significant impact will be that the SME’s no longer have to earn back the investment through only one of their short production batches. In one day, the machines can be re-installed for another temporary product line and continue to be useful. Moreover, this opens up the possibility that the robots and other machinery are provided under short-term lease contracts to the SME’s, bringing the investment risk down to zero. |Factory-in-a-day is funded by the Seventh Framework Programme of the European Union.| |Supported by the Programme ICT Factories of the Future 2013 (FP7-2013-NMP-ICT-FoF). Programme under grant agreement no 609206. |Funding period: 1. October 2013 – 30. September 2017.| |Coordinator: TU Delft|
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948673.1/warc/CC-MAIN-20230327154814-20230327184814-00246.warc.gz
CC-MAIN-2023-14
1,021
7
https://www.lowyat.net/2016/100806/facebook-shows-off-its-360-degree-camera-facebook-surround-360/
code
At Facebook’s F8 developer conference, Facebook not only made announcements about its existing services such as Messenger; the company also introduced its very own 360-degree camera, the Facebook Surround 360. It consists of 17 cameras in total, allowing users to capture videos in 360 degrees. The Facebook Surround 360 looks like a spinning top. It has 14 cameras around the edge, one fish-eye camera on the top facing upwards, and two cameras pointing down. According to the company, this setup allows the camera to capture a “truly spherical video”. Once captured, software is used to stitch all the clips together to form video in 4K, 6K, and even 8K for each individual eye for stereoscopic playback. Again, this is nothing new. About a year ago, GoPro announced a 360-degree camera mount that lets users capture 360-degree videos. However, users have to stitch the videos together using separate software. Facebook, for its part, will not be selling the Surround 360. Instead, it will share the hardware design and video-stitching algorithms on GitHub, so an average user will be able to capture 360-degree videos without having to spend extra on software. They will, of course, still have to purchase their own cameras and tools to build the rig.
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334912.28/warc/CC-MAIN-20220926144455-20220926174455-00603.warc.gz
CC-MAIN-2022-40
1,291
3
https://rogerpseudonym.com/hostgator-add-mx-records/
code
HostGator is a leading Houston-based hosting provider offering marketing, shared, virtual private server, dedicated server, and clustered web hosting. It has grown significantly in the past couple of years to become one of the biggest web-hosting companies in the United States, with revenues in excess of a billion dollars a year. Among other things, it provides the features customers need in order to succeed online. This article outlines a few of these features. Customers who are unhappy with their HostGator account can request a refund within sixty days of signing up. If the refund is granted, HostGator will give a whole or partial refund of the initial deposit plus any applicable fees. In addition, HostGator's policy of handling refunds efficiently provides a thirty-day window in which to request a refund if you are not satisfied with your purchase. This refund policy also applies to unexpected problems, defective products, and shipping damage. HostGator offers a number of different hosting tiers, from the most basic free plan to the most comprehensive business package, at which point customers can create virtual servers, add bandwidth, and install advanced programs. While most of the features on offer at HostGator are easy to use, one of its most advanced features is its cPanel control panel. This feature allows users to perform a variety of functions, such as installing software, changing web hosts, adding databases, and retrieving statistics. The cPanel control panel can be used for a range of tasks on a virtual server, allowing companies to expand their operations and build out more resources. With over three hundred and fifty plans available, HostGator makes it simple to find a plan that will fit the needs of your company. 
HostGator offers a diverse range of options in both the core plans and the shared plans that it sells. The two most popular plans on offer are its Small Business and VPS plans. The former serves small to mid-sized businesses with up to five employees, while VPS lets you set up a virtual private server that is dedicated to you or a team of workers. Small businesses that need more control but do not want to pay extra fees for dedicated-server access will appreciate the Small Business plan. HostGator's Small Business plans come with standard network access, the most basic user-friendly interface for managing your virtual private servers. This includes email, support, FTP, and subdomain service, which let you manage your domains in an easy-to-use manner. HostGator also offers VDS, or virtual dedicated servers, which let you manage a dedicated server that comes with its own operating system, software, and database. One of the best features of the HostGator Small Business service is the set of extras that come with cPanel. You can get assistance from the HostGator team through phone or chat, which helps you get answers to any questions you may have about cPanel or the HostGator servers. You can publish your website through HostGator's cPanel for free. You will be able to create unlimited websites with the site-builder tools available in cPanel, which makes it easy to build a variety of different sites. If you want a specific platform for running your blog, HostGator has a blog platform that is free with its hosting plans. 
The tools you get access to through HostGator include an HTML editor, a spreadsheet program, an image gallery, a webmail client, a file manager, an FTP client, shopping-cart software, and an integrated content management system. With all these tools, it is easy to create, upload, manage, and run new websites. Because most of the programs provided by HostGator are straightforward, there is little to learn for someone who does not have much experience with web hosting. Many of the features provided by HostGator include PayPal and credit-card processing, which make it easy for users to purchase products on the site builder's site. Most hosting companies offer a free domain, but HostGator includes free domain names in most of its plans. Even if you do not need HostGator hosting for your new website, you should consider using HostGator for future uses. Even if you do not need high bandwidth or high storage, HostGator can still provide you with everything you need for a basic website. Many people use HostGator for basic site-building needs, and because it is so affordable, it is a great place to start. When you need to build a more elaborate website down the road, however, you will probably want to look elsewhere.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964363216.90/warc/CC-MAIN-20211205191620-20211205221620-00300.warc.gz
CC-MAIN-2021-49
5,036
8
https://life-improver.com/game/what-are-the-different-techniques-i-can-use-to-get-past-the-man-eating-plants/
code
What are the different techniques I can use to get past the man-eating plants, if I have no choice but to go near their attack range? Or if I want to get an experience orb or vial that is in their path? I know that Zoya's (the Thief) 'Stealth Movement' skill and Pontius' (the Knight) 'Charge' skill can be used to get through without being caught, even if you are going near the plants' attack range. Are there techniques that don't require the use of special skills? What about other special skills that can be used to get past them? (Getting past can also mean distracting the plants, so that they don't try to catch you, when you get near them.)
s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335054.79/warc/CC-MAIN-20220927162620-20220927192620-00160.warc.gz
CC-MAIN-2022-40
649
2
https://www.expatforum.com/tags/diabetes/
code
USA Expat Forum for Expats Living in the USA Sorry this is a bit long -- I want to be clear. Hope you can help. I'm considering a move from Northern Ireland to New York, given a potential relocation offer, to work in Manhattan. Although not in a VERY high-end job, it's potentially a much better future, in some ways, so I'd like to...
s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703529331.99/warc/CC-MAIN-20210122113332-20210122143332-00282.warc.gz
CC-MAIN-2021-04
335
3
http://www.slidetoplay.com/games/fight-o-plankton/
code
Fight-O-Plankton brings the classic snake game to a new sensation! In the deep deep sea, planktons are fighting for their survival. Please help those beautiful little creatures. How to Play: **Draw lines to move your plankton and grow as long as you can. **Avoid colliding with the border, the spikes, and your own body. It's simple but addictive; don't miss the ultra old-school fun!
s3://commoncrawl/crawl-data/CC-MAIN-2015-06/segments/1422115861305.18/warc/CC-MAIN-20150124161101-00180-ip-10-180-212-252.ec2.internal.warc.gz
CC-MAIN-2015-06
385
6
http://slodge.blogspot.com/2014/08/321-beta3-more-universal-updates.html
code
This build is still beta at present... I expect it'll have a few issues - so please do report them as you find them... we'll get the updates out there... The main feature of this 3.2.1-beta3 build includes some marvellous WindowsCommon support - for Jupiter WindowsPhone Xaml with Windows 8.1 Xaml. This is especially thanks to: - the lovely https://github.com/steveydee82 who's working at https://twitter.com/sequenceagency who have been pioneering lots of amazing shared code Jupiter apps - the fab https://twitter.com/pedrolamas who makes the brilliant http://cimbalino.org/ and the rest of the team who work on my music player of choice - https://twitter.com/NokiaMixRadio The support means you now must use a "new profile" like Profile 259 or Profile 78 to get working... don't blame me for this... blame Microsoft :/ If you want to try this Jupiter code, then you can now try building a Universal WindowsPhone/WindowsStore app - using the new Universal projects - using the new "WindowsCommon" assemblies inside a PCL of profile 32. I don't have any samples of this at present - but I'd love to hear more about your experiments with this - I'm interested in hearing more about your experience with this new unified Microsoft platform! 
At a more detailed level, since 3.2.1-beta1, this build also includes: - a fix for Title bindings in UIButton in iOS - some PictureChooser scaling and memory fixes (for iOS) - an infinite exception loop fix in the debug output sample files - nuget fixes for windowscommon - a default parameter added to WithConversion in fluent bindings - a null reference fix in the reflection code - when the linker has stripped out property getters/setters - a fix to improve ReloadState finding across multiple inheritance hierarchy layers - ImeAction.Previous has been changed to match Xamarin's change of Android version - Json now has `ReferenceLoopHandling = Newtonsoft.Json.ReferenceLoopHandling.Serialize` set by default - A fix for double queryString escaping in WindowsPhone navigation - A fix for empty cc lists in the email plugin in iOS - An optimisation of resource image loading (fromBundle instead of fromFile) - A fix for UIDatePicker centering in MT.Dialog - An attempted fix for weak ref issues with CanExecuteChanged in ICommand in iOS - A fix for multiple file flushes in WriteFile in the File plugin This 3.2.1-beta3 update also included some attempts at getting Symbols uploaded for nuget too - but this isn't quite finished yet... seems like this nuget functionality doesn't work without a little effort for multiple assemblies in the same nupkg. This 3.2.1-beta3 build doesn't include any Xam.Forms support - https://twitter.com/Cheesebaron has pushed a fab sample about that to https://github.com/Cheesebaron/Xam.Forms.Mvx/ - beyond that Xamarin have also said there are some Mvx/Forms combination samples coming, but I don't have any inside info on these. https://twitter.com/Cheesebaron has also done some fabulous Fragment changes recently - https://github.com/MvvmCross/MvvmCross/pull/771 - expect these to be in 3.2.1 soon too :) OK... that's all from me for now... good luck with the updates :)
s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463607620.78/warc/CC-MAIN-20170523103136-20170523123136-00636.warc.gz
CC-MAIN-2017-22
3,156
27
https://www.freelancer.ph/projects/python/improve-existing-machine-learning-binary/
code
I need someone to review and optimise an existing training model for an ensemble binary classifier in python. The classifier is for a lending app, with the outcome variables being ‘paid’ or ‘default’. The data is provided in a pre-processed format, in two files: 1. A flat file with one row for each item in the training set and c.40 variables 2. A short time series of a single variable A description will be provided for all variables. Also provided is the existing python code of the training model. It is a basic ensemble model combining some rudimentary out-of-the-box python machine learning algorithms. Key performance metrics will be the AUC, Precision, and Recall. Elements that will be required: • Improving performance of model • Feature selection to ensure generalisation/robustness • Better hyper-parameter tuning • Model sensitivity testing • Determining appropriate weights for the ensemble Please let me know what is your bid for this work, and how long you think it will take. The process of work, and the milestone payments, will follow this order: 1. Review of existing model and short description of plans for work. (10%) 2. Progress report at half-way (40%) 3. Final output. (50%) The dataset and training model files will be provided once the project has been awarded. 41 freelancers are bidding on average $204 for this job Hi, I am an expert in Python. I made machine learning binary classification in python previously. I can do your project perfectly. Waiting for your response. Thanks. Hey there, I am expert in Machine and Deep learning, check out my profile for reviews about satisfied clients related to these projects. Feel free to inbox me any time. Thank you. Hi, I have worked on ML projects like anomaly detection in smart home energy usage & housing price prediction. I would like to work on your project. Let me know if you want to discuss further. 
Regards, Monir Hi,i'm a data scientist with a BS degree in computer science i will do your task as fast as i can and i will achieve it exactly as you want,don't worry about any thing contact me for discussion Hi, Dear How are you doing? I am very interested in your project. I am always ready for you. I wish you contact me as soon as possible. Let us discuss your project on chat in detail. Thanks for your regards.
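The posting's core pieces, a weighted ensemble of member classifiers plus the AUC/Precision/Recall metrics it names, can be sketched in a few lines of plain Python. This is illustrative only: the real model, member algorithms, weights, and data are not part of the posting, and the numbers below are made up:

```python
def ensemble_proba(member_probs, weights):
    """Weighted soft vote: average each member's P(default) per example."""
    total = sum(weights)
    return [sum(w * p[i] for w, p in zip(weights, member_probs)) / total
            for i in range(len(member_probs[0]))]

def precision_recall(y_true, proba, threshold=0.5):
    """Precision and recall at a fixed decision threshold."""
    tp = sum(1 for y, p in zip(y_true, proba) if y == 1 and p >= threshold)
    fp = sum(1 for y, p in zip(y_true, proba) if y == 0 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, proba) if y == 1 and p < threshold)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def auc(y_true, proba):
    """AUC as the probability a positive outranks a negative (Mann-Whitney form)."""
    pos = [p for y, p in zip(y_true, proba) if y == 1]
    neg = [p for y, p in zip(y_true, proba) if y == 0]
    wins = sum(1.0 if pp > pn else 0.5 if pp == pn else 0.0
               for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two members scoring four loans (1 = default, 0 = paid).
y = [1, 1, 0, 0]
member1 = [0.9, 0.8, 0.2, 0.1]
member2 = [0.7, 0.6, 0.4, 0.3]
combined = ensemble_proba([member1, member2], weights=[2.0, 1.0])
```

Determining the ensemble weights, the last bullet in the brief, then becomes a search over the `weights` argument, scoring each candidate with `auc` on a held-out set.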
s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027314641.41/warc/CC-MAIN-20190819032136-20190819054136-00289.warc.gz
CC-MAIN-2019-35
2,320
26
https://help.relativity.com/Server2021/Content/Relativity/Assisted_Review/QC_Round.htm
code
A quality control round is intended to provide reviewers with documents that test the accuracy of the system's categorization. This page contains the following sections: - Executing a QC round - Reviewing documents for a QC round - Reviewing Sample-Based Learning reports during a QC round - Evaluating overturns and making corrections - Finishing a QC round To execute a quality control round: - Click Start Round on the console. - Select Quality control as the Round Type. - For the Saved search for sampling, select a saved search containing categorized documents (e.g., <Project Saved Search> - Categorized). See Viewing categorized and uncategorized documents for your Assisted Review project for more information. - Specify your desired Sampling Methodology settings. The sample set is the randomly-selected group of documents produced to be used for manual review as a means of training the system. Note: The fields in the Sampling Methodology section are defaulted to the values on the project settings; however, if you select Training as the round type, you override those default values. - Statistical sampling - creates a sample set based on statistical sample calculations, which determine how many documents your reviewers need to code in order to get results that reflect the project universe as precisely as needed. Selecting this option makes the Margin of error field required. - Confidence level - the probability that the rate in the sample is a good measure of the rate in the project universe. This is used in the round to calculate the overturn range as well as the sample size, if you use statistical sampling. - Margin of error - the predicted difference between the observed rate in the sample and the true rate in the project universe. This is used in the round to calculate the overturn range as well as the sample size, if you use statistical sampling. - Percentage - creates a sample set based on a specific percentage of documents from the project universe.
Selecting this option makes the Sampling percentage field required. - Sampling percentage - the percentage of the eligible sample population used to create the sample size. - Fixed sample size - creates a sample set based on a specific number of documents from the project universe. Selecting this option makes the second Fixed sample size field required. - Fixed sample size - the number of documents you want to include in your sample size. Clicking Calculate sample displays the number of documents in the saved search selected for the round and the number of documents in the sample. If the values for the sample and/or saved search are unexpected, you can change any setting in the Start Round layout and re-calculate before clicking Go. You can't calculate the sample if you don't have access to the saved search selected for the round. This button is disabled if you've selected Stratified sampling as the sampling type. - Automatically create batches - determines whether or not a batch set and batches are automatically created for this round's sample set. By default, this field is set to whatever value was specified in the project settings. Once the sample set has been created, you can view and edit the corresponding batch set in the Batch Sets tab. - Maximum batch size - the maximum number of documents that the automatically-created batches will contain. This is required if the Automatically create batches field above is set to Yes. This value must be greater than zero or an error appears when you attempt to save the project. The batch set and batches created from this project are editable after you create the project. By default, this field is set to whatever value was specified in the project settings. Note: When the round is created, the field specified as the Use as an example field is set to Yes by default for documents included in the round. If you delete a round, Assisted Review reverts the Use as an example field value to Not Set (null). 
When reviewing documents for a QC round, you only review categorized documents. You are testing the accuracy of the categorized results of your project. Note: If you're done using a project, it's better for workspace performance if you finish the round rather than leaving it in a status of either Review in Progress or Review complete. After the QC round is started and all the documents in the sample are coded, admins read the reports and then assign out the seed documents to be corrected. Reviewers make the corrections to any seed documents that are causing issues (see Evaluating overturns and making corrections). The following reports should be reviewed after QC round sample documents have been coded but before finishing the round: - Round Summary report – useful after categorization because it shows the changes in categorization percentage from round to round. Also provides categorization volatility. See Round Summary report. - Control Set Statistics report – tracks progress of precision and recall and F1. Also gives the Summary of Rounds. See Control Set Statistics report. - Overturn summary report – tracks overturn percentages round to round. There are no overturns prior to a QC round. See Overturn Summary report. - Viewing overturned documents - The Overturned Documents view permits an Assisted Review admin to view documents that require re-evaluation quickly and efficiently. You can focus on a single round and filter by the highest ranking overturns. You may also use the pivot feature to see the most influential seed documents (documents that are responsible for a large number of overturns). Once you identify documents as needing further analysis, you can click on a link in order to review the document immediately. See Viewing overturned documents. - Rank Distribution report – shows level of conceptual similarity between human coded documents and the overall categorized documents. See Rank Distribution report.
- Project Summary report – tracks overall project health. You can see a snapshot of overturn and categorization results as well as control set statistics in one place. See Project Summary report. Note: If issues are also being categorized by Assisted Review, you can also review the Issue Reports. During a computer-assisted review, a case team moves through several rounds of coding to train the system on the document collection and validate the computer's results. This isn’t a formal round, but a between-rounds workflow used to make any necessary adjustments to the project to prepare for the next round. It consists of identifying, analyzing, and correcting (re-coding) documents which have a significant and adverse effect on project results. You are finding the seed documents that caused the overturns and then making any coding corrections to those seed documents that need to be made. Potential coding errors are reported in the Overturned Documents link in a Relativity project’s console. The Overturned Documents view permits an Assisted Review admin to view documents that require re-evaluation quickly and efficiently. You can focus on a single round and filter by the highest ranking overturns. You may also use the pivot feature to see the most influential seed documents (documents that are responsible for a large number of overturns). Once you identify documents as needing further analysis, you can click on a link in order to review the document immediately. See Viewing overturned documents. Note: We recommend that you make these adjustments prior to finishing a round and categorizing documents. The system can then make use of the corrections performed, and then apply them to the next true round. Consider the following when your reviewers are evaluating overturns and making corrections: Correcting coding inconsistencies between true or conceptual duplicates: - Each seed-overturn pair has a rank (or score) which indicates the degree of conceptual similarity they share.
The maximum possible score is 100, which means the two documents are conceptual duplicates. Conceptual duplicates are documents which may or may not have identical text, but do contain the same conceptual content according to the analytics index. While it is possible that conceptual duplicates may also be exact textual duplicates (i.e., documents with the same MD5 hash value), this should not be assumed from a score of 100. - We recommend that you use the Overturn Documents report to locate these documents by filtering on the round and sorting by descending rank. A good best practice is to re-evaluate each seed-overturn pair having a rank of 95 and higher to see which document was coded correctly, as well as whether each is a suitable example. Identifying and correcting the most influential seed documents: - When viewing overturn reports at the end of a round, the same few documents can be responsible for many overturns. If those seed documents were incorrectly coded, they can greatly inflate the overturn rate for the entire project. Finding and correcting these situations is an essential component of QC round protocol. - The quickest way to find the most influential documents is by using Pivot on the Overturned Documents report. Simply choose Seed document in the Group by drop-down list and leave the <Total Only> drop-down list as is. Using the Overturn Analysis related items pane: - Once a document has been targeted for re-evaluation during a QC round, you can navigate directly to it using the hyperlinks in the Overturned Documents report. Once you reach the core reviewer interface, open the Overturn Analysis related items pane by clicking the Assisted Review icon in the bottom right corner. - Clicking the file icon next to the document's control number opens the target document in a separate viewer window. A reviewer can compare the two documents side by side to assist in the decision-making process. 
Note: The Overturned Documents view is helpful for review management, but you may want to prevent users from seeing the rest of the project when they only need access to overturns. You can also provide reviewers access to overturns via the field tree, which includes an overturn status field for your project. To pursue this option, create a view that can be used in conjunction with the field tree. Once all of the documents in the sample set have been coded, you should finish the round. You also have the option of finishing a round before all of the sample set documents have been coded. To finish a QC round: - Click Finish Round on the console. - Specify whether you want to categorize documents when you finish the round. You have two options depending on your project: - Categorize for designation - categorize all documents in the project based on their designation coding. - Categorize for issues - categorize all documents in the project based on their issue coding. This is only available if you have added a key issue field to the project and a reviewer has issue-coded at least one document in the sample set. - Specify whether you want to save categorization results from the previous round when you finish the current round. You may have two options depending on your project: - Save designation results - save the results of designation coding from the previous categorization. This is useful because when categorization runs, the previous results are cleared in order to apply the new category values. You can't save designation results if you did not categorize designations in a previous round. - Save issue results - save the results of issue coding from the previous categorization. This is only available if you have added a key issue field to the project. You can only save issue results if you categorized issues in a previous round. Note: You shouldn't save results at the end of every round. 
Saving results, especially for larger cases, can add several hours to the time it takes to finish the round. - Enter the naming for your categorization results. - Categorization results set name - the name of the categorization results set. By default, this is the name of the previous round. This is only available for editing if you are saving designation and/or issue results. - Categorization results set description - a description of the categorization results. This is only available for editing if you are saving designation and/or issues results. - Click Go. If you choose to both categorize and save results, the saving of results is performed first, then categorization.
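The Confidence level and Margin of error fields described above drive the statistical sample size. The conventional calculation is the normal-approximation formula for estimating a proportion, with a finite-population correction; the sketch below uses that standard textbook formula and is not necessarily Relativity's exact implementation:

```python
import math

# z-scores for common two-sided confidence levels
Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

def sample_size(population, confidence=0.95, margin_of_error=0.05, p=0.5):
    """Sample size for estimating a proportion within the given margin of error.

    p=0.5 is the conservative worst-case assumption for the unknown rate.
    """
    z = Z[confidence]
    n0 = z * z * p * (1 - p) / (margin_of_error ** 2)  # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)               # finite population correction
    return math.ceil(n)
```

With a 95% confidence level and a 5% margin of error, even a universe of a million documents needs only a few hundred sampled documents, which is why statistical sampling keeps QC rounds tractable regardless of project size.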
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320305423.58/warc/CC-MAIN-20220128074016-20220128104016-00707.warc.gz
CC-MAIN-2022-05
12,423
65
https://toolant.com/products/okiaas-a6-cut-resistant-work-gloves
code
Free Shipping over only $35 No Minimum Shipping Quantity Required - 【ENHANCED SAFETY】Made of HPPE blended with spandex, steel wire and nylon. Comply with ANSI cut resistance Level 6, provide industry grade hand protection. - 【GOOD ABRASION RESISTANCE】Reinforced coating between thumb and index fingers offers extra endurance. Smart touchscreen design allows working with tablet or phone, work gloves for men and women. - 【FIRM GRIP & TACTILITY】Rough black sandy nitrile finish on palm and fingers guarantees excellent anti-slip performance when holding glass/pipes, even in wet or oily conditions. - 【HIGHLY VERSATILE】Reusable cut resistant gloves for handling knife, concrete, metal, blade, glass and building material. Ideal for both heavy and light duty jobs: mechanic, construction, garden/yard, wood working, whittling, warehouse, auto repair, fishing, HVAC, carpentry, metalworking, resin work, carving, etc. - 【SIZE TIP】At OKIAAS, we center on customer satisfaction. Referring to Size Guide and Measuring your hand is recommended before your purchase.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358774.44/warc/CC-MAIN-20211129134323-20211129164323-00444.warc.gz
CC-MAIN-2021-49
1,078
7
https://sharonshowcase.blogspot.com/2020/04/blank-page-muse-stamps-decorated-bag.html
code
Pop over to the Blank Page Muse Stamps Blog to see how to make these little decorated gift bags - perfect for little Easter gifts and quick and easy to make! And don't forget there is a fab discount available when you purchase in store!!!!! Don't forget to check us out at all of our social media sites The Blank Page Muse- https://blankpagemuse.com/ FB Fan Page- https://www.facebook.com/groups/blankpagemuse/ Instagram Shop- https://www.instagram.com/blankpagemuse/ Instagram Blog- https://www.instagram.com/blankpagemuseblog/
s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487637721.34/warc/CC-MAIN-20210618134943-20210618164943-00274.warc.gz
CC-MAIN-2021-25
528
7
http://www.georgeroyer.com/interstate-2-1/
code
Powertrace was the result of one of my graduate level design courses. We focused on different methods of understanding a space and its use by mapping human activity. For my final project, I designed a mobile game that used the metaphor of ghost-hunting to create persistent entities that represented overuse of energy. This project was intended to creatively promote sustainable energy usage on the campus of the University of Texas. Download a high-resolution image of the entire poster here.
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560628000266.39/warc/CC-MAIN-20190626094111-20190626120111-00167.warc.gz
CC-MAIN-2019-26
493
3
https://www.analyticsvidhya.com/blog/2015/05/infographic-quick-guide-learn-python-data-science/
code
Infographic – Quick Guide to learn Python for Data Science

A situation has been described below. Has it ever happened to you? I wanted to learn Python for Data Science, so I googled ‘I want to learn Python for data science’. Google effortlessly provided the links to all the resources to learn Python. Then, you get bemused by the innumerable links available to learn Python. Eventually, you end up contemplating, ‘From where should I begin now?’ Yes? Don’t worry, because you will never again face such situations. There is a plethora of resources available to learn programming and data science in Python, but it is difficult to find a structured approach to master this language. To solve these problems, we launched a learning path for data science in Python. Today, we take this one step forward and provide you with an infographic for the same. Feel free to circulate this to your friends or take a print out and keep it on your pin-board! Download the PDF version of this infographic and save it on your computer by clicking here –> Data Science in Python.pdf To view the comprehensive version of this learning path, click here: Python learning path resources. Once you complete the Beginner Level, read the baby steps guides below and proceed to the next level. Once you reach step 4, follow the baby steps guide shared below. After this, proceed as per the infographic. In case you face any difficulty in learning Python, feel free to ask us here.
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00002.warc.gz
CC-MAIN-2023-50
1,466
12
https://ti-qa-archive.github.io/question/66301/textmate-titanium-intellisense-auto-code-completion.html
code
On this URL, it says the following needs to be done:
- make sure the Ruby JSON gem is installed: sudo gem install json
- cd /Users/[your username]/Library/Application\ Support/TextMate/Bundles
- reload your bundles or restart TextMate

Could someone point me in the right direction as to where you do the "sudo gem install json"? Terminal? If so, what path? Anything else I need to know?

Yes, you would run that from the Terminal. You could use the ~ to go into your user folder without having to type it in full, like this:
cd ~/Library/Application\ Support/TextMate/Bundles
Then pull the bundle from github while still in Terminal. Now make sure the bundle that was pulled has the file extension tmbundle or TextMate won't see it. In terminal do this and see if you have something like ti-mo.tmbundle in there. If you do, you are good to go. If not, locate the thing that you think is the bundle and rename it to ti-mo.tmbundle. Replace somefilename with the file that represents the item you just fetched through github.
mv somefilename ti-mo.tmbundle
Now in TextMate go to Bundles/Bundle Editor/Reload Bundles. At that point you try to get code completion by clicking option and esc. If you don't see a list related to Titanium pop up, you probably need to install the gem for JSON. You would also do this in the Terminal, but the path doesn't matter because it should find the correct path on its own.
sudo gem install json
I have seen some notes that claim you may need to install json_pure if the json gem doesn't work for you. If that's the case, do this:
sudo gem install json_pure
Hope that helps

Fantastic answer… and thank you for taking the time to answer it that carefully. I'm sorry but… I could not test the suggestions… the Application Support/TextMate folder is not there! The TextMate folder under the Application Support folder is missing. For some reason 1.5.9 (which is the latest download from the TextMate web site) does not create that folder. I tried two installs, no luck.
I thought this would have been created automatically. I'm new to Mac and I get confused between the multiple "/Library" folders that seem to be repeated in various locations. As a result, I see multiple locations for the "Application Support" folders when I search for them. I always log in as "admin" and there are no other users with admin privileges on this Mac.

I've got two Application Support folders… one at "Macintosh HD/Library/Application Support" - this one contains 15 sub folders ranging from Apple to Titanium. I also have this; - and this one contains 24 sub folders ranging from "AddressBook" to "Transmission". Note that the Titanium folder is also listed here as well. I get confused between the two Application Support folders above, which are both accessible from the "finder" window, one through places>admin, one through devices>Macintosh HD. But BOTH ARE MISSING THE TEXTMATE FOLDER! - even though I just used TextMate and created a test file with it.

Lots of questions here… Where is TextMate's Application Support folder? Should it be in one of these locations? And why are there multiple Application Support folders? Which one do I pay attention to (I always log in as "admin")? Isn't there a trick to create the proper app support TextMate folder by installing a test plugin/bundle using the TextMate user interface, which would give us a fast track to bypass all these issues? Basically, I'm stuck at this level.

The Library folder that is on the root of your drive is the one that is for all users, and the others are for the users; in your example there would be the following folders:
/Library/Application\ Support (all users can see this one)
/Users/admin/Library/Application\ Support (the user named admin sees this one AND also the all users folder)
There is no hard-and-fast rule about which folders you ought to use, but generally you want files in your user folders.
The ~, BTW, is shorthand for your user folder, so typing cd ~ is the same as typing cd /Users/[your username], and it's the quickest, easiest way to get into your folders. I noticed a comment about there being a Bundles folder in the app directory, e.g. TextMate.app. You'll want to avoid modifying the contents of that folder because it can and will get replaced when you install new versions of TextMate. The safest place is the Application Support folder for the app. I assume since your Bundles folder is missing you can just create one and then continue the installation of the Intellisense bundle. This next command would be your new first step, and then continue with the rest:
mkdir ~/Library/Application\ Support/TextMate/Bundles
Yes, there are two Titanium folders in Application Support. I assume the one in the root is there so you can log in as different users and get Titanium running. The one in your user folder is just for you, and that is where updates for Titanium's SDK are installed. Found that out the hard way lol. Did I miss anything?

This took care of the missing Application Support folder and installing the bundle etc. Now TextMate does recognize this bundle. But… as expected, typing "Ti." does nothing. Obviously, the "sudo gem install json" is needed. So, I opened up the Terminal and right there and then (at the prompt, without changing any directories), I typed "sudo gem install json" and hit enter. Terminal asked for a password. That's where my buck stopped. My password starts with a ' character, and the Terminal just does not accept it as a valid character when entering a password. Since sudo passwords have a limit of 3 attempts, I felt that this little project started getting near to dangerous territories. At this time, I chose to stop. I will change my admin password and then try again ONE MORE TIME and for the last time. Any suggestions? And isn't there another (a UI) way of doing this right from within the TextMate UI?
Well, changing the admin password was unnecessary. When my Terminal asks for a password, it simply does not take any character as input. There is a gray cursor that sits right in front of the word "password:" but no characters are taken at that point. If you hit enter (which I already did once), a bad attempt is recorded and I get a warning that I can try the admin password only 3 times. What could be the reason that this smart program asks for a password and then takes no input (except hitting the enter key, which it takes as a blank password and counts as a bad attempt)? I'm baffled. Here are the steps to reproduce that problem: I open up Terminal and at the admin$ prompt, I enter "sudo gem install json". In response I get this:
WARNING: Improper use of sudo command could lead to data loss or the deletion of some important system files. Please double-check your typing when using sudo. Type "man sudo" for more information.
To proceed, enter your password, or type Ctrl-C to abort.
s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400206329.28/warc/CC-MAIN-20200922161302-20200922191302-00747.warc.gz
CC-MAIN-2020-40
6,852
65
https://forum.cogsci.nl/discussion/5516/a-repeated-measures-analysis-bayesian-or-otherwise-with-dependent-measurements
code
A repeated measures analysis (Bayesian or otherwise) with dependent measurements

Hi JASP experts,

This is a general question about assumptions that, I think, is applicable to both Bayesian and traditional repeated measures ANOVAs. @Cherie and I have eye-movement data of participants searching through a set of books. I'll simplify the design a bit for the sake of the discussion, but we can provide the actual data if that's useful. There are two book categories, A and B. For each trial we have quantified the gaze duration on each of the categories, giving two measures per trial. These measures are dependent, because if they look at A then they cannot look at B. In other words, high gaze durations for A are predictive (though not perfectly) of low gaze durations for B, and vice versa. Then we have an experimental condition with two levels, X and Y. We're interested in whether this condition affects gaze duration, such that participants look more at A in condition X and more at B in condition Y.

An intuitively appealing way to analyze this is with a repeated measures ANOVA, in which we treat book category as a factor, so we have a 2 (book category: A, B) × 2 (condition: X, Y) design with gaze duration as dependent measure. And then we'd be interested in the book category × condition interaction (not in the main effects of book category or condition). Now here's where things get tricky.
- I'm pretty sure that it's ok to look at the main effect of condition on gaze duration, because X and Y are independent.
- I suspect that it's problematic to look at the main effect of book category on gaze duration, because A and B are not independent. But I'm not 100% sure about this.
- And what about the book category × condition interaction. Is that valid? And if not, how would we ideally analyze a dataset like this?

I find it hard to wrap my head around this issue, so I really hope that someone can shed some light on this for us!

How dependent are A and B?
If they are completely dependent (say 100% gaze = GazeA + GazeB), then there is no need to put both measurements into the model - the intercept will give an indication for both, the main effect for condition (X/Y) will actually be the interaction, with the conditional means in X and Y the effect of A vs. B. You can also convert your measurements so that this ^ is true: DV = GazeA / (GazeA + GazeB)

Thanks for your reply. 😄 The gaze durations on A and B are somewhat dependent, but not perfectly. (If they were, we could indeed recode it without losing data.) Basically, there are three possibilities: And the measures that we have are proportional gaze durations for A and B across a trial, which are generally values in the 0.1 to 0.3 range. So to restate the main question: Given this scenario, is it acceptable to treat this as a 2 (book category: A, B) × 2 (condition: X, Y) design with gaze duration as dependent measure?

Hmmm... Given your data and design, probably the most correct analysis would be a multinomial logistic regression... But let's stick to an ANOVA-like design. It seems %A and %B are dependent (negatively). You can deal with this dependence in two ways: This would mean you use a linear mixed model with a random intercept by trial (accounting for the differences between trials in %neither-A-nor-B), and a random slope for Book by trial (accounting for the negative dependence within each trial). In lme4-type formula, your model would look like this (with (...| Subject) indicating any within-subject effects):

Thanks for this. That makes sense. Our design is actually a little more complicated than what I described here, in the sense that there are four book categories, and four conditions. Does that make any difference for your proposed approach? I don't think this should matter.
But, upon further reflection, the lme4 formula should be: to account for the fact that the trials are nested in subjects (and a random book effect per subject), or, if you suspect there may be any random effect for trials across subjects:

> to account for the fact the trials are nested in subjects (and a random book effect per subject)

Right, I was thinking about that too, but I was unsure how to indicate that. Thanks for all your help!
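The proportion conversion suggested earlier in the thread, DV = GazeA / (GazeA + GazeB), can be sketched in a few lines. This is an illustration only; trials where neither category was fixated yield no defined value:

```python
def gaze_share(gaze_a, gaze_b):
    """Relative preference for category A within a trial.

    DV = GazeA / (GazeA + GazeB), as suggested in the thread.
    0.5 means no preference; values toward 1 mean more looking at A.
    """
    total = gaze_a + gaze_b
    if total == 0:
        return None  # neither book category was looked at on this trial
    return gaze_a / total
```

With proportional gaze durations of 0.1 to 0.3 per category, as described above, this collapses the two negatively dependent measures into a single dependent variable per trial.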
s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00617.warc.gz
CC-MAIN-2023-14
4,201
36
https://beyond-gdp.world/wise-database/about-the-wise-database
code
About the WISE Database The WISE Database is a repository of Beyond-GDP indexes and indicators. It is hosted by the Institute for Environmental Sciences Leiden (CML) of Leiden University, The Netherlands. The WISE Database has been under development since 2020 when a group of “Citizen Scientists” of the Wellbeing Economic Alliance (WEAll) started collecting data on various Beyond-GDP metrics after the publication of a WEAll briefing paper. The work of the Citizens Science project was also supported by the United Nations University (UNU) WISE Transformation Initiative. The main current source of funding is the WISE Horizons project, which is a Horizon Europe Research & Innovation Action (GA 101095219) funded by the European Commission. The project started on January 1st 2023. The WISE Horizons consortium members are the Institute of Environmental Sciences, ZOE Institute for Future-fit Economies, Paris School of Economics, The Centre for the Understanding of Sustainable Prosperity (CUSP) at the Universities of Surrey, University of York, SINTEF, the Centre for Applied Research in Botswana and Tsinghua University. We also encourage you to sign up for WISE Horizons Network updates. The current iteration of the WISE database is also supported by Chinese Scientific Council (CSC). We would like to express our gratitude for all the financial and in-kind support from all our sponsors, past and present.
s3://commoncrawl/crawl-data/CC-MAIN-2024-18/segments/1712296816045.47/warc/CC-MAIN-20240412163227-20240412193227-00775.warc.gz
CC-MAIN-2024-18
1,420
5
https://www.amazon.jobs/en/jobs/1836161/applied-scientist?cmpid=bsp-amazon-science
code
Amazon Advertising is one of Amazon's fastest growing and most profitable businesses. As a core product offering within our advertising portfolio, Sponsored Products (SP) helps merchants, retail vendors, and brand owners succeed via native advertising, which grows incremental sales of their products sold through Amazon. The SP team's primary goals are to help shoppers discover new products they love, be the most efficient way for advertisers to meet their business objectives, and build a sustainable business that continuously innovates on behalf of customers. Our products and solutions are strategically important to enable our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day! The Response Prediction team builds machine-learning models and infrastructure to support the Sponsored Products Ads business. Through precise estimation of shoppers' response to ads (e.g. clicks or product purchases), this team helps deliver the most relevant ads experience to shoppers, improves advertisers' ROI, and optimizes Amazon's long-term monetization. Additionally, this team builds and operates one of the largest ML workflows in WW Advertising, serving Search and Detail Pages. This team also owns the horizontal ML infrastructure to support various ML use cases - from offline ML pipelines to online model inference and model management services. As an Applied Scientist on this team, you will:
· Build end-to-end machine learning solutions; build ML models and perform data analysis to deliver scalable solutions to business problems.
· Perform hands-on analysis and modeling with enormous data sets to develop insights that increase traffic monetization and merchandise sales without compromising shopper experience.
· Work closely with software engineers on detailed requirements to productionize the ML models you build.
· Run A/B experiments that affect hundreds of millions of customers, evaluate the impact of your optimizations and communicate your results to various business stakeholders.
· Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation and serving.
· Research new, innovative machine learning approaches.

Why you will love this opportunity: Amazon is investing heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions, millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit - with a broad mandate to experiment and innovate.

Impact and Career Growth: You will invent new experiences and influence customer-facing shopping experiences to help suppliers grow their retail business and the auction dynamics that leverage native advertising; this is your opportunity to work within the fastest-growing businesses across all of Amazon! Define a long-term science vision for our advertising business, driven from our customers' needs, translating that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.
Team video https://youtu.be/zD_6Lzw8raE · PhD or equivalent Master's Degree plus 4+ years of experience in CS, CE, ML or related field · 2+ years of experience of building machine learning models for business application · Experience programming in Java, C++, Python or related language · Advanced degree in Computer Science, Mathematics, Statistics, Economics, or related quantitative field. · Published research work in academic conferences or industry circles. · Experience in building large-scale machine-learning models and infra for online recommendation, ads ranking, personalization, or search, etc. · Technical leadership in machine learning. · Effective verbal and written communication skills with non-technical and technical audiences. · Experience working with real-world data sets and building scalable models from big data. · Thinks strategically, but stays on top of tactical execution. · Exhibits excellent business judgment; balances business, product, and technology very well. Experience in computational advertising. · Experience in building large-scale machine-learning models for online recommendation, ads ranking, personalization, or search, etc. · Experience with Big Data technologies such as AWS, Hadoop, Spark · Strong proficiency with Java, Python, Scala or C++ · Experience in computational advertising technology is a big plus · Published research work in academic conferences or industry circles · Excellent oral and written communication skills Amazon is committed to a diverse and inclusive workplace. Amazon is an equal opportunity employer and does not discriminate on the basis of race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status. For individuals with disabilities who would like to request an accommodation, please visit https://www.amazon.jobs/en/disability/us.
s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320299927.25/warc/CC-MAIN-20220129032406-20220129062406-00434.warc.gz
CC-MAIN-2022-05
5,514
30
https://worldbuilding.stackexchange.com/questions/113752/how-can-i-take-over-the-country
code
I live in a western democratic country and I'm rather discouraged. I don't feel the current political system works and have decided to take matters into my own hands. My goals are: - To use the current economic/political system as a springboard to achieve complete autocratic rulership. - To avoid a violent uprising, to use the "slowly cook a frog" analogy I would prefer a series of small steps to an overnight coup. - I am happy to reward a small number of trusted lieutenants and am prepared to deal with them severely should they fail or betray me. I had thought of using some kind of security breach (either physical or digital) to create a feeling of panic and prejudice among the general population. If I could engineer a solution to this problem then perhaps I could emerge as some kind of hero? Please help me achieve my goal of absolute power!
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474440.42/warc/CC-MAIN-20240223153350-20240223183350-00498.warc.gz
CC-MAIN-2024-10
854
7
http://www.techpository.com/linux-installing-dmks-on-red-hatcentos/
code
Linux: Installing DKMS on Red Hat/CentOS

– Dynamic Kernel Module Support (DKMS) is a framework used to generate Linux kernel modules whose sources do not generally reside in the Linux kernel source tree. DKMS enables kernel device drivers to be automatically rebuilt when a new kernel is installed.
– An essential feature of DKMS is that it automatically recompiles all DKMS modules if a new kernel version is installed. This allows drivers and devices outside of the mainline kernel to continue working after a Linux kernel upgrade.
– Another benefit of DKMS is that it allows the installation of a new driver on an existing system, running an arbitrary kernel version, without any need for manual compilation or precompiled packages provided by the vendor.
– DKMS was written by the Linux Engineering Team at Dell in 2003. It is included in many distributions, such as Ubuntu, Debian, Fedora, and SuSE. DKMS is free software released under the terms of the GNU General Public License (GPL) v2 or later.
– DKMS supports both the RPM and DEB package formats out-of-the-box. (from Wikipedia)

I was trying to install Guest Additions on my CentOS operating system but faced a lot of problems. The idea is very simple: all you have to do is install the DKMS package on your CentOS operating system and run the Install VirtualBox Guest Additions setup. The main problem is that the dkms package is not available in the stock CentOS repositories; it comes from a third-party repository. So I believe there are a lot of new users who face this issue (I being one of them). The following steps will help in installing Guest Additions on your CentOS.

Step 1: Update everything (though not really required, I still took this step first).

Step 2: Make a directory rpm using the following commands, go into that directory, and download the rpm package from this link, or go to http://pkgs.repoforge.org/rpmforge-release/ and download the appropriate package.
$ mkdir rpm
$ cd rpm
$ rpm -i rpmforge-release-0.5.2-2.el5.rf.*.rpm
$ yum install htop

Now if you get an error something like this:
error: Failed dependencies:
rpmlib(FileDigests) <= 4.6.0-1 is needed by rpmforge-release-0.5.2-2.el6.rf.i686
rpmlib(PayloadIsXz) <= 5.2-1 is needed by rpmforge-release-0.5.2-2.el6.rf.i686

That means you have installed your CentOS virtual machine from Cloudera, which is CentOS 5, and you have downloaded the rpm package for CentOS 6, so all you have to do is change that package and download the package for CentOS 5. You can also check if you are running a 32-bit machine or a 64-bit machine, as there are two packages: one for 32-bit machines and the other for 64-bit. To check which machine you are running, just type the following command: if you get i386 or i686, that means you are running a 32-bit machine, and if you get x86_64, that means you are running a 64-bit machine.

Step 3. Install kernel-devel:
$ sudo yum install kernel-devel

Step 4. So almost everything is done and you are ready to install the dkms package:
$ sudo yum install dkms
If everything goes fine, the dkms package will install successfully, without any issues.

Step 5. This will be the final step. Insert VBoxGuestAdditions.iso and go to the folder (which will probably be in) and run the following command:
$ sh ./VBoxLinuxAdditions.run
This will successfully install Guest Additions on CentOS.
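The 32-bit vs. 64-bit check referenced above (its shell command is missing from the page; `uname -m` is the usual way) can also be done from Python, as a rough sketch:

```python
import platform

def is_64bit_machine():
    """True on a 64-bit machine (x86_64/amd64/aarch64), False on i386/i686."""
    return platform.machine().lower() in ("x86_64", "amd64", "aarch64", "arm64")

# Prints the machine type, e.g. x86_64 on a 64-bit box or i686 on 32-bit,
# which is the same string uname -m reports.
print(platform.machine())
```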
s3://commoncrawl/crawl-data/CC-MAIN-2024-10/segments/1707947474670.19/warc/CC-MAIN-20240227021813-20240227051813-00572.warc.gz
CC-MAIN-2024-10
3,302
30
http://lj.rossia.org/admin/console/reference.bml
code
Think of this like a DOS or bash prompt. The first word is a command. Every word after that is an argument to that command. Every command has a different number of required and optional parameters. White space delimits arguments. If you need a space in an argument, put double quotes around the whole thing. If you need double quotes and spaces in an argument, escape the quote with a backslash (\) first. If you need to do a backslash, escape that with a backslash. It's pretty straight-forward. If you're confused, ask. Arguments in <angle brackets> are required. Arguments in [brackets] are optional. If there is more than one optional argument, you can't skip one and provide one after it. Once you skip one, you have to skip the rest. Find accounts registered from given ip |allow_open_proxy <ip> <forever>| Marks an IP address as not being an open proxy for the next 24 hours. |ban_list [ "from" <user> ]| List banned users. |ban_set <user> [ "from" <community> ]| Ban another user from posting in your journal. In the future, banning a user will also prevent them from text messaging you, adding you as a friend, etc... Basically, banning somebody restricts their interaction with you severely. |ban_unset <user> [ "from" <community> ]| Remove a ban on a user. |change_community_admin <community> <new_owner>| Change the ownership of a community. |change_journal_status <account> <status>| Change the status of an account. |change_journal_type <journal> <type> [owner]| Change a journal's type. |community <community> <action> <user>| Add or remove a member from a community. |deletetalk <user> <itemid> <talkid>| Delete a comment. |expunge_anonymous_comments <username> <itemid> <talkid>| Expunge all anonymous comments for a given post. |expunge_user <username> <userid>| Expunge malicious user products. For accounts with a lot of comments you might need to run this command several times. |expunge_userpic <user> <picid>| Expunge a user picture icon from the site. 
|faqcat <command> <commandargs>| Tool for managing FAQ categories. Finds the cluster that the given user's journal is on. |finduser <criteria> <data>| Find a user by a criteria. |friend <command> [<username>] [<group>] [<fgcolor>] [<bgcolor>]| List your friends, add a friend, or remove a friend. Optionally, add friends to friend groups. |gencodes <username> <quantity>| Generate invite codes. |get_maintainer <community or user name>| Finds out the current maintainer(s) of a community or the communities that a user maintains. If you pass a community as the argument, the maintainer(s) will be listed. Otherwise, if you pass a user account, the account(s) they maintain will be listed. |get_moderator <community or user name>| Finds out the current moderator(s) of a community or the communities that a user moderates. If you pass a community as the argument, the moderator(s) will be listed. Otherwise, if you pass a user account, the account(s) they moderate will be listed. Get a user's email address. (for emailing them about TOS violations) Get help on admin console commands Retrieve the infohistory of a given user |ljr_fif add|delete|list_excluded [<username>]| |moodtheme_create <name> <des>| Create a new mood icon set. Return value from this command is the moodthemeid that you'll need to define pictures for this theme. List mood themes, or data about a mood theme |moodtheme_public <themeid> <setting>| Make a mood theme public or not. You have to be a moodthememanager to do this. |moodtheme_setpic <themeid> <moodid> <picurl> <width> <height>| Change data for a mood theme. If picurl, width, or height is empty or zero, the data is deleted. |net add [CIDR] name|delete <CIDR>|ban_new_accounts <CIDR>|ban_comments <CIDR>|list| ip blocks manipulation. This is a debugging function. Given an arbitrary number of meaningless arguments, it'll print each one back to you. If an argument begins with a bang (!) then it'll be printed to the error stream instead. 
|priv <action> <privs> <usernames>| Grant or revoke user privileges.
|reset_email <username> <value> <reason>| Resets the email address for a given account.
|reset_password <username> <reason>| Resets the password for a given account.
|set ["for" <community>] <propname> <value>| Set a userprop.
|set_underage <journal> <on/off> <note>| Change a journal's underage flag.
|shared <sharedjournal> <action> <user>| Add or remove access for a user to post in a shared journal.
|suspend <username or email address> <reason>| Suspend a user's account.
Deletes syndication. Totally.
|syn_editurl <username> <newurl>| Changes the syndication URL for a syndicated account.
|syn_merge <from_user> to <to_user> using <url>| Merge two syndicated accounts into one, keeping an optionally specified url for the final. Sets up redirection between from_user and to_user, swapping feed urls if there will be a conflict.
|tag_display [for <community>] <tag> <value>| Set tag visibility to S2.
|tag_permissions [for <community>] <add level> <control level>| Set permission levels for the tag system.
|twit_list [ <user> ]| List your twits (the users you don't see in ljr_fif).
If you twit somebody you won't see his/her entries in ljr-fif.
Remove twit on a user.
|unsuspend <username or email address> <reason>| Unsuspend a user's account.
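The quoting and escaping rules described at the top of this help text are easiest to see by example. The invocations below are hypothetical — the account name and reasons are made up — and use the suspend command documented above:

```
suspend exampleuser spamming
suspend exampleuser "spamming multiple communities"
suspend exampleuser "posted \"buy now\" links in comments"
```

The first reason is a single word and needs no quotes; the second contains spaces, so the whole reason is wrapped in double quotes; the third embeds double quotes inside a quoted argument by escaping each one with a backslash.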
s3://commoncrawl/crawl-data/CC-MAIN-2023-50/segments/1700679103810.88/warc/CC-MAIN-20231211080606-20231211110606-00181.warc.gz
CC-MAIN-2023-50
5,267
84
http://uucode.com/texts/xmlview/xmlview.html
code
XML View on Hierarchical Data Using SXML and Scheme
Saint-Petersburg State University, 7-9, Universitetskaya nab.

Hierarchical data can be viewed and processed as XML using the SXML format and the Scheme language. We introduce a symmetry constraint on this approach, reveal the weak points of the SXML representation, and discuss mapping between XML and SXML.

Applications like compilers and text processors, which work intensively with tree-like data, could benefit from integrating XML technologies. The data could be viewed as XML and processed using XPath, XSLT or XQuery. To make this practical, re-usable XML processing libraries are required. The code of such libraries should be adaptable to different programming languages and tree structures. Our approach is to embed an XML virtual machine into a host application. The virtual machine has the programming language Scheme as its native byte code. XML processing code is written in Scheme and uses the SXML format as a view of the application's data. Glue code between the host application and Scheme layers establishes a transparent mapping of data between the layers. It means that the same data is represented using native data structures in the host application and as SXML in Scheme. This paper summarizes the issues of this approach. Our experience is based on three projects: Python AST as XML, GNU find with XPath, and XSieve. The first two are research prototypes, and the latter is a language and a practical tool for XML data transformation. We hope our paper can be used as a checklist and food for thought for those who want to implement a similar system. Our XSieve implementation is built on top of the XSLT processor xsltproc and the Scheme interpreter Guile. The host application is xsltproc (more precisely, the libxml2 and libxslt libraries); its tree-like structure is XML itself. The paper is written with XSieve in mind, but the paper's points are applicable to any tree-like structures, not only to XML.
We start the paper with an example of equivalent XML and SXML representations. Then we add a constraint on our approach: the property of symmetry. Fortunately, most practical use cases for our approach satisfy the requirement. Then we describe problems of converting namespace and attribute nodes. The next section is a summary of issues with representing XML data as SXML. Parent pointers and the equality property add some complexity, and lazy instantiation is an essential problem. These issues could be avoided by using some other XML representation, but we want to re-use the big code base of existing SXML tools, and therefore we have to use SXML. The rest of the paper is about mapping between equivalent XML and SXML nodes. We list the needs for the map and raise the question of memory management. Finally, we discuss issues and possible optimizations when mapping data for use in XPath and XSLT.

2 XML and SXML

The following example demonstrates how the same data is represented in the XML and SXML formats.

XML:
<article id="hw"><para>Hello, <object>World</object>!</para></article>

SXML:
(article (@ (id "hw")) (para "Hello, " (object "World") "!"))

Depending on the mode (an application or Scheme), the data is automatically represented either as XML or as SXML. Unfortunately, this symmetry might be broken if both representations are instantiated and one of the copies is modified. In this work, we limit ourselves to the simple but important case when trees are read-only, and the property of symmetry is satisfied automatically. XPath, XQuery and XSLT implementations fit this case.

2.2 Issues of Converting

According to our experience, the hardest part of a converter is namespace processing. Some of the troubles are handling the scope of prefixes, supporting the default namespace, prefix rewriting, and so on. The main problem is that it is easy to create an SXML tree with free namespaces, when some namespace prefixes are not defined. There is no correct workaround.
In XSieve, we leave free namespaces as is, hoping that upon adding the tree to a bigger tree, the binding will occur and become correct. Returning attribute and namespace nodes is another source of issues. In SXML, it is possible to create these nodes without creating an owning element. Such independence might be impossible in the host application. Libxml is an example of a library with this issue. In XSieve, we create a fake owner element before converting an independent attribute or namespace node.

3 SXML Issues

3.1 Lazy Instantiation

Scheme code might need to process only the top-level nodes of an XML subtree, ignoring deep child nodes. In this case converting the whole XML subtree is an overhead. An alternative is to instantiate Scheme values on demand. The SXML format isn't compatible with lazy instantiation. To navigate over an XML tree, the core Scheme functions, such as car, cdr or map, are used. In most Scheme implementations, these functions don't support delayed values. For Guile, we created a patch which allows lazy lists (S-expressions). We redefined the macros SCM_CAR and SCM_CDR to check if the head or tail of a pair is a delayed value, and automatically evaluate it. Special care is taken to make sure that implicit instantiation doesn't happen during garbage collection. Other Scheme implementations might natively support lazy lists. At least, this possibility is mentioned in the Scheme R5RS standard.

3.2 Parent pointers

Parent pointers are an essential mismatch between XML and SXML. In XML, each node (except a document root) has one and only one parent, but SXML nodes don't have links to their parents. To implement the XPath parent and ancestor axes, we should be able to find the parent of an SXML node. Several approaches are proposed by Oleg Kiselyov, but each proposal has drawbacks. As we have control over the mapping of nodes between layers, we can use a better way. While converting XML to SXML, we can create a map between XML and SXML nodes.
When the parent of an SXML node is required, a Scheme function can find the corresponding XML node in the map, then its parent, and then return the corresponding SXML node. This method of getting the parent node works only if an SXML node was not constructed by Scheme code, but was converted from XML. That is not a problem, because we want to apply XML technologies to application data, not to Scheme data. Another issue appears when returning a subtree from the Scheme layer to the application layer. The application might need to insert the subtree into a tree. As the root subtree node can't have two parents, a copy of the subtree should be created and added to the tree.

3.3 Equality of Nodes

The same XML nodes in a host application should be the same SXML nodes in Scheme. This requirement is important, for example, for implementing XPath. Each XPath step should return only unique nodes, and it's convenient to use the Scheme operator "eq?" to filter out duplicates in a node set. Using "eq?" works well for element, root, comment and processing instruction nodes, but attribute and namespace nodes require more sophisticated equality. When XPath returns an attribute node, the SXML node looks something like this:

(@ (attr-name "attr-value"))

On the other side, consider an element node with attributes:

(elem-name (@ (attr-name "attr-value") (attr-name-2 "attr-value-2")))

After getting the attribute "attr-name" using the functions car and cdr, the SXML node looks like the following:

(attr-name "attr-value")

Obviously, "(@ (attr-name "attr-value"))" and "(attr-name "attr-value")" are not equal. To have the property of equality of attribute nodes, we demand that the common part of both expressions, "(attr-name "attr-value")", is the same Scheme value. Namespace nodes have the same problem, which is handled analogously. As a result, a node comparator isn't just a call to "eq?". It should correctly handle the different forms of attribute and namespace nodes.
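Such a comparator can be sketched outside Scheme too. The fragment below is illustrative Python, not code from the paper: nested lists stand in for S-expressions, and the `is` operator plays the role of eq?. Identity comparison succeeds only when applied to the shared attribute cell, not to the differing wrappers:

```python
# The attribute cell is created once during XML-to-SXML conversion and shared.
attr_cell = ['attr-name', 'attr-value']

# An element node carrying the attribute: (elem-name (@ (attr-name ...) ...)).
element_node = ['elem-name', ['@', attr_cell, ['attr-name-2', 'attr-value-2']]]

# What an XPath attribute step returns: (@ (attr-name "attr-value")).
xpath_attr_node = ['@', attr_cell]

# What car/cdr navigation over the element yields: (attr-name "attr-value").
via_car_cdr = element_node[1][1]

def same_attr_node(a, b):
    """Compare attribute nodes: strip an optional (@ ...) wrapper, then test
    identity of the underlying cell (the analogue of Scheme's eq?)."""
    cell_a = a[1] if a and a[0] == '@' else a
    cell_b = b[1] if b and b[0] == '@' else b
    return cell_a is cell_b

print(xpath_attr_node is via_car_cdr)                # False: wrappers differ
print(same_attr_node(xpath_attr_node, via_car_cdr))  # True: one shared cell
```

A freshly built, merely equal cell would compare as a different node, which is exactly the behavior the paper demands: equality of attribute nodes is identity of the shared cell, not structural equality.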
A natural way of supporting the equality property is to remember the results of converting XML nodes to SXML nodes. If an XML node is already converted, Scheme code gets the already existing SXML node.

4 Mapping Between XML and SXML

A need for mapping between XML and SXML nodes was already mentioned during the discussion of parent pointers and equality of nodes. Yet another argument is the need to switch from a Scheme result to XML nodes after running code in the Scheme layer. The mapping also optimizes conversion between XML and SXML nodes. Once a tree is converted, the result is remembered. Subsequent requests for conversion of these nodes return the stored result immediately instead of re-evaluating. Constructing a two-way map is a simple task. It is enough to add a mapping pair after converting each XML or SXML node. Memory management is the main problem with the mapping. Application-dependent methods should be used to make sure that the references in the map can't become invalid. In the case of XSieve, libxml2 memory management and Guile garbage collection were considered. In libxml2, it is possible to register a callback on deleting a node, and also delete the corresponding mapping pair. In Guile, to avoid deleting values, each root of the converted trees is marked as protected from garbage collection.

5 Issues of Mapping for XPath and XSLT

The XPath data model and the XSLT processing model have properties which add constraints and allow optimizations. In XPath, a text node never has an immediately following or preceding sibling that is a text node. The glue code should prevent immediate text siblings, or at least not fail on them. For example, if one adds a new text node to a tree, libxml checks if the last node is text, moves the text content from the new node to the last text node, and frees the new node. The node reference in the corresponding mapping pair becomes invalid, so the pair should be deleted too. In XSLT, input and output XML trees are independent.
If XML is put to the output, it can't appear as the input. It means that after converting data to XML, an application doesn't need the SXML representation. Therefore, there is no need to update the map during SXML-to-XML conversion. Most XSLT processing operates on a context node. If the XSLT traversal has passed a node, then, most likely, the node will never be used again, and several entries in the map are useless. To avoid excessive growth of the map, it can be implemented using a weak map. The garbage collector automatically deletes pairs from a weak map if the pairs reference unneeded values. In the worst case, one XML node can be converted to SXML several times, each time represented by a different Scheme value. This doesn't break the property of equality. Indeed, the lifetimes of these values are different, and no two such values exist simultaneously, so it doesn't make sense to say that they are equal or not equal.

6 Related Work

The main features of our approach are:

- XML view on non-XML data,
- a virtual machine for XML processing,
- using Scheme for XML processing,
- embedding Scheme,
- symmetry of data.

Each individual aspect isn't new, but the combination of them is unique. The first feature, an XML view on arbitrary data, can be found in the Java system JXPath from the Apache Software Foundation and in the .NET API XPathNavigator from Microsoft. These solutions are limited only to Java and .NET, respectively. Authors of XML libraries sometimes notice that the core of a library is a sort of a virtual machine, but they don't work on the idea. The only real virtual machine we are aware of is the XSLT Virtual Machine (XVM) from Oracle. Unfortunately, there is no information on whether XVM can be embedded and used for processing arbitrary tree-like data structures. The greater part of XML processing in Scheme is performed using the SXML tools. The home page of the tools has a collection of links to Scheme XML projects. We are not aware of any notes on the equality property of SXML nodes. Lazy XML processing in the tools means lazy code evaluation, not lazy data instantiation. Embedding Scheme is a well-known topic. Many implementations were designed for embedding, or at least support it. Specific details come with the documentation. For Guile, there is also a useful guide by William Morgan. Mapping data representations is a common task. It is done, for example, each time a high-level language is extended with C libraries. In most cases, bindings provide an API to navigate over data, and don't attempt to avoid the API in favor of compound native data structures with the property of symmetry.

References

- Reusing XML Processing Code in non-XML Applications.
- Kelsey, R., Clinger, W., Rees, J. (eds.): Revised5 Report on the Algorithmic Language Scheme. Higher-Order and Symbolic Computation, Vol. 11, No. 1, August, 1998.
- Python AST as XML.
- Find with XPath over file system.
- Free Software Foundation, Inc.: Guile (About Guile).
- S-exp-based XML parsing/query/conversion.
- On parent pointers in SXML trees.
- The Apache Software Foundation: JXPath - JXPath Home.
- XPathNavigator over Different Stores.
- The Oracle XSLT Virtual Machine. In Proc. XTech, 2005.
- Incorporating Guile into your C program.

File translated on 8 Jan 2006, 09:17.
s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00023.warc.gz
CC-MAIN-2021-49
12,994
86
https://docs.microsoft.com/en-us/azure/iot-pnp/quickstart-connect-pnp-device-java
code
Quickstart: Connect a sample IoT Plug and Play Preview device application to IoT Hub (Java)

This quickstart shows you how to build a sample IoT Plug and Play device application, connect it to your IoT hub, and use the Azure IoT explorer tool to view the information it sends to the hub. The sample application is written in Java, and is provided as part of the Azure IoT Samples for Java collection. A solution developer can use the Azure IoT explorer tool to understand the capabilities of an IoT Plug and Play device without the need to view any device code.

Use Azure Cloud Shell

Azure hosts Azure Cloud Shell, an interactive shell environment that you can use through your browser. You can use either Bash or PowerShell with Cloud Shell to work with Azure services. You can use the Cloud Shell preinstalled commands to run the code in this article without having to install anything on your local environment. To start Azure Cloud Shell:

- Select Try It in the upper-right corner of a code block. Selecting Try It doesn't automatically copy the code to Cloud Shell.
- Go to https://shell.azure.com, or select the Launch Cloud Shell button to open Cloud Shell in your browser.
- Select the Cloud Shell button on the menu bar at the upper right in the Azure portal.

To run the code in this article in Azure Cloud Shell:

1. Start Cloud Shell.
2. Select the Copy button on a code block to copy the code.
3. Paste the code into the Cloud Shell session by selecting Ctrl+Shift+V on Windows and Linux or by selecting Cmd+Shift+V on macOS.
4. Select Enter to run the code.

To complete this quickstart, you need Java SE 8 on your development machine. You also need to install Maven 3. For details on how to get set up with these, see Prepare your development environment in the Microsoft Azure IoT device SDK for Java.

Install the Azure IoT explorer

Download and install the latest release of Azure IoT explorer from the tool's repository page, by selecting the .msi file under "Assets" for the most recent update.
Prepare an IoT hub

You also need an Azure IoT hub in your Azure subscription to complete this quickstart. If you don't have an Azure subscription, create a free account before you begin. If you don't have an IoT hub, follow these instructions to create one. If you're using the Azure CLI locally, first sign in to your Azure subscription using az login. If you're running these commands in the Azure Cloud Shell, you're signed in automatically. If you're using the Azure CLI locally, the az version should be 2.0.73 or later; the Azure Cloud Shell uses the latest version. Use the az --version command to check the version installed on your machine.

Run the following command to add the Microsoft Azure IoT Extension for Azure CLI to your instance:

az extension add --name azure-iot

Run the following command to create the device identity in your IoT hub. Replace the YourIoTHubName and YourDeviceID placeholders with your own IoT Hub name and a device ID of your choice.

az iot hub device-identity create --hub-name <YourIoTHubName> --device-id <YourDeviceID>

Run the following command to get the device connection string for the device you just registered (note for use later):

az iot hub device-identity show-connection-string --hub-name <YourIoTHubName> --device-id <YourDeviceID> --output table

Run the following command to get the IoT hub connection string for your hub (note for use later):

az iot hub show-connection-string --hub-name <YourIoTHubName> --output table

Prepare the development environment

In this quickstart, you prepare a development environment you can use to clone and build the Azure IoT Samples for Java. Open a terminal window in the directory of your choice. Execute the following command to clone the Azure IoT Samples for Java GitHub repository into this location:

git clone https://github.com/Azure-Samples/azure-iot-samples-java

This operation may take several minutes to complete.
Build the code

You use the cloned sample code to build an application simulating a device that connects to an IoT hub. The application sends telemetry and properties and receives commands. In a local terminal window, go to the folder of your cloned repository and navigate to the /azure-iot-samples-java/digital-twin/Samples/device/JdkSample folder. Then run the following command to install the required libraries and build the simulated device application:

mvn clean install -DskipTests

Configure the device connection string:

Run the device sample

Run a sample application to simulate an IoT Plug and Play device that sends telemetry to your IoT hub. To run the sample application, use the following command:

java -jar environmental-sensor-sample\target\environmental-sensor-sample-with-deps.jar

You see messages saying that the device is connected, performing various setup steps, and waiting for service updates, followed by telemetry logs. This indicates that the device is now ready to receive commands and property updates, and has begun sending telemetry data to the hub. Keep the sample running as you complete the next steps.

Use the Azure IoT explorer to validate the code

Open Azure IoT explorer. You see the App configurations page. Enter your IoT Hub connection string and select Connect. After you connect, you see the Devices overview page. To ensure the tool can read the interface model definitions from your device, select Settings. In the Settings menu, "On the connected device" may already appear in the Plug and Play configurations; if it does not, select + Add module definition source and then "On the connected device" to add it. Back on the Devices overview page, find the device identity you created previously. With the device application still running in the command prompt, check that the device's Connection state in Azure IoT explorer is reporting as Connected (if not, hit Refresh until it is). Select the device to view more details.
Expand the interface with ID urn:java_sdk_sample:EnvironmentalSensor:1 to reveal the interface and IoT Plug and Play primitives—properties, commands, and telemetry.

- Select the Telemetry page and hit Start to view the telemetry data the device is sending.
- Select the Properties (non-writable) page to view the non-writable properties reported by the device.
- Select the Properties (writable) page to view the writable properties you can update. Expand the name property, update it with a new name, and select Update writable property. To see the new name show up in the Reported Property column, select the Refresh button on top of the page.
- Select the Commands page to view all the commands the device supports. Expand the blink command and set a new blink time interval. Select Send command to call the command on the device. Go to the simulated device command prompt and read through the printed confirmation messages, to verify that the commands have executed as expected.

Clean up resources

If you plan to continue with additional IoT Plug and Play articles, you can keep and reuse the resources you used in this quickstart. Otherwise, you can delete the resources you created in this quickstart to avoid additional charges. You can delete both the hub and registered device at once by deleting the entire resource group with the following command for Azure CLI. (Don't use this, however, if these resources are sharing a resource group with other resources you have for different purposes.)

az group delete --name <YourResourceGroupName>

To delete just the IoT hub, run the following command using Azure CLI:

az iot hub delete --name <YourIoTHubName>

To delete just the device identity you registered with your IoT hub, run the following command using Azure CLI:

az iot hub device-identity delete --hub-name <YourIoTHubName> --device-id <YourDeviceID>

You may also want to remove the cloned sample files from your development machine.
In this quickstart, you've learned how to connect an IoT Plug and Play device to an IoT hub. To learn more about how to build a solution that interacts with your IoT Plug and Play devices, see:
s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655890181.37/warc/CC-MAIN-20200706191400-20200706221400-00549.warc.gz
CC-MAIN-2020-29
8,005
71
http://www.webassist.com/forums/post.php?pid=64049
code
Cash On Delivery Option - eCart

I need to add an option for the buyer to pay cash on delivery rather than paying by PayPal. I also need to have the order info emailed for both options. I have set this up using Universal Email for the PayPal part but am stuck with the COD option. I have added a checkbox to the checkout page to indicate COD:

<input type="checkbox" name="cashondelivery" id="cashondelivery" value="1"/>

I have added a hidden input on the Checkout page:

<input type="hidden" name="cashondelivery" id="cashondelivery" value="<?php echo((isset($_POST["cashondelivery"]))?$_POST["cashondelivery"]:"") ?>"/>

What I need now is, when Checkout is clicked, for it to go to a confirm page and then send me an email with the order details. What code do I need to add to the checkout page, where do I need to add it, and which page do I put the Universal Email on?
s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826145.69/warc/CC-MAIN-20181214162826-20181214184826-00262.warc.gz
CC-MAIN-2018-51
867
9
https://lists.fedorahosted.org/archives/list/users@lists.fedoraproject.org/thread/Q6OEWQRO7HGOD7ZHYTEYR5XHB3332EJ6/
code
On Sat, 10 Sep 2016 17:47:07 +0000 (UTC) Beartooth <beartooth(a)comcast.net> wrote: OK. I didn't have that app, but dnf installed it. However, neither info gnome-tweak-tool nor "man:gnome-tweak-tool" in Konqueror gives anything about using it. I tried the command line (twice): [btth@localhost ~]$ gnome-tweak-tool & Why are you running it in the background? I think that you have to have the gnome environment installed when you invoke this. It is looking for gnome-shell, not finding it on your system, and thus failing. So, install the gnome group, then open a terminal in X, and start gnome-tweak-tool from that terminal. When I did that, a window came up that allowed adjustments. The package doesn't contain any other documentation, but the gui looked self-explanatory.
s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103984681.57/warc/CC-MAIN-20220702040603-20220702070603-00729.warc.gz
CC-MAIN-2022-27
775
15
http://digitalmediaphile.com/
code
Microsoft customers with Surface Pro (original) and Surface Pro 2 have reported that the hardware button that controls the volume level on their tablet stops working after installing the latest Wacom Feel-It driver. If you have a Surface Pro or Surface Pro 2 (not the Pro 3) and your volume button no longer works, and you've recently installed the 721.21 Wacom driver, this could be the cause of the problem. If you've upgraded over an older version of the Wacom driver, you can roll back the driver in device manager, reboot, and this should resolve the problem. If you didn't install a previous Wacom driver, head over to http://us.wacom.com/en/support/legacy-drivers/ and install the 720-10 driver. Select Tablet PC and download the 7.2.0-10 driver, restart, and you should be good to go.

Miracast adapters like the Microsoft Wireless Display Adapter and the Netgear PTV3000, etc. negotiate a connection with the source device. To do this, they broadcast a message that basically announces that they are available for a connection. To do this, the adapter will use one of the three non-overlapping 2.4GHz 802.11 channels (1, 6 and 11) which in essence are the lowest common denominator and would be the most broadly available and used channels. (For this reason, if you are on a device that allows 5GHz only connections and suppresses 2.4 GHz, you cannot connect). Therefore, 2.4GHz is a requirement to negotiate a connection using Miracast.

1. If 2.4GHz is the only frequency supported by your router, then issues might occur due to saturated channels from nearby routers in your environment. You might try changing the channel on your router to see if conditions improve. To see all the Wireless channels nearby, open a cmd prompt and type: netsh wlan show networks mode=BSSID [press Enter]

2. If you are connected to your router using a 5GHz channel, the Miracast frequency can be negotiated to use 5GHz (but remember, the negotiation initiates over 2.4 GHz).

3. If you are not connected to a WiFi network, the Miracast connection will always be negotiated on 2.4 GHz.

A Miracast session creates a virtual, second network on a direct, peer to peer basis between your host computer/device and the target Miracast display/Miracast enabled TV/Miracast adapter. You can see this in the Network and Sharing Center in Windows 8.1 after a connection is successfully made:

I'm not sure when this issue first started, but it is being reported with increasing frequency on Microsoft Communities. Note that while the following applies to the Surface Pro 3 running Windows 8.1, fully updated, it has been reported that the same issue occurs for those running the Windows 10 Technical Preview. A full HD connection via Miracast is expected but does not occur (to any Miracast display, not just the MS branded one) when Bluetooth peripherals are paired and connected. Only a 1366 x 768 connection is established. (This does NOT apply to the Surface Pro 3 Pen which has no impact on screen resolution.) Below is my TV, ready to connect at full HD 1920 x 1080:
I don’t know what the experience is like for folks using the Surface Pro 2 or Surface Pro original, but I’d be interested to know if there are similar Miracast + Bluetooth issues there as well. Please tweet your experience to me on Twitter @barbbowman On January 15, 2015, Microsoft released a package of drivers to Windows Update that includes an updated Marvell WiFi driver for the Surface Pro 3. If you are one of the folks that has been trying to resolve issues of connecting to 2.4GHz instead of 5 GHz on your dual band router, this new driver includes settings to fine tune your connectivity preferences. First, verify that you have Driver Version 15.68.3073.151: 1. Type the words device manager on the Start Screen/search and then open device manager. 2. Expand Network adapters 3. Right Click or tap and hold the Marvell AVASTAR Wireless-AC Network Controller and select Properties 4. Open the Driver tab and verify the version Specify the band: By default, the Band is set to Auto in the Value field. Access the dropdown list and select 5GHz if you want to connect to only the 5GHz band. Note that the 5GHz band is the one that provides the 802.11ac speeds. You can also specify 2.4GHz only. Important: If you change locations and have specified a setting other than Auto, you should change the setting back to Auto to insure connectivity “on the road”. This is especially important when using public WiFi which normally uses the 2.4GHz band.
s3://commoncrawl/crawl-data/CC-MAIN-2015-14/segments/1427131296587.89/warc/CC-MAIN-20150323172136-00103-ip-10-168-14-71.ec2.internal.warc.gz
CC-MAIN-2015-14
5,399
29
http://www.mto14368cd07.com/spurr-iconic-exclusive-maria-slipper-flats-nude-sp869sh95vzg-jqgkjam.html
code
The Maria Slipper Flats from Spurr are crafted from smooth faux leather set upon a rubber sole. Inspired by men's classic smoking slippers, the pair are effortless to wear and will add a chic touch to everyday outfits. - Faux leather upper; nude shade - Slip-on design - Rubber sole SPURR ICONIC EXCLUSIVE Maria Slipper Flats Nude SP869SH95VZG JQGKJAM
s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627998913.66/warc/CC-MAIN-20190619043625-20190619065625-00542.warc.gz
CC-MAIN-2019-26
351
5
https://community.teamviewer.com/t5/TeamViewer-14/Cannot-use-QuickSupport-Links/td-p/67786
code
I have installed the latest version 14 TeamViewer - even checked for updates when started. Version 14.4.2669. I created a new QuickSupport link via the website and sent it to another workstation nearby. It loads and installs 14.4.2669 SQC and gets a session code. But when I try to remote it, my system tells me that the remote system is running an older version of TeamViewer and provides instructions (that are not applicable) on how to update it. How do I get this going? Every bit of help tells me this should already be working.

Thank you for your message. Could you tell us the exact OS versions you are using on both sides of the connection? Many thanks in advance.

I'm experiencing the same issue with QuickSupport which is upgraded to v14. Both Linux and Windows as host (both running version 14.4.2669) are not working with a Windows QS client. I've reported this in case #3227576 and on Twitter 3 weeks ago. Updated my Twitter link. Great BTW, I cannot directly link to tweets because there is a stupid regex replacing 19 numbers with (even if I add a ? behind my url): **Please do no post TeamViewer IDs** Screenshots of the message dialog:

Ok, I fixed my problem by getting my license activated by downgrading to v13 and upgrading to v14 again, getting a new TeamViewer ID that is below 10 chars (I activated in 14.0.14470, later upgraded to the latest 14.4.2669 to be correct), because my activation of a 10-char ID was not working. I tried to work it out with support, but they blamed Linux, but apparently it was not that (like I said, and they were investigating, but "there is no update yet from development" was the only response every now and then).
If you're not using an activated TeamViewer, they will just throw "version out of date" as a nice new error (https://community.teamviewer.com/t5/Previous-versions-EN/Version-out-of-date-Update-the-remote-Teamv...), which is especially nice if you are actually a paying business user (since v9 at least, upgraded every year, and later switched to paying per month); support is still at "there is no update yet from development". I'm so disappointed in TeamViewer: the non-existent support (no longer in my language, Dutch, though they do advertise the phone number, just to redirect you after some extra minutes of waiting), advising the wrong things, not having Linux on the roadmap, changing to plain wrong error messages that will have you looking in the wrong place. I think this problem cost me at least 2 days. I'm done with TeamViewer; I will be looking for some FOSS alternative. Please contact me on Twitter @bwbroersma if you have any advice.

Ok, so that solution is only temporary: apparently after a reboot it will give you a license update popup with:

OldLicense: PaidLicense
NewLicense:

After which the only solution is downgrading and then upgrading again, to get a 9-char ID back for at least the duration of the boot:

sudo apt install teamviewer=14.0.14470

Start TeamViewer, update back to the latest version, and restart the teamviewer daemon (otherwise it will hang):

sudo apt install teamviewer
sudo systemctl restart teamviewerd.service

I hope TeamViewer will fix this issue SOON. But last time I was already looking for an alternative to QuickSupport, and found out about the built-in Microsoft Windows way to get Remote Assistance, which has command-line options to generate an invite file with a chosen password; it can easily be scripted to use a random password, submit the file to a server, and use a Linux free RDP solution.
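The auto-redaction the second poster complains about presumably behaves something like this hypothetical sketch (the exact pattern and the forum's real filter are not public, so the digit threshold here is an assumption):

```python
import re

# Hypothetical sketch of the forum filter described above: any run of
# nine or more digits is treated as a TeamViewer ID and blanked out,
# which is why long numbers inside tweet URLs get mangled too.
ID_PATTERN = re.compile(r"\d{9,}")

def redact_ids(text: str) -> str:
    # Replace every long digit run with the placeholder the poster quotes.
    return ID_PATTERN.sub("**Please do no post TeamViewer IDs**", text)
```

A filter like this would explain why appending a `?` to the URL does not help: the digit run inside the tweet URL still matches regardless of what follows it.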
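The Windows Remote Assistance approach the last post hints at could be scripted roughly like this (a sketch: `msra.exe /saveasfile <path> <password>` is the documented invocation, but the output path and the idea of generating the password in Python are assumptions, not something from the post):

```python
import secrets

def build_invite_command(path=r"C:\temp\invite.msrcIncident"):
    """Build an msra.exe command line that writes a Remote Assistance
    invitation file protected by a freshly generated random password.

    On Windows you would pass the returned command to subprocess.run();
    here we only construct it so the sketch stays portable.
    """
    password = secrets.token_urlsafe(9)  # random password, as the post suggests
    return ["msra.exe", "/saveasfile", path, password], password
```

The generated invitation file could then be uploaded to a server and opened from Linux with a free RDP client, along the lines the poster outlines.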
s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986700560.62/warc/CC-MAIN-20191020001515-20191020025015-00157.warc.gz
CC-MAIN-2019-43
3,534
24
http://ubuntuforums.org/showthread.php?t=838846
code
Music playing issues.

I can't play more than one song at once. Like, say I want to watch a YouTube video and listen to a song: the one that I open second will freeze, or just won't play. I tried asking in IRC for help. One guy said to install terminatorX. I did that, but then he never told me WHAT exactly to do with it, and I still got the error. I dunno. Another guy said to set everything to ALSA. I did; I still get the error. Thanks to the people who helped me in there, IndyGunFreak and fabz0r, and to anyone who helps me here.

Last edited by Zalibidas; June 24th, 2008 at 02:51 AM.

Re: Music playing issues.

I have no lockups, but I do face a problem of silence after a certain sequence of events. If I view a Flash object that has any sound, and later open an application that makes any sound, Flash 'took away' the sound from everything (so I reboot to get sound back). However, if I listen to a sound file and leave the player open (even stopped, or with an empty playlist, just as long as the application is never terminated) while listening to something in Flash, I have no problem. I just make sure to open a music file first thing when I use the computer, and never close it. If I ever close that audio application and listen to audio in Flash, then nothing can make sound again.

So this is what I suggest you try: reboot, and open some audio file. You can stop or pause it, so long as you do not quit the application. Now open some Flash, and then go back to that audio player that you left running, and I hope it continues making sound. That's all I can suggest.

Forget Google Linux; with no URL submission, they are missing a lot of pages! Instead, use nixCraft's Google Custom Search Engine: Linux Search Engine!
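Symptoms like these usually mean one process has grabbed exclusive hold of the sound device, which was common with the ALSA/Flash setups of that era. A hedged diagnostic sketch, assuming the `fuser` tool from psmisc is installed (as it is on stock Ubuntu):

```python
import glob
import subprocess

def who_holds_sound_devices():
    """Return fuser's report of which processes hold /dev/snd/* nodes.

    fuser -v prints its table to stderr; an empty string means either
    no device nodes exist, nothing is holding them, or fuser itself is
    not installed.
    """
    devices = glob.glob("/dev/snd/*")
    if not devices:
        return ""
    try:
        result = subprocess.run(["fuser", "-v", *devices],
                                capture_output=True, text=True)
    except FileNotFoundError:  # fuser not installed
        return ""
    return result.stderr
```

Closing (or killing) whichever process the report names should free the device for the second application, instead of rebooting as the posters resort to.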
s3://commoncrawl/crawl-data/CC-MAIN-2016-26/segments/1466783396538.42/warc/CC-MAIN-20160624154956-00127-ip-10-164-35-72.ec2.internal.warc.gz
CC-MAIN-2016-26
1,726
9