/*
 * IntPTI: integer error fixing by proper-type inference
 * Copyright (c) 2017.
 *
 * Open-source component:
 *
 * CPAchecker
 * Copyright (C) 2007-2014 Dirk Beyer
 *
 * Guava: Google Core Libraries for Java
 * Copyright (C) 2010-2006 Google
 */
package org.sosy_lab.cpachecker.core.waitlist;

import org.sosy_lab.cpachecker.core.interfaces.AbstractState;
import org.sosy_lab.cpachecker.cpa.loopstack.LoopstackState;
import org.sosy_lab.cpachecker.util.AbstractStates;

/**
 * Waitlist implementation that sorts the abstract states by the depth of
 * their loopstack. States with a larger/smaller (depending on the used
 * factory method) loopstack are considered first.
 */
public class LoopstackSortedWaitlist extends AbstractSortedWaitlist<Integer> {

  private final int multiplier;

  private LoopstackSortedWaitlist(WaitlistFactory pSecondaryStrategy, int pMultiplier) {
    super(pSecondaryStrategy);
    multiplier = pMultiplier;
  }

  @Override
  protected Integer getSortKey(AbstractState pState) {
    LoopstackState loopstackState =
        AbstractStates.extractStateByType(pState, LoopstackState.class);
    return (loopstackState != null) ? (multiplier * loopstackState.getDepth()) : 0;
  }

  public static WaitlistFactory factory(final WaitlistFactory pSecondaryStrategy) {
    return new WaitlistFactory() {
      @Override
      public Waitlist createWaitlistInstance() {
        return new LoopstackSortedWaitlist(pSecondaryStrategy, 1);
      }
    };
  }

  public static WaitlistFactory reversedFactory(final WaitlistFactory pSecondaryStrategy) {
    return new WaitlistFactory() {
      @Override
      public Waitlist createWaitlistInstance() {
        return new LoopstackSortedWaitlist(pSecondaryStrategy, -1);
      }
    };
  }
}
STACK_EDU
Timeline: Introduction and Creating Simple Animations

In this article, we will cover getting started with Timeline and create a simple animation. According to Unity's documentation, you "use the Timeline to create cut-scenes, cinematics, and game-play sequences by visually arranging tracks and clips linked to GameObjects in your Scene."

We will need to add the Timeline window to the editor. With the Timeline window open, select the game object in the hierarchy you want to animate and create a Director component and Timeline asset. Now we have a Timeline for the game object we selected.

Playable Director Component

Looking at the game object, there is now a Playable Director component added to it. The Playable Director is the component that plays the Timeline animations. It has a few settings, starting with Update Method:

DSP — plays the Timeline according to the speed of an audio track.
Game Time — plays the Timeline according to the game's speed. If the game is running at half speed for a slow-motion effect, so will the animation.
Unscaled Game Time — plays the Timeline at a speed that is not affected by the game time, so if the game is playing in slow motion the animation will still play at normal speed.

Play On Awake controls whether the Timeline will play as soon as the game object is activated. Wrap Mode will either have the Timeline loop or hold on the last frame. Lastly, Initial Time is the amount of time that will pass before the Timeline plays.

Creating A Simple Timeline Animation

We are going to add an Animation Track to the Timeline of the coin game object. You will need to make sure to create an Animator for the game object so an Animation can be created and added to it. Click on the Record button to enter Record Mode and you will see the Timeline turn red. While in Record Mode, left-click on the game object and add a key frame.
We are going to add one more key frame by moving the time slider to the desired end time and rotating the game object 360°. When the Timeline is played back, the coin rotates 360° from the first key frame to the last key frame in 2 seconds.

In the Animation window, you can adjust the timing of each individual key frame or all of them. You can also adjust the speed of the animation by right-clicking on the Timeline and converting the animation into a Clip Track. This allows you to use the Speed Multiplier to either slow down or speed up the playback of the timeline. You can adjust the Ease In and Ease Out by holding Control (PC) or Command (Mac) and dragging the clip ends.

You can record multiple animations on the timeline and have them blended, moved on the timeline, and deleted. If you want to stack animation movements, you can do that in the Animation window: simply enter Record Mode and start adding additional movements. Above you can see how the Coin object is rotating and moving up and down at the same time.
OPCFW_CODE
May 10 2022 by Derek Chong

Microsoft Teams is a popular collaboration app for businesses; more than 145 million monthly active users and 500,000 organizations rely on Teams. Previously, however, enabling interaction with PDFs in Teams was not easy and often required a separate software service integration. In this guide, we show an easy way to boost productivity and keep users engaged by enabling a full-fledged document experience right in Microsoft Teams, no additional SaaS licenses required.

We'll show you how to integrate PDFTron's WebViewer sample directly with MS Teams as a Teams App in a few steps. WebViewer allows you to open PDFs, annotate, fill, sign, edit, and so much more in Teams. Check out the full

Note: Since we're an SDK, you can later

A JSON manifest file is generated when apps are made for Microsoft Teams. The manifest file contains the information the user needs to use the Teams app, such as the web pages to navigate to for the Teams Tabs. Tabs in Teams are an <iframe>, so adding both new and existing web apps is possible.

The following needs to be installed for this guide:

First, we need to set up WebViewer. For this example, we can explore PDFTron's

The WebViewer sample needs to be publicly available over HTTPS endpoints for Microsoft Teams. This can be done quickly with ngrok; Microsoft explains in more detail why ngrok should be used for the setup:
→ Microsoft instructions to

To install ngrok, follow the steps below found on

ngrok authtoken <token>
ngrok http 4200 --host-header=localhost:4200

Next, we start a new project for Microsoft Teams and connect our tunnel. You should have something like this at the end!

Note: If there is an issue with Sideloading not being enabled, you must sign in to the

How to navigate the Admin Center (steps 1-4 as above)

And that's it! We hope this guide makes integrating a complete PDF and document experience with Microsoft Teams even easier.
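The JSON manifest mentioned above can be sketched as follows. This is a minimal illustration only: the schema version, the app ID, the package name, and all URLs are placeholder assumptions, and a real manifest also requires icons and other fields per Microsoft's current schema.

```json
{
  "$schema": "https://developer.microsoft.com/en-us/json-schemas/teams/v1.11/MicrosoftTeams.schema.json",
  "manifestVersion": "1.11",
  "version": "1.0.0",
  "id": "00000000-0000-0000-0000-000000000000",
  "packageName": "com.example.webviewer",
  "developer": {
    "name": "Example Developer",
    "websiteUrl": "https://example.com",
    "privacyUrl": "https://example.com/privacy",
    "termsOfUseUrl": "https://example.com/terms"
  },
  "name": { "short": "WebViewer" },
  "description": {
    "short": "View and annotate PDFs in Teams",
    "full": "Opens the WebViewer sample in a Teams tab for viewing, annotating, filling, and signing PDFs."
  },
  "staticTabs": [
    {
      "entityId": "webviewerTab",
      "name": "Documents",
      "contentUrl": "https://YOUR-TUNNEL.ngrok.io/",
      "scopes": ["personal"]
    }
  ],
  "validDomains": ["YOUR-TUNNEL.ngrok.io"]
}
```

The `staticTabs` entry is what makes the WebViewer page appear as a tab; its `contentUrl` would point at the public HTTPS endpoint the ngrok tunnel exposes.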
If you’re interested in trying other features with PDFTron WebViewer, check out our As always, if you have any questions, or run into any issues, don’t hesitate to reach out This blog post describes how to open XLSX documents in a Vue web app and much more with PDFTron WebViewer. This article describes how to open XLSX documents in an Angular web app and so much more with PDFTron WebViewer. A description of how to open PPTX documents in a Vue web app and so much more with PDFTron WebViewer.
OPCFW_CODE
Please provide an instance of Ublaboo\DataGrid\DataSource\IDataSource or an array

This if statement breaks old code where I was used to $this->setDataSource($netteDatabaseSelection). https://github.com/contributte/datagrid/blob/a328e35b96885517a616b2d86bdb98378831dae1/src/DataGrid.php#L504 Using the 6.0 version :D

Aaaa, there is no 6.0 version yet. In 6.0, there will be BC breaks. But this one will not break backward compatibility - see https://github.com/contributte/datagrid/blob/v6.x/src/DataGrid.php#L502

Oh, okay. Anyway, I tried composer require ublaboo/datagrid:~5.8 to make it work again with Nette v3, but it fails on:

Your requirements could not be resolved to an installable set of packages.
Problem 1
- Can only install one of: nette/application[v2.4.x-dev, 3.0.x-dev].
- Can only install one of: nette/application[v2.4.x-dev, 3.0.x-dev].
- Can only install one of: nette/application[v2.4.x-dev, 3.0.x-dev].
- ublaboo/datagrid v5.x-dev requires nette/application ^2.4.0 -> satisfiable by nette/application[v2.4.x-dev].
- Installation request for ublaboo/datagrid ~5.8 -> satisfiable by ublaboo/datagrid[v5.x-dev].
- Installation request for nette/application (locked at 3.0.x-dev, required as ~3.0) -> satisfiable by nette/application[3.0.x-dev].
Installation failed, reverting ./composer.json to its original content.

How am I supposed to install Ublaboo on Nette 3 right now?

It is not possible yet. We were waiting for other packages to migrate to Nette 3.0. We are currently working on the 6.0 version. It will be here soon. :) But if the datagrid works for you in branch v6.x, that is great news! I did not test it yet - I am only refactoring and fixing phpstan bugs.

It doesn't :)

What does that mean? Do you have a more detailed answer?

Hmm, I wanted to post an answer with lines pointing to GitHub files, but got confused instead. If my composer is pointing to the ^6.0 version and in composer.lock I see it's v6.x-dev, which branch am I currently using then?
It shows me that I have an error on this line: 1022: return $this->actions[$key] = new Action($this, $href, $name, $params); but when I look this line up in this repo, I can't seem to find it at all.

In the 6.x branch it is on line https://github.com/contributte/datagrid/blob/v6.x/src/DataGrid.php#L911

Oh, I had some terrible mix of versions installed or something, I guess. I deleted the package entirely, reinstalled, deleted the if statement in DataGrid.php, and now I am getting deprecation errors like Nette\Utils\Callback::invokeArgs() is deprecated, use native invoking, right here: https://github.com/contributte/datagrid/blob/82c8df4d474bb0b746629abc24f547e769d18839/src/DataGrid.php#L395 Guess I just have to wait or try to fix it on my own for now. Thanks for the help :-)

:D Aaaa, there is no 6.0 version yet. In 6.0, there will be BC breaks. But this one will not break backward compatibility - see https://github.com/contributte/datagrid/blob/v6.x/src/DataGrid.php#L502

By the way, you said this one won't be a BC break. How come it's not a BC break? I didn't get the hint from the link you posted. What am I missing? Passing a simple Nette\Database\Table\Selection gives the InvalidArgumentException, which wasn't happening a few versions earlier.

I will fix the invokeArgs bug today. You had some old commit. The current HEAD of the v6.x branch should not show you the files you were pointing to.
GITHUB_ARCHIVE
David Timms wrote:
Brian D. Carlstrom wrote:
> John Summerfied writes:
> > Rahul Sundaram wrote:
> > > You are the first for this release.
> > Third today by my count, but I've not been watching closely.
> I used to be selective about package installation, but on a recent
> machine I just selected everything. The main reason is because I run a

OK, I was going to file an enhancement request regarding this, but since there are lots of opinions here, what do people think of the following as a solution to the various views expressed:

1. Everything option (stays) removed.
2. For the list at left (the general groups), a right-click popup is provided, with three options:
   a. unselect all [?ctrl-shift-a (gimp), not sure what other apps use] - all groups would be deselected, leaving only the "base operating system" (for firewall / special builders)
   b. select all - [ctrl-a]
   c. select all default optional components - all default parts of each of the 6? groups would be selected.
3. For each package sub-group (RHS) with options, again a right-click popup is provided with the same three options:
   a. as above
   b. as above
   c. default selections (so if you make a mistake you can easily go back, without restarting the installer) - probably also good for the left list as well.

Coding could be pretty simple (for items in the chosen list, check = on/off; not sure about the default selects - ignore for v1?)... (same in the text-mode installer!) Anyway, a good potential solution?
DaveT.

Umm, reply to self since I got no starters ;) Given the long-winded and poorly named discussion in "FC5T2 ready for even a test release?", I thought it better to start a thread with an actual subject indicating the matter at hand.

Rahul: would a method to achieve lots of people's goals like the one above not kill multiple birds with the one stone?

1. The UI stays neat, no special checkboxes.
2.
Normal users probably won't find the option (hidden in a right click), reducing its actual use to the special cases where people need it, and those people will learn how to do it.
3. Makes it much quicker for the user to install none of the options within a category.
4. Makes it much quicker for the user to install all options within a category (even if only to have a minimum number of packages to unselect to get what you really wanted).
5. Makes it easy to select the installer's defaults again.

What say you all? My point of view was getting a minimum-ish install on a low-space/spec machine. I spent probably close to ten minutes going to each category and then unselecting each optional item that I didn't want installed. The programming to achieve the same would probably have taken the same amount of time, and it would be sown once and reaped forever, for minimal/maximal users alike.
OPCFW_CODE
Chris Klosowski is a Software Developer at GoDaddy, where his primary role is working on large-scale WordPress & BuddyPress installations. In his free time he is a WordPress plugin developer with numerous plugins on WordPress.org and GitHub. Chris is also the developer of WP-Push, which integrates WordPress and plugins with the Pushover mobile app. Chris currently resides in San Tan Valley, Arizona with his wife and son, and hails from Wyoming, MI. He'll be presenting Honey, I Shrunk the Logs.

Why do you use WordPress?
I use WordPress because of the myriad of plugins and themes available, and the willingness of the community to help out and solve problems or drum up new ideas.

When and how did you start using WordPress?
I started working with WordPress in 2005, when all I wanted to do was make a slight change to the Kubrick theme when new content was published. That led to me writing my first plugin.

What tips or resources would you recommend to a new WordPress user?
My number one tip for new users is: get involved and don't be a stranger. The forums on WordPress.org are full of people in the community who want to help. While WordPress may be open source, that doesn't mean it's without amazing (community-provided) support.

What advice would you give someone who's building a business around WordPress design or development?
Be social and give back (code or support). Just because your primary product is commercial doesn't mean they all have to be. Releasing some of the smaller projects or contributing to the projects of others is a great way to remain part of the community while still being a business.

How do you stay informed about WordPress (news, tips, etc.)?
I have a pretty solid set of Twitter users that I follow for news, including Pippin Williamson, Chris Lema, Brian Krogsgard, and most of the core dev team. As far as blogs, I tend to watch PostStat.us and TorqueMag.io for news. I also subscribe to the weekly email from WPMail.me.
When I need developer tips, I look towards PippinsPlugins.com and TomMcfarlin.com.

What's a cool WordPress-based site you've seen recently?

What do you like most about WordCamps?
Getting to see all the people that I interact with on Twitter, the WordPress.org forums, and IRC. Brainstorming in person is way better than online.

Where can we find you online?
My personal site is KungFuGrep.com. I typically write about software development (both WordPress and in general). I am also the developer of WP-Push.com, a plugin and extensions that allow WordPress to interact with the Pushover mobile app for iOS & Android. On Twitter I am @cklosowski.
OPCFW_CODE
I'm having trouble with GitHub. I've been playing around with a remote Git repository. When I now try to make any changes to the remote repository, e.g.

git remote show origin
git push -u origin master

I get this error:

Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.

Ulrichs-MacBook-Pro:coredatatest ulrichheinelt$ git push -u origin master
Permission denied (publickey).
fatal: Could not read from remote repository.

I would be happy if I could start again with a new (empty) remote repository. Or is there a way to fix this error? These are my first steps with GitHub, started yesterday… Many thanks in advance!

my settings at https://github.com/UlliH/CoreDataTest/settings

Happy too early 🙁 After setting the SSH and GPG keys, the errors are still the same. :-/ I think that's right so far, but still the same…

- On your GitHub profile there is a settings menu. It is located in the top-right corner of the webpage.
- Press it and you will see a menu on the left.
- Inside that menu, find the SSH and GPG keys option and press it.
- You will see an option New SSH key to add a new key.
- Generate an SSH key using ssh-keygen -t rsa -b 4096 -C "your email".
- Copy the output of cat ~/.ssh/id_rsa.pub to your clipboard.
- Paste the copied output into the form at https://github.com/settings/ssh/new

Generate your key
Visualize your keys
Start the agent
Add your key to the agent

I was having the same problem with my SSH connection. I tried to work it through SSH, but couldn't find a working solution for it. So, in that case, I changed my remote URL from SSH to HTTPS. I used the command:

$ git remote set-url origin https://github.com/USERNAME/REPOSITORY.git

You can see your remote URL has changed using:

$ git remote -v

You can find more detail here. This will change your remote URL to HTTPS, so you will now have to type your GitHub username and password to push your project to the remote repo.
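The steps listed above (generate your key, start the agent, add your key to the agent) can be sketched in shell form as follows. The email address, the `id_rsa_demo` file name, and the empty passphrase are illustrative assumptions; in practice you would accept the default ~/.ssh/id_rsa path and choose a passphrase.

```shell
# Create the .ssh directory if it does not exist yet
mkdir -p "$HOME/.ssh"

# Generate a new RSA key pair; -f sets the output path and -N "" an empty
# passphrase so the command runs non-interactively
ssh-keygen -t rsa -b 4096 -C "you@example.com" -f "$HOME/.ssh/id_rsa_demo" -N ""

# Start the ssh-agent and add the new private key to it
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_rsa_demo"

# Print the public key so it can be pasted at https://github.com/settings/ssh/new
cat "$HOME/.ssh/id_rsa_demo.pub"
```

After pasting the printed public key into GitHub, `ssh -T git@github.com` can be used to confirm that the connection authenticates.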
I know SSH is easier than HTTPS, meaning that you don't have to type out your username and password, but this might be helpful if you didn't find any solution for fixing it through SSH and you are in a rush to push your code to your repo.

For me, I had to set which host uses which SSH key. In your local machine's SSH folder, usually under ~/.ssh, create/edit the file called config using your preferred editor like vim or gedit and add the following with your git Host, HostName, and ssh IdentityFile (your SSH private key file path):

Host gitlab.example.com
  HostName gitlab.example.com
  IdentityFile /home/YOURUSERNAME/.ssh/id_rsa

Make sure you have named the "public key" and "private key" files properly; precisely "id_rsa" and "id_rsa.pub". This is something that you can find in your users/.ssh folder. Add the public key on GitHub, restart your terminal (bash supported) and try to clone again. If you have write access to the repo, you should be good to go after these changes. Talking from experience (after spending an hour), I could not find any info on any forum that stated that we have to explicitly keep the names of the private and public key files as mentioned above.

If any of you are facing the same kind of issue on Bitbucket, then here is the solution:

[email protected] MINGW64 /u/works (master)
$ git clone ssh://[email protected]:5449/rem/jenkinspipeline.git
Cloning into 'jenkinspipeline'…
[email protected]: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights and the repository exists.
[email protected] MINGW64 /u/works (master)
$ cat < ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC99aqMXtBpVvCQb6mezTHsftC3CFY9VOpGaNmckmcTzXoOOWOheeM9V2NTrOWxpbE3UNdL/6ZnmPyv5EI2zPMPstVIP5jAqcmxOvGc2zxy4wHeGTgrP6UaXs2nLScV4E7+rmdaVtSsfOa1i+eU2eg4UnIJpRLtGD6N+hMKJXaXWpGfQ79USiPhPQKDGOz3PeEDggyvHs7HUzaKZpwEeIKYOSDXsvDwTJ6s5uQ30YfX3eoQbAis8TJeQejAmkuu62oSOs5zFJMSTAzakiyXW/xCUsLrnUSzxmBKO2BIA/tSTrqW/Gj0VhDniDFGwGz0K1NfLzfEJLWKvdB2EJWVFjEd [email protected]

Goto: https://bitbucket.internal.abc.com/plugins/servlet/ssh/projects/REM/repos/jenkinspipeline/keys
1) Add keys. Copy/paste the id_rsa.pub key value there:

[email protected] MINGW64 /u/works (master)
$ git clone ssh://[email protected]:5449/rem/jenkinspipeline.git
Cloning into 'jenkinspipeline'…
remote: Enumerating objects: 1146, done.
remote: Counting objects: 100% (1146/1146), done.
remote: Compressing objects: 100% (987/987), done.
remote: Total 1146 (delta 465), reused 0 (delta 0)
Receiving objects: 100% (1146/1146), 149.53 KiB | 172.00 KiB/s, done.
Resolving deltas: 100% (465/465), done.

I got it after wasting a lot of time…

In the accepted answer of Shravan40 everything was OK, but I foolishly created a new repository on github.com with a new README.md, and this caused the error:

ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights

After a lot of tries, I added a new repository without a new README.md and everything was OK, but I don't know the reason. 🙁 Till yesterday, when on a new try I finally noticed it… So my solution, in addition to Shravan40's answer, is to add the repository without a README.md; maybe it will help someone…

In my short experience using Git with Linux, I found there were two simple answers to this error.

Run these commands in this order:

git remote set-url --add origin <https://github.com/username/repo>
git remote set-url --delete origin <[email protected]:username/repo>

This will reconfigure your config file to use an HTTPS origin instead of SSH.
Now try running push or pull commands.

Reboot your Linux VM (if you're using one) and/or host machine. Rebooting has resolved the issue for me more than once.
OPCFW_CODE
[CASSANDRA-1002] Fat client example cannot find schema (2010-04-26, 23:04)
Running the client example in contrib shows that it cannot find the schema, possibly caused by CASSANDRA-44. Throws this error: Exception in thread "main" java.lang.IllegalArgumentException: U...

[CHUKWA-449] Create utility to generate a sequence file from a log file (2010-04-26, 19:42)
See this thread: http://www.mail-archive.com/chukwa-user%40hadoop.apache.org/msg00084.html We should have a utility class that can generate a Chukwa sequence file from a raw log file...

[CHUKWA-472] TsProcessor: make date format configurable (2010-04-26, 19:22)
The TsProcessor's default date format and its date format for a given data type should both be configurable. To set the time format for a given data type: <property> <name>TsPro...

[CASSANDRA-859] update cassandra-cli for mandatory login() (2010-04-26, 19:20)
With the completion of CASSANDRA-714, the cli will need to be updated accordingly. Either the --keyspace command argument (or something equivalent entered interactively after startup), will ...

[CHUKWA-477] Support post-demux triggers (2010-04-26, 16:28)
Add the ability to trigger an action upon successful completion of a Demux process...

[CHUKWA-478] TestSocketTee fails intermittently (2010-04-26, 15:48)
TestSocketTee sometimes fails, as the socket returns no data on the first call to read...

[CASSANDRA-946] Add a configuration and implementation to populate the data into memory (2010-04-26, 15:30)
Proactively load data into the memory when the node is started; there will be a configuration to enable this function and it will be per ColumnFamily. The requirement is to speed up the reads f...

All projects made searchable here are trademarks of the Apache Software Foundation.
OPCFW_CODE
Yes It Is (YII) - Fast, Secure and Professional PHP Framework

Yes It Is (YII) is an open source PHP framework for web application development whose highly optimized performance makes it ideal for all kinds of web projects. The YII PHP Framework performs impressively when compared to other PHP frameworks, and this is the reason YII is getting huge positive attention from web developers. The adoption and popularity of the YII Framework is increasing at a very rapid pace because of its remarkably rich features. This framework is considered to be the best PHP framework for developing Web 2.0 applications, and it is apt for social media websites, online portals, forums and so on.

YII is an abbreviation of "Yes It Is" and is usually pronounced as "Yee". Wondering why it is abbreviated as "Yes It Is"? Simply because it is the answer to questions such as: Is it professional? Is it fast? Is it secure? The answer to all the above questions is "Yes It Is" → the YII Framework.

Overview of the Rich Features of YII

Regardless of whether you are a team of distributed web developers building a complex web application or a single web developer building a simple website, the YII PHP Framework will augment your team with efficient and professional resources whilst giving you added experience, and all this for free. With YII you can keep your concentration on the tasks that are specific to your business needs, and YII will provide the implementation strategy for all the business requirements through its rich feature set, as described below:

MVC Coding Pattern
YII provides a clear separation between the business logic and the presentation via its proven MVC architecture and coding pattern.

Form Validation and Input
YII has a collection of validators and various helper methods which ease the task of form input & validation. It is very easy and safe to collect form input with YII.
Authorization and Authentication
YII provides authorization through hierarchical RBAC (role-based access control) with in-built authentication support.

Support for Web Services
YII allows automatic generation of complex WSDL specifications along with proper management of web service request handling.

DB Migration, DAO (Database Access Objects), Active Record and Query Builder
Web developers can model the database data in the form of objects, avoiding the need to write complex and tedious SQL queries repeatedly.

Compatibility with Third Party Code
The design of YII is compatible enough to work with third-party code, so you can use code from the Zend Framework or PEAR with your YII web application.

Support for AJAX Widgets
YII has a collection of AJAX-enabled widgets so that you can write versatile and efficient user interfaces with simplicity and ease. The framework also has a separate library that contains all the user-contributed components.

Theming and Skinning Features
YII lets you easily switch the appearance of your YII-powered website through its skinning and theming features.

Fully Fledged Documentation
Each and every property or method is documented in detail, along with various comprehensive tutorials for assistance.

Safe and Secure
Several security measures come with the YII PHP Framework which protect your web application from XSS attacks (Cross-Site Scripting), CSRF (Cross-Site Request Forgery), SQL injection and cookie tampering.

Functional and Unit Testing
Based on Selenium and PHPUnit, one can write and run functional tests and unit tests.

Error Logging and Handling
Log messages are filtered, categorized and then routed to various destinations. All error and log messages are presented in a very clear manner.
Support for Localization (L10N) and Internationalization (I18N)
This framework provides support for interface localization, message translation, time and date formatting, and number formatting.

Layered Caching Scheme
This framework has wide support for fragment caching, dynamic content, data caching and page caching. You can easily change the storage medium of caching without making any modifications to the application code.

Support for Automated Code Generation
There are several code generation tools which will help you generate code for various features such as CRUD, form input and so on.
OPCFW_CODE
I have a Python server that I run on my computer. I would like to have it run on a cloud server, as well as a VPN server with a WireGuard setup, so that I can access a home device remotely and have security.

31 freelancers bid an average of $169 for this project

Hello Dear, I am a VPN expert and I have hands-on experience with OpenVPN, IPsec, L2TP, PPTP, SSL, etc. I have a Cisco CCNP certificate. I have great experience in various network technologies such as VLAN, STP, OSPF, More

Hi, how are you? I have read your post and it is really interesting to me. I have great experience with Python with Pandas, NumPy. I've used Django and Flask etc. and have experience in deep learning and image processing. More

I can do it. I have 9+ years of experience in this field and can deliver good quality work. I have read the guidelines of your work. I believe that I can provide you the best quality work you are anticipating from this platform. More

Hello Dear, I am an expert network engineer and I am working as a professor and expert consultant network engineer at a multinational ISP. I have great hands-on experience of more than 10 years in various network tech More

Cloud Server and VPN. Good morning. Hi, I am a very experienced statistician, data scientist and academic writer. I have completed several PhD-level thesis projects involving advanced statistical analysis of data. I ha More

Okay, I understood your requirements, but I have a few doubts; text me so I can clear up all my doubts right away. I can help you and I am ready to start work now. Thank you.

Hello, I'm an IT expert with more than 15 years of experience in the IT industry. I'm a Cisco Certified networking professional 300-100 and 300-115 and Linux Certified Professional LPI 101, 102, 103 and Red Hat certified More

Hi, I see that you need some assistance with a Cloud Server and VPN. I have skills in C, C#, C++, Java and I have experience in web development.
I have skills with the following programming languages: C, C++, C#, Python. More

✔✔✔ Hello, Ellen A. Hope you're doing well. I have been working with VPN, Internet Security, Computer Security, Coding and Python for over 7 years. I have read your project description carefully and I hope my experi More

Hi sir, hope you're doing well. I'm familiar with VPNs and right now I have several VPN servers running with no problem. Moving your operations from your own server to the cloud is a wise thing to do and many people are doing More

Hi, I have good experience in the related domains that you need, such as Network, Windows, Linux, sysadmin, Hosting, Cloud (AWS) and security. I can solve your problem ASAP and would be happy to work together; please contact w More

Hello, as a senior developer, I can help you to manage your server as you want. I am familiar with Python, Linux, Ubuntu server, Engineering, VPN, etc. As you can see from my profile, I have much experience with many proj More

Experienced and top-performing professional in systems & network administration, with more than 15 years of extensive experience. Ability to oversee and lead projects of great magnitude and importance while being conscientiou More

Hi, I'm currently working at an ISP as a core engineer with good knowledge of switching, routing, LAN, WAN, cloud installation, system administration, Linux, Unix, Docker, GitHub, VMs and Active Directory. Please open a chat for mor More

Hello client. I can implement your requirement using AWS Lambda. It's better than a VPN. Please invite me to chat and give me a chance to serve you. Sincerely, thanks.

Hello. I can solve your problem as quickly as possible. A truly capable person doesn't speak much. I am waiting for your chat. Thank you.

Hello, nice to meet you! I have read your project requirements and I am sure I can complete the project. I can help you. Thank you.
I am a specialist in cloud architecture and I also hold AWS and GCP cloud certifications.
OPCFW_CODE
This blog has been a long time coming. I have wanted to post on a number of things, so completing the blog became a priority. Before I jump into topics of interest, I want to list some of the technologies driving the blog.

Hugo

The more I use Hugo, the more I love it. My first attempts at a blog used database-driven content management systems (CMSs), which made sense at the time. Database-driven CMSs are still relevant for large, distributed teams that need a central point to manage their content. The rise of static site generators allows us to sidestep the reliance on a back end for simple sites. There is a static site generator suited for everyone, but it was when Hugo started supporting org-mode for content that I became hooked. The ability to customise Hugo to your tastes is a big plus. It handles the content and then quickly gets out of the way. It is fast, stable and provides all the functionality I need. I have only praise for the team; Hugo is a joy to work with.

GitHub & GitHub Pages

You have a static website but you now need SSL or some simple server functionality? Enter Cloudflare, helping secure your website and bringing other benefits as well. It solved the issues I had, and it solved them for free.

Gulp & NPM

There are options with task runners too, but I have grown very fond of Gulp. In time, I may simplify my builds using more generic tools (i.e. npm natively or bash scripts), but Gulp does what I need it to and it does it well. Gulp certainly still has its place, but after working with Webpack I have grown more accustomed to NPM scripts. Moving from Gulp to NPM scripts allowed me to remove a 'minor' dependency and streamline my build process.

Susy, CSS Grid & Flexbox

For the life of me, I cannot understand why Susy has not been more widely adopted. I have been using Susy as a Sass grid system for a few years, and the flexibility it gives is unmatched. The rise of flexbox and CSS grid will eventually replace Susy.
Until then, it allows greater freedom in positioning and layout without the expense of my sanity or weekends.

For its time, Susy was amazing! It added so many options that were either not available or took too long to develop and maintain. But after CSS grid and flexbox arrived (and Susy started to shut down), there really was no excuse to hold onto the past. A combination of CSS grid and flexbox allows for almost limitless responsive layout options. I decided to try it out and recreate the existing Susy layout using CSS grid and flexbox. And where better than here!

JavaScript

The blog runs Analytics but should still function with JS disabled. Not a day goes by when I do not write some JS, but this blog is as much about experimenting and prototyping as it is about blogging. I'll save JS for the good stuff.

Emacs

My love of Emacs grows. After a number of years using it as a time-tracking and scoping tool, I recently took the plunge and started developing in it. My quality of life has increased significantly.

Question: Is it the only IDE solution, and does it do everything better than all the others?
Answer: No, but it can do so much once you get over the learning curve and play to its strengths.

Tip 22 from The Pragmatic Programmer was "Use a single editor well". This struck me as odd because I was using a number of IDEs simultaneously. Each one had its specific niche but fell short in other respects. I now see the advantage of consolidating my life in a single editor. I use other IDEs when needed (Visual Studio, Xcode, Unity and Unreal), but Emacs feels like home every time I fire it up.
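As an illustration of the Gulp-to-npm-scripts move mentioned earlier, the build section of a package.json can stay very small. The script names and commands below are illustrative only, not this site's actual build:

```json
{
  "scripts": {
    "clean": "rm -rf public",
    "css": "node-sass src/scss -o public/css",
    "build": "npm run clean && npm run css && hugo",
    "serve": "hugo server -D"
  }
}
```

Chaining plain CLI tools with `&&` replaces an entire gulpfile plus its plugin dependencies, which is exactly the "remove a 'minor' dependency" win described above.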
OPCFW_CODE
🍀 The Job: At Prophecy Labs, we rely on insightful data to inform our solutions and collaborate closely with our end clients, and we're seeking data engineers to help us grow further. Our ideal hire will have mathematical and engineering interest or expertise, combined with curiosity and creativity about data. You'll wear many hats in this role, but much of your focus will be building out Python/Java/Scala ETL processes and writing superb SQL. Beyond technical prowess, you'll need the soft skills it takes to communicate highly complex data trends. We're looking for someone willing to join a fast-growing, young, fun-filled, innovative and cross-cultural firm. We will provide initial training for candidates we find to be a good fit, who can then upskill into the experienced data engineer this role calls for.

- Work with data to solve business problems, building and maintaining the infrastructure to answer questions and improve processes.
- Help streamline our data workflows with a stable Data Lake or Data Warehouse, using streaming or batch data pipelines.
- Work closely with the data science and business intelligence teams to develop data models and pipelines for analytics, reporting and machine learning projects.
- Be an advocate for best practices, scalable programming skills and continued learning.

In this role, you can expect to work on consulting assignments in your first year, alongside internal projects. You will have the opportunity to work with cutting-edge technologies and play a key role in driving our company's success.

🙌 What to expect at Prophecy Labs: Prophecy Labs is a young startup founded by passionate data experts. Your team members will be experienced Data Scientists and Machine Learning Engineers from around the globe, each with a critical mindset and a healthy dose of fun. We value honesty, transparency, and open communication.

🇪🇺 Our offices are located in the center of Brussels, in the beautiful Gare Maritime.
With our office, we strive to create the ideal breeding ground to nurture creativity and innovation. As a Consultant, you will occasionally be doing on-site visits at our clients; however, most of our work is done remotely.

🔎 The job requires:

- Bachelor's degree in computer science, information technology, engineering or equivalent
- 3 or more years of relevant experience with Python/Java/Scala, SQL and data visualization/exploration tools
- Good working knowledge of the Agile framework, versioning tools (like Git), job schedulers (like Airflow), DevOps CI/CD deployments, etc.
- Familiarity with the AWS/Azure/GCP or any cloud ecosystem
- Communication skills, especially explaining technical concepts to non-technical business stakeholders
- Comfort working in a dynamic, research-oriented team with concurrent projects and consulting assignments

🧧 What we offer you:

- An attractive compensation package with extralegal benefits (group and hospitalization insurance, mobility solutions, meal and eco vouchers) which we will keep competitive over time
- Challenging data and machine learning projects across industries
- Excellent opportunities to develop, personally and professionally, and spread your wings within a young and flexible organization where diversity and inclusion are the standard
- Through our mentorship program you will have a dedicated mentor who will help you define your own growth path
- A healthy work environment with a focus on well-being, connectivity, feedback, and open communication
- Afterwork apero, lunch-time gym breaks, team lunches… When you want to let off some steam with the team, we've got you covered
- An individual budget for purchasing the hardware, tools and technology of your choosing. Your budget also allows you to invest in the latest training resources and certification paths

Ready for a new adventure? Great! Apply now via this link! 🎉
OPCFW_CODE
[lustre-discuss] Chunk of file -> LNET node
andreas.dilger at intel.com
Thu Mar 2 16:46:05 PST 2017

On Mar 2, 2017, at 12:31, François Tessier <ftessier at anl.gov> wrote:
> Correct me if I'm wrong: when a file is created on a Lustre fs, a set of
> OSTs (depending on the stripe count) is assigned.

... a set of OST objects is assigned.

> It means that the chunks of file (of size stripe_size) will be distributed
> among these OSTs. To each OST corresponds a set of LNET nodes.

I'd say "Each OST is hosted by an OSS node".

> From an application point of view, when the file is effectively written, the
> chunks are sent to the OST(s) through the corresponding set of LNET nodes.
> My questions are:
> - How to know (if possible using the Lustre API), for each chunk, what
> is the corresponding LNET node?

After the fact this is relatively straightforward. You can use the FIEMAP ioctl (via the "filefrag" utility from Lustre e2fsprogs) running on any client to report exactly the placement of each byte of the file on each OST.

In advance of actual file IO (or also after the fact), the formula for each file is basically:

    fetch file layout via llapi_layout_get_by_path() or similar
    stripe_index = (logical file offset / stripe_size) % stripe_count
    OST index = llapi_layout_ost_index_get(layout, stripe_index)

> - Is this distribution decided at file creation? In other words, is this
> distribution based only on offsets in file?

Yes, round-robin (RAID-0) striping is currently the only form of file layout, and the OST object allocation is done when the file is first opened. The OST object used is round-robin based only on file offset, as shown above.

It is possible to "change" the layout of a file after it was written using the "lfs migrate" command, though this is essentially rewriting the file content after the fact to map to new objects/OSTs as requested.
We are also working on new features, PFL for the Lustre 2.10 release (see http://wiki.lustre.org/images/1/1a/Progressive-File-Layouts_Hammond.pdf) and DoM for 2.11 (see http://wiki.lustre.org/images/8/8f/LUG2014-DataOnMDT-Pershin.pdf), that will allow each file's layout to have different segments based on the file offset, so that it is possible to have a different stripe count, stripe size, and even different classes of storage based on the file offset (e.g. SSD for the first 1MB index, HDD for the rest of the file). This will allow a great deal of flexibility for file layouts if applications/libraries need it, and will improve "out of the box" performance for users that don't want to deal with the details.

Lustre Principal Architect
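The offset-to-OST formula in the reply above can be sketched in a few lines of Python. The layout list and stripe size are hypothetical, and a plain list lookup stands in for llapi_layout_ost_index_get():

```python
def ost_for_offset(offset, stripe_size, stripe_count, ost_indices):
    """Map a logical file offset to the OST holding it (RAID-0 striping)."""
    stripe_index = (offset // stripe_size) % stripe_count
    # In C this lookup would be llapi_layout_ost_index_get(layout, stripe_index)
    return ost_indices[stripe_index]

MiB = 1 << 20
layout = [3, 7, 1, 5]  # hypothetical OST index per stripe slot, stripe_count = 4

print(ost_for_offset(0 * MiB, MiB, 4, layout))  # 3: offset 0 -> stripe slot 0
print(ost_for_offset(5 * MiB, MiB, 4, layout))  # 7: 5 % 4 == 1 -> stripe slot 1
```

Because the allocation is fixed when the file is first opened, this mapping is stable for the life of the layout, which is why it can be computed in advance of any IO.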
OPCFW_CODE
ThreadLocal get() returning null sometimes after Spring Boot upgrade from 2.1.4 to 3.4.3

This is the code:

public class ContextManagerImpl implements ContextManager {

    private static final ThreadLocal<Context> ctx = new ThreadLocal<Context>();

    @Override
    public Context getContext() {
        if (ctx.get() == null) {
            ctx.set(new Context("", "")); // Dummy context. This should never happen
        }
        return ctx.get();
    }

    @Override
    public void begin(Context context) {
        ctx.set(context); // Verified context passed is never null or blank
    }

    @Override
    public void end() {
        if (ctx != null) {
            ctx.remove();
        }
    }
}

public final class Context implements Serializable {
    private String s1;
    private String s2;
}

There are 2 threads which are using this class with different thread-local contexts. It works fine most of the time; however, sometimes even when begin() sets the value in ctx properly, getContext() returns the dummy context. I suspect there is a race condition somewhere, but given that ThreadLocal set() and get() are thread-safe and the initialisation of ctx is done at declaration, this should never happen.

Note: I have upgraded Spring Boot to 3.4.3 but am still using JDK 8.

If Spring guarantees that begin(...) is called before getContext(), then it sounds to me like one thread is doing the call to the begin(...) method and another thread is calling getContext(). If this is the case, getContext() will return null. ThreadLocal is designed to provide a different context for each thread. Is that what you want? I don't fully understand your wiring, but I'm suspicious of your use of ThreadLocal. What are the Spring Boot guarantees around threads?
I suspect that you should instead just make it a volatile field:

private static volatile Context ctx; // maybe set to new Context("", "");

// this should be called by spring boot
@Override
public void begin(Context context) {
    ctx = context;
}

public Context getContext() {
    return ctx;
}

If you are worried about race conditions where multiple instances of your ContextManagerImpl are being initialized with the same context, then you could use an AtomicReference instead:

// may want to initialize with a dummy context
private static final AtomicReference<Context> ctxRef = new AtomicReference<>();

// this should be called by spring boot
@Override
public void begin(Context context) {
    ctxRef.compareAndSet(null, context);
}

public Context getContext() {
    return ctxRef.get();
}

Yes, it is guaranteed that begin() is called before getContext(). And yes, I want each thread to get a different context. If I make the variable volatile or an AtomicReference then both threads won't have different copies. The problem is that for a thread, even if begin() is called to set the context, getContext() is returning null sometimes. Given that ThreadLocal is designed to provide a different context for each thread, both threads shouldn't interfere with each other. Not sure what I am missing. Wondering if it's because of Spring Boot 3.4.3 and JDK 8 compatibility?

If thread1 calls begin() to set the context and then thread2 calls getContext(), it will get null, right @pankajsachan? I really don't think that's what you want.

Sorry @Gray if I was not clear before, but both threads will call begin() before calling getContext(), and hence there shouldn't be a null.

Are you sure @pankajsachan that each and every thread that will call getContext() will call begin(...) first? I suspect this is not the case, because you are getting a null. If you are getting a null then your code is not being called like you suspect. Plain and simple.

To give more details: when a gRPC request comes in, in the interceptor I am calling begin(...)
and then during processing of the request, in the gRPC server implementation class, I am calling getContext() to fetch details like headers etc. which I have set in begin(...). I did more reading and found that it is not guaranteed that the gRPC request and response will be handled on the same thread, and hence it is possible that when a gRPC request comes in as part of thread X, begin(...) is called on that thread, while the processing/response using getContext() is done on thread Y? Just a theory though.
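That thread-hop theory is easy to demonstrate: a ThreadLocal value written on one thread is simply invisible on any other. This sketch (names hypothetical) reproduces the interceptor/handler split deterministically:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CTX = new ThreadLocal<>();

    // Sets the value on one thread, then reads it from a different thread,
    // mimicking the interceptor -> handler thread hop described above.
    static String valueSeenOnOtherThread() throws InterruptedException {
        Thread setter = new Thread(() -> CTX.set("request-headers"));
        setter.start();
        setter.join();

        final String[] seen = new String[1];
        Thread reader = new Thread(() -> seen[0] = CTX.get());
        reader.start();
        reader.join();
        return seen[0]; // null: the reader thread never called set()
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(valueSeenOnOtherThread()); // null
    }
}
```

If gRPC may switch threads between the interceptor and the handler, a bare ThreadLocal cannot carry the context across the hop; gRPC's own io.grpc.Context is designed for exactly that kind of cross-thread propagation.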
STACK_EXCHANGE
Flux balance analysis (FBA) is based on the stoichiometric constraints of the metabolic reaction network, and estimates the reaction fluxes by maximizing a biological objective function, such as biomass production. However, such biological objectives are not always valid, since cells are not always in pursuit of maximizing their own growth, especially cells in multicellular organisms. In a recent paper published in BMC Systems Biology, Lee et al. integrate absolute gene expression into metabolic flux prediction, and improve the predictions of experimentally measured fluxes.

If we recall, FBA is formulated as:

    maximize   c^T v
    subject to S v = 0
               lb <= v <= ub

where S is the stoichiometric matrix based on the constructed metabolic network, and v is the flux vector. lb and ub are the lower bounds and upper bounds of the fluxes, and c defines the objective function.

The improved FBA model proposed in the Lee paper takes the absolute gene expression (measured by RNA-seq) into account. Instead of maximizing a biological objective function, they try to maximize the correlation between the predicted flux and the absolute gene expression measurement. In the revised model, the objective is to minimize:

    sum_i |v_i - d_i| / e_i

where v_i is the flux of reaction i, and d_i is the reaction data obtained by mapping the gene expression data to reaction i. e_i is the error in data point i as calculated in the gene-protein-reaction mapping process. Basically, the objective is a weighted sum, with weights set according to the confidence in the estimate of the reaction data from gene expression data.

The authors showed that their method outperforms FBA and gene-expression-based FBA extensions (GIMME and iMAT).

1. Orth, J.D., Thiele, I. & Palsson, B.Ø. (2010). What is flux balance analysis? Nature Biotechnology, 28, 245-248.
2. Lee, D., Smallbone, K., Dunn, W.B., Murabito, E., Winder, C.L., Kell, D.B., Mendes, P. & Swainston, N. (2012). Improving metabolic flux predictions using absolute gene expression data. BMC Systems Biology, 6, 73.
3.
Shlomi, T., Cabili, M.N., Herrgård, M.J., Palsson, B.Ø. & Ruppin, E. (2008). Network-based prediction of human tissue-specific metabolism. Nature Biotechnology, 26, 1003-1010.
4. Becker, S.A. & Palsson, B.Ø. (2008). Context-specific metabolic networks are consistent with experiments. PLoS Computational Biology, 4, e1000082.
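To make the steady-state constraint and a weighted-fit objective concrete, here is a toy numerical sketch. The network, fluxes, data and weights are invented for illustration and are not from the paper:

```python
# Toy network (hypothetical, for illustration only):
#   R1: -> A,  R2: A -> B,  R3: B ->
S = [
    [1, -1, 0],   # metabolite A row of the stoichiometric matrix
    [0, 1, -1],   # metabolite B row
]
v = [2.0, 2.0, 2.0]   # candidate flux vector
d = [1.5, 2.5, 2.0]   # "reaction data" mapped from gene expression
w = [1.0, 0.5, 1.0]   # confidence weights (higher = smaller mapping error)

# Steady-state constraint of FBA: S v = 0 for every metabolite
balance = [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(S))]
assert all(abs(b) < 1e-9 for b in balance)

# Weighted distance between predicted fluxes and expression-derived data
objective = sum(wi * abs(vi - di) for wi, vi, di in zip(w, v, d))
print(objective)  # 1.0*0.5 + 0.5*0.5 + 1.0*0.0 = 0.75
```

A real implementation solves this as an optimization over all feasible flux vectors v; the sketch only evaluates the two ingredients (constraint and objective) for one candidate vector.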
OPCFW_CODE
Windows 10 Tip: Install and Troubleshoot .NET Framework

Run and develop applications targeting the .NET Framework. Microsoft .NET Framework 3.5 Service Pack 1 is a full cumulative update that contains many new features.

.NET Framework 3.5 Offline Installer download

.NET Framework Tutorial - For Beginners & Professionals

Please see the note below for instructions on how to repair the Microsoft .NET Framework 4.0 on Windows 8. Download Microsoft .NET Framework for Windows now from Softonic: 100% safe and virus free. This update includes some new features that are based on requests from a specific customer. Unfortunately, its versioning and updating leave a lot to be desired.

Microsoft .NET Framework - Free Download

Here you will see .NET architecture and .NET Framework basics. Microsoft .NET Framework, free and safe download. Microsoft .NET Framework latest version: Introducing Microsoft .NET Framework. The free .NET Framework from the world. As previously announced, starting January 12, 2016, Microsoft will no longer provide security updates, technical support or hotfixes for .NET 4, 4.5, and 4. The .NET Framework does what all frameworks do: it simplifies the building of websites and web applications by giving developers libraries of code and tools.

Download Framework.zip - 13.1 KB

.NET Framework: Introduction. The .NET Framework is the first step to enter into the .NET world. A framework can be defined as building blocks. The .NET Framework is a development platform for building apps for web, Windows, Windows Phone, Windows Server, and Microsoft Azure. Depending on the type of application you want to use on your Windows 10 operating system, you might get an error message regarding the .NET Framework.

Microsoft .NET Framework Version 3.5 SP1. The Microsoft .NET Framework is a component of the Windows operating system. Earlier today the Windows SDK 7.1 (officially called the Windows SDK for Windows 7 and .NET Framework 4. Get the right .NET Framework download for you, as fast as possible, by downloading as little as possible. The Microsoft .NET Framework is deeply integrated into the Windows system and sometimes it can get corrupted, making it impossible to uninstall and reinstall. If you have to install .NET Framework software on a computer not connected to the Internet, or in case you have to install on many computers. You would like to know how to remove and reinstall the Microsoft .NET Framework in order to correct a problem with your Autodesk software.

InfoQ.com is a practitioner-driven community news site focused on facilitating the spread of knowledge and innovation in professional software development. Microsoft .NET Framework is a large class library software framework that is not infrequently required by applications written and compiled with Visual Basic. It is a programming framework used by software applications in order to run. It includes a large library, and it supports several programming languages. The Windows SDK for Windows 8 includes support for the .NET Framework 4.5 development tools and reference assemblies.

.NET Framework Features From .NET 2.0 to .NET 4.5

Microsoft .NET Framework 3.5 Deployment Considerations

The .NET Framework provides the necessary compile-time and run-time foundation to build and run any language that conforms to the Common Language Specification. The .NET Framework is Microsoft's comprehensive and consistent programming model for building applications that have visually stunning user experiences, seamless and... Microsoft .NET Framework, free and safe download. Microsoft .NET Framework latest version: Package of necessary components for Microsoft programs. Microsoft .NET is a... Download .NET Framework 2.0.50727 - Microsoft .NET Framework 4.0: the fourth generation of the .NET Framework platform, and much more.
OPCFW_CODE
Indent-guess isn't working correctly after switching between spaces and tabs

Auto-guessing indent isn't working correctly after changing between spaces and tabs indent style. Please see video: http://youtu.be/IP11Y8H2Q70

@benogle Now that #3719 got fixed, can we get this one?

Still happening with 1.0.19

Still happening with 1.3.2. This was supposed to be fixed before 1.0, or shortly after. What is the problem? Is there some other issue that must be fixed before this one?

Still happening with Atom 1.4 stable and 1.5.0 beta

Retagging to bug since it seems more like a bug than a feature.

Still happening with Atom 1.7

Yep, still here. Most reports of issues seem to be closed without resolving the underlying problem. There is only one open issue I could find (on cursory clicking):

Closed: #5497, #3719, #5192
Open: #4054

The big issue as reported is the inaccurate "guessing" combined with the lack of an editor-wide setting (as opposed to a per-file setting). It may be that several preferences are required instead of one monolithic setting, which is clearly causing unhappiness:

- Default for all files (y/n)
- Default for certain files (by file extension)
- Setting on a per-file basis

The idea would be to be able to re-order these in terms of priority, based on the needs of the user.

@jeffmcneill

> It may be that several preferences may be required instead of having one monolithic setting which is clearly causing unhappiness

well we have editorconfig for that don't we? I don't want atom taking on the functionality of editorconfig. But I'd love it to guess better and play nicely with editorconfig settings.

Since issues related to soft-/hard-tabs seem to get closed without the root problem being fixed, I'll just add comments here rather than create a new issue, or comment on one of the closed issues, even though the comments are slightly tangential to the original issue. Auto-detection of indentation is certainly desirable, especially if it works reliably.
That said, per-language indentation settings are not just desirable, they're a requirement. Most languages have some opinion on preferred indentation; linters will complain if the wrong format is used, and some languages with significant whitespace will simply break if the wrong indentation is used. One-size-fits-all indentation makes Atom extremely cumbersome for anyone who works across multiple languages that have incompatible indentation standards. It appears that #3719 may provide some foundation to build per-language indentation; is this the case? Is there an issue to track this? What's involved?

The tab type and size can already be scoped per-language, and each language can specify its own indentation rules. Some examples:

https://github.com/atom/language-python/blob/master/settings/language-python.cson
https://github.com/atom/language-sass/blob/master/settings/language-sass.cson
https://github.com/atom/language-yaml/blob/master/settings/language-yaml.cson

@50Wliu thanks, I tried searching, but either my google-fu isn't working, or the documentation is lacking. It would also seem sensible to mirror the global tab settings in the language package settings GUI.

> Would also seem sensible to mirror the global tab settings in the language package settings GUI.

Can you elaborate? I believe they already are.

Language packages display a GUI setting for tabLength, but not for tabType or softTab.

@pdf Can you please open a new issue for that?

@50Wliu https://github.com/atom/settings-view/issues/835

Thanks @Ben3eeE, just came to open the issue; I have added a note that both settings should probably be displayed.
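Beyond the per-package settings files linked above, users can scope the same settings themselves in their own config.cson. A sketch of the mechanism (the scope names and values are only an example, not a recommendation):

```cson
# config.cson: per-language overrides via scoped settings
".source.python":
  editor:
    tabType: "soft"
    tabLength: 4
".source.makefile":
  editor:
    tabType: "hard"
```

Scoped blocks like these take precedence over the global editor settings for files whose grammar matches the scope, which is the per-language behaviour the thread is asking for.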
GITHUB_ARCHIVE
Emerson Job Openings for Python Developer | Apply Online @emerson.taleo.net. Are you looking for jobs in Pune? If yes, the Emerson Pune walk-in drive is for you. Individuals with 2-7 years of work experience as a Python Developer are eligible for the walk-ins. Eligible candidates with the desired skills and experience can apply online from the link provided below. Also, check out further requirements and the job description on the given Today Walkins page.

Emerson Pune Walkin Drive

| Job Title | OSI Consulting Hyderabad Job Openings |
| Company | OSI Consulting Pvt Ltd |
| Job Type | Full Time |
| Job Role | Java Developer |
| No Of Vacancies | NA |
| Salary | Not Disclosed By Recruiter |
| Interview Date | 11th Jan 2020, 9 AM onwards |
| Interview Address | OSI Digital Pvt. Ltd., Plot # 37, Hitech City Rd, Madhapur, Hyderabad – 500081 |

Emerson Python Developer Job Description

#1. Able to develop data analysis and data processing engines using Python
#2. Construction of REST APIs using Python
#3. Deployment of applications on AWS/Azure
#4. POCs on new tech stacks & integrating the same at a functional level
#5. Understand the business requirements, system architecture & process guidelines
#6. Code for multiple projects at a time
#7. Ability to code for complex needs under tight timelines
#8. Work with cross-functional / domain teams
#9. Handle end-to-end projects on an individual basis
#10. Ability to work in a fast-paced & agile development environment

Requirements:

#1. Experience of 2 to 4 years, with a minimum of 2 years in Python development (mandatory)
#2. Minimum one year of experience working with Python > 3.x
#3. Implementing & delivering projects using CI/CD tools such as Git, Jenkins, Docker, Kubernetes, Kubeflow Pipelines, etc.
#4. Minimum of one year of experience in building microservices and web APIs
#5. Hands-on experience using numpy and pandas
#6. Experience working with the PostgreSQL database will be an added advantage
#7. Working with Node.js & Node-RED will be an added advantage
#8.
DevOps experience & mindset is a big plus.
#9. Relevant experience using Google OR-Tools would be an added advantage

Emerson Company Overview

At Emerson, we are innovators and problem-solvers, focused on a common purpose: leaving our world in a better place than we found it. Every day, our foundational values (integrity, safety and quality, supporting our people, customer focus, continuous improvement, collaboration, and innovation) inform every decision we make and empower our employees to keep reaching higher. Our Automation Solutions business helps process, hybrid, and discrete manufacturers maximize production and protect personnel and the environment while optimizing their energy and operating costs. Our Commercial & Residential Solutions business helps ensure human comfort and health, protect food quality and safety, advance energy efficiency, and create sustainable infrastructure.

How To Apply For Emerson Pune Walkins?

Experienced candidates from the Pune location can apply for the Emerson Pune Walkin Drive. Shortlisted candidates will be called for the interview. Practice the placement papers and interview questions for the best results. Also, have a glimpse over the Today Walkins page.

Important Documents To Carry For Emerson Careers

#1. An updated resume
#2. Recent photographs
#3. Experience letter
#4. Last three months' payslips
#5. Government ID proofs

- Nirvana Solution Off Campus Drive 2020
- NDOT Technologies Job Openings
- Quintessence Business Solutions Walkins
- Girikon Solutions Walkin Drive in Noida
- E2Open Software Bangalore Job Openings
- Advantmed India LLP Walkins 2020
- OSI Consulting Hyderabad Job Openings

Be on the know, be a winner. Top MNC Registrations. Latest Bank Jobs.

Quote of the day! Knowing is not enough; we must apply. Wishing is not enough; we must do.
OPCFW_CODE
Swift: Index out of bounds error, I'm so confused

I can't understand why this "Index out of bounds" error is occurring. I want to know the difference between "Index out of bounds" and "Index out of range", and why my code causes the error. Thank you.

Code

var input = readLine()!.split(separator: " ").map { Int($0)! }
var arr = input[1...20]
for (i, a) in arr.enumerated() {
    print(i, a)
}
print(arr[0]) // Index out of bounds

Input

1 900 901 902 903 904 905 906 907 908 909 910 911 912 913 914 915 916 917 918 919

Output

0 900
1 901
2 902
3 903
4 904
5 905
6 906
7 907
8 908
9 909
10 910
11 911
12 912
13 913
14 914
15 915
16 916
17 917
18 918
19 919
Swift/SliceBuffer.swift:287: Fatal error: Index out of bounds
2023-05-04 15:01:32.280783+0900 SwiftAlgorithm[86091:6733992] Swift/SliceBuffer.swift:287: Fatal error: Index out of bounds
Program ended with exit code: 9

It's because the array slice isn't copying the elements and creating a new array; it creates a "pointer" into the initial one instead, so the indices don't apply here the same way they would for a new array. This is implemented this way for the benefit of memory efficiency. This is really well described in the Apple developer documentation, as is how the indices are kept from the initial array.

It's not just efficiency... if I say I want a slice with indices 17...23, it's nice to be able to continue to work with those indices and not have to translate them to 0...6 after creating the slice.

The first part of this explanation is irrelevant. The important point is the fact that the indices are maintained, i.e. the slice has indices in the range 1...20 in the question. The view metaphor is also misleading, since copy-on-write semantics apply: modifying the underlying array doesn't change the slice, and modifying the slice doesn't change the underlying array.
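The index-preserving behaviour the answers describe is easy to see in a few lines; this sketch mirrors the question's 900...920 input:

```swift
// A slice shares storage with its base array and KEEPS the base array's
// indices: numbers[1...20] is indexed 1...20, not 0...19.
let numbers = Array(900...920)   // 21 elements, indices 0...20
let slice = numbers[1...20]      // ArraySlice<Int>

print(slice.startIndex)          // 1
print(slice[1])                  // 901
// print(slice[0])               // Fatal error: Index out of bounds

// Safe options: use startIndex, or copy into a fresh zero-indexed Array.
print(slice[slice.startIndex])   // 901
let copy = Array(slice)          // re-indexed from 0
print(copy[0])                   // 901
```

So the fix for the question's code is either `arr[arr.startIndex]` or `var arr = Array(input[1...20])`, which copies the slice into a new array indexed from 0.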
STACK_EXCHANGE
HackTheBox Sauna Write-Up Sauna is an easy difficulty Windows machine where we exploit weaknesses in an Active Directory environment. July 18, 2020 User / jsmith First we will run an nmap scan of the machine IP address, export our results to an HTML file, and view it in Firefox-ESR. Here we can see the machine is running a Web Server, and a number of other services like Kerberos, RPC, and Active Directory LDAP. We do some light, manual recon on the web server and do not find much. One thing that did get my attention is an about page with a list of employees, some of which may have user accounts. Let’s run nmap again, this time using a script to try and enumerate the LDAP. Here we can see that there is a user named Hugo Smith and the Domain Controller is EGOTISTICAL-BANK.LOCAL. Mr. Smith is one of the employee names listed on the about page. Before we move any further let’s add the domain controller and web server names to our /etc/hosts file. Trying different username formats for the users that we found, we try to perform AS-REP Roasting with impacket-GetNPUsers.py, which will try to retrieve a user’s Kerberos AS-REP Password Hash if that user has “Do not use Kerberos pre-authentication” set on their account. We’ve successfully extracted the password hash for user fsmith by AS-REP Roasting. Now let’s copy that hash to another computer that has JohnTheRipper installed and try to crack the hash with the rockyou.txt wordlist. John successfully cracked the hash. We still don’t have a clear entry point for logging into this machine so let’s run nmap again, this time using the -p switch to look at all TCP ports. Here we can see Port 5985 is open. This port handles WinRM (Windows Remote Management), an implementation of WS-Management (Web Services-Management) that can be authenticated with Kerberos. Now we can attempt to log in through WinRM with the credentials we obtained and cracked using AS-REP Roasting and JohnTheRipper. 
We will use Evil-WinRM to do this, as it will give us some added functionality, like the ability to upload and download files between our host and client very easily. After we log in we will find and print the user.txt file. Privilege Escalation / Administrator We run net user and get a list of all user accounts. We will ultimately want to get access to the Administrator account, but we may need to pivot through another account to do so. We can upload and run the WinPEAS enumeration script. This will help us discover user information, privilege settings, groups, stored credentials, and show us other things that may help us find vulnerabilities for privilege escalation. WinPEAS shows us some valuable information about our current user. It has also found stored credentials for the svc_loanmanager/svc_loanmgr account. Next we'll use the credentials that we found to log in as svc_loanmgr. We run our WinPEAS script again, but don't really discover much of anything else. At this point I think it is a good idea to run BloodHound, an Active Directory reconnaissance tool. We can use BloodHound to identify different attack paths through a visualized graph. Here is a good article explaining how to download, install, and get started using BloodHound. After we get everything installed and working we will upload the SharpHound Ingestor script, run it with the appropriate arguments, and download the zipped results. At this time we will also upload mimikatz, which we will talk about later. Afterwards we will download the data, import it into BloodHound and start looking around. Here is the shortest path from svc_loanmgr to Domain Admin. One of the first things we look at is whether any users can use a DCSync attack, which will allow them to simulate the behavior of a Domain Controller and ask other Domain Controllers to replicate information using the Directory Replication Service Remote Protocol (MS-DRSR).
By searching Find Principals With DCSync Rights we find that our current user, svc_loanmgr, should be able to perform this attack. We can run the mimikatz program we uploaded earlier, select the DCSync option, and use it to dump the NTLM hash for the Administrator account. With the Administrator hash, we can log in through WinRM and print the root.txt file. With this exploit we can also get the KRBTGT password hash, giving us the ability to create Golden Tickets, which will grant us unfettered access to the network. I don't have much experience with Windows Active Directory attacks, so this was a great machine for getting started and learning some basic principles and methods. I very much enjoyed the process.
Setting objects in an ArrayList to null

If I was to have an ArrayList of objects, and I set a few of them to null, are they susceptible to gc? Or will they stay since they are still being referenced by the ArrayList? For example:

for (NPC n : NPCs) {
    n.act();
    n.draw(frame);
    if (n == outOfMap) {
        n = null;
    }
}

If that loop is "always" being iterated over, will the outOfMap objects be collected? Or simply stay there with a null value?

Anything which is reachable is not eligible for garbage collection. Since a reference still exists somewhere, they will remain in memory. You need to actually remove them from the ArrayList with ArrayList.remove.

I get a ConcurrentModificationException. How can I bypass that?

@ChrisHayes: You can also set the n-th element to null in the list.

@BenMarshall: Do you really have to do that within the loop? You can just throw away the list afterwards. If it is necessary to be so timely, you need to use Iterator#remove.

It's not that I want to be extremely timely, it's that the list will always be there and be iterated over, so I can't just throw away the list. I just want to add and remove objects from it depending on their position, and since the remove method gives me a ConcurrentModificationException, I was seeing if setting them to null would do the job.

@Thilo: You can do that, but you also have to be wary, because 1) some list implementations can throw a NullPointerException, and 2) list implementations are not required to support the set method.

List implementations are not required to support remove, either ...

You need to distinguish between objects and references to them. For the same one object, multiple references could point to it. When the number of references reaches 0, the object is a candidate to be removed by the garbage collector.
In this loop:

for (NPC n : NPCs) {
    n = null;
}

the n reference is different from the reference the ArrayList uses to track the object in the list, so setting n to null reduces the references to the object down by one, but it leaves the reference from the ArrayList to the object intact, and as such, the object is not a candidate for garbage collection. You should use the remove method to remove the object from the ArrayList. At that point, if no other references to the object exist elsewhere, it'll be a candidate for removal.

"so setting n to null reduces the references to the object down by one". Which would have happened anyway in the next line as the loop enters its next iteration (and n is set to the next element).

I think you're confusing variables and objects. If a variable references null, there's nothing to GC, since variables aren't GC'd; objects are. The ArrayList won't be GC'd if there's a valid reference to it, and if it contains nulls, then there's nothing in it to be held in memory or to GC. If, on the other hand, the ArrayList contains objects, and then later you null an item or two, then it's not the ArrayList that will determine the GC-ability of those objects, since its reference to them has been severed, but whether or not other objects reference them.
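A minimal, runnable sketch of the distinction discussed above (String stands in for NPC so the example is self-contained): nulling the loop variable leaves the list untouched, while Iterator.remove actually drops the list's reference and avoids the ConcurrentModificationException.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class GcDemo {
    public static void main(String[] args) {
        List<String> npcs = new ArrayList<>(List.of("goblin", "outOfMap", "orc"));

        // Nulling the loop variable only clears the local reference;
        // the ArrayList's own reference to the object is untouched.
        for (String n : npcs) {
            if (n.equals("outOfMap")) {
                n = null;
            }
        }
        System.out.println(npcs.size()); // still 3

        // Iterator.remove drops the list's reference without triggering
        // a ConcurrentModificationException.
        Iterator<String> it = npcs.iterator();
        while (it.hasNext()) {
            if (it.next().equals("outOfMap")) {
                it.remove();
            }
        }
        System.out.println(npcs.size()); // now 2
    }
}
```

Once the list's reference is gone (and no other live references remain), the object becomes unreachable and thus eligible for collection.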
Adding Markup to Files

Markup delimiters appear inside special comment markers in playground files and Swift source files. Single line comment markers contain one line of text containing markup delimiters:

single line comment marker
markup formatted text

For example, the following markup displays a link to the next page of a playground project in the rendered documentation for a playground.

//: [Next Topic](@next)

Block comment markers enclose one or more lines of text containing markup delimiters:

block open comment marker
markup formatted text
markup formatted text
block close comment marker

For example, the following markup adds a parameters section to the Quick Help for a Swift method.

- cubes: The cubes available for allocation
- people: The people that require cubes

Ordering of Rendered Content

Markup delimited content appears on the rendered page in the same order as it appears in the source editor.

Ordering of Content in Playgrounds

Playground pages in rendered documentation mode show the rendered markup interspersed with other content. Two screenshots of the same playground illustrate this: the raw markup is on the left and the rendered documentation is on the right. The red boxes show each of the markup delimiters and their corresponding rendered output. For example, the top box in each view of the page shows the first block of markup that includes a first-level heading, a numbered list, and an image. The next box shows a heading, a line of text, and then a line of text with a bolded word. The final box shows a single line comment that is a link to the next playground page. The Swift content appears interspersed with the markup content. For example, the constant definition for myPageShortTitle is the first item below the first box in each view of the page.

Ordering of Content in Quick Help

Quick Help groups the markup into different sections. The rendered content in each section appears in the same order as the unrendered markup.
For example, the following markup generates the Quick Help for the method.

- important: Make sure you read this
- returns: a Llama spotter rating between 0 and 1 as a Float
- parameter totalLlamas: The number of Llamas spotted on the trip

The markup in lines 2-4 and line 8 generates the Description section. The order of rendered elements in the section is the same as the order in the markup. The text from line 1 is shown at the top of the description, the important callout from line 4 is shown next, and then the text from line 8. Line 6 generates the content in the Parameters section, even though it comes before the markup used for the final line of the description. Similarly, line 5 generates the Returns section, which comes after the parameters section. For more information on using markup to generate Quick Help, see Formatting Quick Help.
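As an illustration, here is how the Quick Help delimiters above might sit over a complete Swift declaration. The function name and body are hypothetical (they are not part of the original reference); only the markup lines come from the example above.

```swift
/// Returns a Llama spotter rating for the trip.
///
/// - important: Make sure you read this
/// - parameter totalLlamas: The number of Llamas spotted on the trip
/// - returns: a Llama spotter rating between 0 and 1 as a Float
func llamaRating(totalLlamas: Int) -> Float {
    // Hypothetical implementation: scale the count into the 0...1 range.
    return min(Float(totalLlamas) / 100.0, 1.0)
}
```

Option-clicking llamaRating in Xcode would then show the important callout, the parameter, and the return description in their own Quick Help sections.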
Is there a way to stimulate taste buds after stroke?

After a stroke late last year, my 87-year-old grandfather hasn't been doing so hot. After being discharged from the hospital's rehab center, he has been in our care for over two months. Keeping food down has been interesting lately, as he has developed a strong dislike for most foods over the last month and a half since what seemed like another stroke. In accordance with our observations, credible resources confirm that strokes can cause "altered smell, taste, hearing, or vision".1,2 It seems that there is plenty of material confirming the degeneration of taste, but solutions seem harder to come by, and seem to require diagnosis by a physician, brain scans, etc., which wouldn't be viable right now. Is there a way to stimulate taste buds, or help the brain recognize taste again? Edited on Tuesday, March 8: After we heard that one of his somewhat recently prescribed medications might have been causing a bad taste (additional source) in his mouth, we discontinued use about two weeks ago. Now there might be more foods he will tolerate (it's hard to say for sure since we're trying foods we hadn't before), but he still hates others. I haven't found a significant number of sources that seem credible suggesting a correlation, and wonder if it's probably not related. I know that Health.SE "is not intended as a substitute for individualized diagnosis and treatment by a qualified healthcare provider." He is on hospice, and therefore his insurance won't cover hospital visits, and his finances are quite minimal, so any reasonable advice I can get would be greatly appreciated, and won't really be a substitute for professional advice. (Of course, unreasonable advice would also be welcome, given good references.) I edited your question to make it less of a request for personal medical advice. If you object to my edits, you can revert them. @CareyGregory, Thanks!
I am curious how permanent the cause of this trouble is, but I do see some benefit in your edit. Besides, even had the condition been likely to persist, I don't know that realizing it would dramatically alter our anticipated course of action; and if time will cure it, we may well find that out anyways. Oh, and thanks for the title change; that's far more fitting. I'm sorry this answer came so late. I starred this almost two years ago, but never had a chance to reply. Best wishes for recovery. According to this case report, Testing should be considered if the patient is not meeting goals of rehabilitation, because altered taste perception may lead to depression, weight loss, and malnutrition, all of which may act to confound rehabilitation efforts. I could not find research about directly re-stimulating the taste buds, but at the very least, supporting the person emotionally and helping them reach nutritional goals should contribute to the rehabilitation efforts by their own body. It seems that recovery is slow, but possible. ... by 9 months post stroke [she] had identified several foods that she could taste and enjoy. She found that tomato sauce with pasta or beef dishes were the most palatable, and she replaced coffee with tea. She noted that sweet foods and sugar tasted as expected, and she was able to enjoy chocolate. One year following the stroke, she continued to perceive the taste of chicken and potatoes as “sawdust.” Best wishes to anyone recovering from a stroke and their families. Altered Taste and Stroke: A Case Report and Literature Review Thank you very much for your reply at any rate, even if it's a little late. My grandpa passed away shortly after I asked this. I'm sorry. Our thoughts go out to you and your family.
Testing an app is an important part of any development process. In the highly competitive world of iOS apps, it's essential to give users a great app experience. This includes a bug-free experience while using the app. Earlier, most testing was done manually on physical devices. But with the variety of iOS devices available today, it is difficult to test on all types of physical devices. An iOS app can run on an iPad, Apple Watch, and Apple TV, in addition to an iPhone. All of these devices also come in different sizes and with different processors. With Apple charging a premium price for devices, it's not feasible for a small startup to buy each of the devices for manual testing. The solution to this problem is iOS testing frameworks, which are easy to use. These testing frameworks are all you need to test on various simulated devices, which saves a lot of time and money. In this post, you'll first learn about iOS testing frameworks in general. After that you'll learn about eight different iOS testing frameworks.

What are iOS Testing Frameworks?

Basically, you use a testing framework to create and run test cases. It enables you to eliminate manual testing, which saves both time and resources. In manual testing, testers use the app directly to check its functionality. iOS testing frameworks offer Apple-specific features, like testing Apple APIs. They also automatically generate detailed reports, which is not possible in manual testing.

How to Pick the Right iOS Testing Framework for You

Before you look at the list of iOS testing frameworks, let's pause a bit and teach you how to pick the best framework for your team. Here are the main criteria you should use for this decision:

- Price
- Supported platforms
- Learning curve
- Coding skill requirements

Price

Price is a crucial factor you must pay attention to. The key step here is research.
Compare the prices of different solutions—and pay attention to their features; otherwise you'll end up comparing apples to oranges. And, of course, don't forget the budget you have. Negotiation also often pays dividends, so don't be afraid to ask for a discount or some special condition. If you're thinking about picking an open-source solution, always consider the TCO (total cost of ownership). Even with free tools, there are hidden costs you mustn't overlook:

- Steep learning curves
- Infrastructure costs
- Costs related to compliance and data privacy
- Potential time spent acclimating to the open-source tool itself

We're not saying you shouldn't use open-source solutions. Instead, just take the concerns above into consideration when thinking about costs.

Supported Platforms

iOS is the indisputable market-share leader when it comes to the US and other regional markets. But globally, Android is king. There's a high chance your organization also makes Android versions of their apps, especially if you target international markets. If that's the case, picking a testing framework that also supports Android brings some benefits. First, developers and/or QA personnel will have to learn a single tool, reducing the learning curve and making for a faster and more consistent testing experience. Choosing a tool that supports both platforms also potentially reduces cost, since you don't have to pay for two completely different frameworks.

Learning Curve

Learning curve is an essential aspect of making this choice. Before picking your iOS testing framework, try to assess its learning curve:

- Read opinions and reviews from other users
- Watch or read tutorials
- Evaluate the framework for a while (in case it's free or offers a free trial)

Reducing the learning curve as much as possible will reduce the time it takes for the professionals in your organization to be productive with the tool. That way, they can start creating tests and generating value sooner.
Also, time not spent learning the tool can be spent on potentially more valuable activities, thus eliminating opportunity cost.

Coding Skill Requirements

Some frameworks are entirely code-based. Others are completely codeless. A third group mixes the two approaches. Which one should you choose? This is a decision you must make based solely on the makeup of your team. Consider how many professionals you have who are going to be involved in testing and assess their skills and aptitudes. If you pick a testing framework that is code-based, that means developers will have to create the test cases or you'll have to train your QA professionals (in case they don't already know how to write code). If that's not a problem for you, a code-based tool won't harm you. Otherwise, you might consider picking a codeless or hybrid solution.

Eight iOS Testing Frameworks

Now, let's look into 2022's eight most popular iOS testing frameworks for developers.

Appium

Appium, created by Dan Cuellar using C#, is the most popular testing framework. It's popular because developers can use it to test both iOS and Android apps. Developers also like that it's open source and completely free to use. It was originally released in 2011 and converted to open source in 2012. As the most popular test framework, Appium has an active community for support. It also has a large repository of queries and answers about known issues on Stack Overflow. Below is a sample of code in Appium.

TestProject

TestProject is a testing solution in which you record your test steps; it then generates test cases and results automatically. It's one of the few test frameworks that generates tests automatically and doesn't require you to write your test cases in a programming language, so you don't need to learn one. TestProject supports both iOS and Android platforms. You can use it with both iOS simulators and real iOS devices, like iPhone and iPad.
But you need an Apple developer account to use TestProject on physical devices. The feature of TestProject that developers like most is that they can test iOS apps on a Windows operating system. It doesn't require a Mac operating system or Xcode, which are requirements for many iOS testing and development frameworks.

EarlGrey

EarlGrey is an iOS testing platform developed by Google. In fact, Google uses it to test its own iOS apps, like YouTube, Gmail, and Google Calendar. You can easily integrate EarlGrey with the Apple iOS testing framework, XCTest. You can run it from the macOS IDE, Xcode, or from the macOS Terminal application. EarlGrey is completely open-source, but as of now, it has some 187 open bugs. EarlGrey sends all its test data to Google Analytics, and you cannot easily turn this workflow off. Below is a sample of code in EarlGrey.

XCTest

XCTest is the official iOS testing platform. It comes built into the macOS IDE, Xcode. You can use it to perform unit testing, performance testing, and even UI testing. XCTest is geared towards iOS app developers, and you write test cases in Objective-C or Swift. Testers who only know languages like Python or Java and use Selenium testing may find using Objective-C or Swift problematic. Below is a sample of code in XCTest.

Detox

Detox runs its tests on an iOS simulator and has no support for running tests on physical devices, which can be a drawback for some users. Below is a sample of code in Detox.

Calabash

Calabash is an iOS testing framework that takes a different approach from the others. You write Calabash tests in Cucumber, making test cases very easy to write, especially for non-tech people, who find writing test cases in Cucumber easier than in other programming languages. Cucumber lets you write test cases in plain English. So, with Calabash, you don't have to learn any programming language and can write generic test cases (see code below). Calabash is very stable and supports iOS simulators and physical devices.
Below is a sample of code in Calabash.

Feature: Registration feature
  Scenario: As a new user I can register in the app
    When I press "Register"
    And I enter my username
    And I enter my email
    And I enter my password
    Then I see "You are registered successfully"

OCMock

OCMock is not a complete testing solution; instead, you use it to create mocks. Mocks, special code with which you can simulate things like API calls, are very popular in the testing community. Instead of using a real API call, which costs money, you simulate an API call. Because Xcode has no built-in support for mocks, we use OCMock to create mocks. OCMock stands for Objective-C Mock. And as the name suggests, you write your mocks in Objective-C. It is completely open-source and has very good documentation. Below is a sample of code in OCMock.

KIF

KIF stands for "Keep it Functional." KIF uses the Apple iOS testing framework, XCTest, to create automated tests in the XCTest format. You write your test cases in Objective-C, which, along with Swift, is a language used for iOS app development. Developers nowadays don't favor Objective-C very much, which limits the use of KIF. They prefer the more modern Swift programming language. Although KIF tests run faster than tests written in Swift using XCTest, the language restriction of using only Objective-C is an issue. Below is a sample of code in KIF.

In this post, you learned what iOS testing frameworks are, and then you learned about the eight best iOS testing frameworks. You learned about the benefits and drawbacks of each of them, and we provided sample code for each. We can also use the scriptless testing platform Waldo instead of writing all test cases manually. We just need to upload the APK or IPA file, and then Waldo can automatically generate and run your tests. Create a free Waldo account here to test its features.
What is a CRUD table?

CRUD refers to operations on a table: create, retrieve, update, and delete. Those operations can be executed on any table. They are bundled together as they are the most basic operations.

What are CRUD operations in MySQL?

What is CRUD? CRUD is an acronym for Create, Read, Update, and Delete. CRUD operations are basic data manipulation for a database. We've already learned how to perform create (i.e. insert), read (i.e. select), update, and delete operations in previous chapters.

Where is CRUD used?

In computer programming, create, read, update, and delete (CRUD) are the four basic operations of persistent storage. CRUD is also sometimes used to describe user interface conventions that facilitate viewing, searching, and changing information using computer-based forms and reports.

What is another word for CRUD?

On this page you can discover 9 synonyms, antonyms, idiomatic expressions, and related words for crud, like: dirt, filth, grime, muck, clean, skank, dross, beardy and gunk.

Is CRUD a REST API?

Whereas REST is one of the most popular design styles for web APIs (among other applications), CRUD is simply an acronym used to refer to four basic operations that can be performed on database applications: Create, Read, Update, and Delete. CRUD vs REST Explained.

What is a REST API example?

For example, a REST API would use a GET request to retrieve a record, a POST request to create one, a PUT request to update a record, and a DELETE request to delete one. All HTTP methods can be used in API calls. A well-designed REST API is similar to a website running in a web browser with built-in HTTP functionality.

Is REST only for CRUD?

A popular myth is that REST-based APIs must be CRUD-based – that couldn't be further from the truth. It is simply one pattern in our API design toolbox.

What is CRUD illness?
Doctors may call it a viral upper respiratory illness, but to you it's the crud — that bad-news combination of sore throat, runny nose and cough that typically comes on in winter and hangs on till spring.

What is a CRUD API?

CRUD stands for Create, Read, Update, and Delete. But put more simply, in regard to its use in RESTful APIs, CRUD is the standardized use of HTTP action verbs. … Last but not least, do not mix HTTP action verb definitions – if you are telling your developers to make a POST, do not pass the data along as a querystring.

How do I apply CRUD?

There are three high-level steps to building our CRUD app: setting up Budibase, creating our data structure, and designing our user interface.

Create your data structure
- Name – Checked out by
- Type – Relationship
- Table – Users
- Define the relationship – One Users row -> many Books rows
- Column name in other table – Books

What are CRUD operations in Angular?
- A home component that renders a table of products and contains the CRUD operations
- A details component that displays the details of a specific product
- A create component for creating products
- An update component for updating products
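To make the table-operation and HTTP-verb mappings above concrete, here is a minimal sketch of all four CRUD operations using Python's built-in sqlite3 module as a stand-in for MySQL (the products table and its columns are invented for illustration; the SQL statements have the same shape in MySQL):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")

# Create -> INSERT (HTTP POST in a REST API)
conn.execute("INSERT INTO products (name) VALUES (?)", ("widget",))
# Read -> SELECT (HTTP GET)
row = conn.execute("SELECT name FROM products WHERE id = 1").fetchone()
# Update -> UPDATE (HTTP PUT)
conn.execute("UPDATE products SET name = ? WHERE id = 1", ("gadget",))
# Delete -> DELETE (HTTP DELETE)
conn.execute("DELETE FROM products WHERE id = 1")

print(row[0])  # widget
```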
One month from today, we’re going to start to turn off basic auth for specific protocols in Exchange Online for customers who use them. Since our first announcement nearly three years ago, we’ve seen millions of users move away from basic auth, and we’ve disabled it in millions of tenants to proactively protect them. We’re not done yet though, and unfortunately usage isn’t yet at zero. Despite that, we will start to turn off basic auth for several protocols for tenants where it was not previously disabled. Starting October 1st, we will randomly select tenants and disable basic authentication access for MAPI, RPC, Offline Address Book (OAB), Exchange Web Services (EWS), POP, IMAP, Exchange ActiveSync (EAS), and Remote PowerShell. We will post a message to the Message Center 7 days prior, and we will post Service Health Dashboard notifications to each tenant on the day of the change. We will not be disabling or changing any settings for SMTP AUTH. If you have removed your dependency on basic auth, this will not affect your tenant or users. If you have not (or are not sure), check the Message Center for the latest data contained in the monthly usage reports we have been sending since October 2021. The data for August 2022 will be sent within the first few days of September. What If You Are Not Ready for This Change? We recognize that unfortunately there are still many tenants unprepared for this change. Despite multiple blog posts, Message Center posts, interruptions of service, and coverage via tweets, videos, conference presentations and more, some customers are still unaware this change is coming. There are also many customers aware of the deadline who simply haven’t done the necessary work to avoid an outage. Our goal with this effort has only ever been to protect your data and accounts from the increasing number of attacks we see that are leveraging basic auth. 
However, we understand that email is a mission-critical service for many of our customers and turning off basic auth for many of them could potentially be very impactful. Today we are announcing an update to our plan to offer customers who are unaware or are not ready for this change. When we turn off basic auth after October 1st, all customers will be able to use the self-service diagnostic to re-enable basic auth for any protocols they need, once per protocol. Details on this process are below. Once this diagnostic is run, basic auth will be re-enabled for those protocol(s). Selected protocol(s) will stay enabled for basic auth use until end of December 2022. During the first week of calendar year 2023, those protocols will be disabled for basic auth use permanently, and there will be no possibility of using basic auth after that. If you already know you need more time and wish to avoid the disruption of having basic auth disabled you can run the diagnostics during the month of September, and when October comes, we will not disable basic for protocol(s) you specify. We will disable basic for any non-opted-out protocols, but you will be able to re-enable them (until the end of the year) by following the steps below if you later decide you need those too. In other words – if you do not want basic for a specific protocol or protocols disabled in October, you can use the same self-service diagnostic in the month of September. Details on this process below. Thousands of customers have already used the self-service diagnostic we discussed in earlier blog posts (here and here) to re-enable basic auth for a protocol that had been turned off, or to tell us not to include them in our proactive protection expansion program. We’re using this same diagnostic again, but the workflow is changing a little. Today, we have archived all prior re-enable and opt-out requests. 
If you have previously opted out or re-enabled basic for some protocol, you’ll need to follow the steps below during the month of September to indicate you want us to leave something enabled for basic auth after Oct 1. To invoke the self-service diagnostic, you can go directly to the basic auth self-help diagnostic by simply clicking on this button (it’ll bring up the diagnostic in the Microsoft 365 admin center if you’re a tenant Global Admin): Or you can open the Microsoft 365 admin center and click the green Help & support button in the lower right-hand corner of the screen. When you click the button, you enter our self-help system. Here you can enter the phrase “Diag: Enable Basic Auth in EXO”. Customers with tenants in the Government Community Cloud (GCC) are unable to use the self-service diagnostic covered here. Those tenants may opt out by following the process contained in the Message Center post sent to their tenant today. If GCC customers need to re-enable a protocol following the Oct 1st deadline, they will need to open a support ticket. During the month of September 2022, the diagnostic will offer only the option to opt out. By submitting your opt-out request during September, you are telling us that you do not want us to disable basic for a protocol or protocols during October. Please understand we will be disabling basic auth for all tenants permanently in January 2023, regardless of their opt-out status. The diagnostic will show a version of the dialog below, and you can re-run it for multiple protocols. It might look a bit different if some protocols have already been disabled. Note too that protocols are not removed from the list as you opt out, but rest assured (unless you receive an error) we will receive the request. Re-Enabling Basic for protocols Starting October 1, the diagnostic will only allow you to re-enable basic auth for a protocol that it was disabled for. 
If you did not opt-out during September, and we disabled basic for a protocol you later realize you need, you can use this to re-enable it. Within an hour (usually much sooner) after you run the diagnostics and ask us to re-enable basic for a protocol, basic auth will start to work again. At this point, we have to remind you that by re-enabling basic for a protocol, you are leaving your users and data vulnerable to security risks, and that we have customers suffering from basic auth-based attacks every single day (but you know that already). Starting January 1, 2023, the self-serve diagnostic will no longer be available, and basic auth will soon thereafter be disabled for all protocols. Summary of timelines and actions Please see the following flow chart to help illustrate the changes and actions that you might need to take: Blocking Basic Authentication Yourself If you re-enable basic for a protocol because you need some extra time and then afterward no longer need basic auth you can block it yourself instead of waiting for us to do it in January 2023. The quickest and most effective way to do this is to use Authentication Policies which block basic auth connections at the first point of contact to Exchange Online. Just go into the Microsoft 365 admin center, navigate to Settings, Org Settings, Modern Authentication and uncheck the boxes to block basic for all protocols you no longer need (these checkboxes will do nothing once we block basic for a protocol permanently, and we’ll remove them some time after January 2023). Reporting Web Service Endpoint For those of you using the Reporting Web Service REST endpoint to get access to Message Tracking Logs and more, we’re also announcing today that this service will continue to have basic auth enabled until Dec 31st for all customers, no opt-out or re-enablement is required. And, we’re pleased to be able to provide the long-awaited guidance for this too right here. 
Basic authentication will remain enabled until Dec 31st, 2022. Customers need to migrate to certificate-based authentication. Follow the instructions here: App-only authentication One Other Basic Authentication Related Update We’re adding a new capability to Microsoft 365 to help our customers avoid the risks posed by basic authentication. This new feature changes the default behavior of Office applications to block sign-in prompts using basic authentication. With this change, if users try to open Office files on servers that only use basic authentication, they won’t see any basic authentication sign-in prompts. Instead, they’ll see a message that the file has been blocked because it uses a sign-in method that may be insecure. You can read more about this great new feature here: Basic authentication sign-in prompts are blocked by default in Microsoft 365 Apps. The Office team is looking for customers to opt in to their Private Preview Program for this feature. Please send them an email if you are interested in signing up: firstname.lastname@example.org. This effort has taken three years from initial communication until now, and even that has not been enough time to ensure that all customers know about this change and take all necessary steps. IT and change can be hard, and the pandemic changed priorities for many of us, but everyone wants the same thing: better security for their users and data. Our customers are important to us, and we do not want to see them breached or disrupted. It’s a fine balance, but we hope this final option will allow the remaining customers using Basic auth to finally get rid of it. The end of 2022 will see us collectively reach that goal, to Improve Security – Together.
How does one do sparse non-negative least squares using $K$ regularizers of the form $x^\top R_k x$? I want to solve: $$ J_{R_K,L1}(x) = ||Ax - y ||^2 + \sum^K_{k=1} \lambda_k x^\top R_k x + \alpha \| x \|_1, \quad x>0$$ of course, in the case where $R_1 = I$ we get non-negative Elastic Net regularization: $$ J_{L2,L1}(x) = ||Ax - y ||^2 + \beta ||x||^2_2 + \alpha \| x \|_1, \quad x>0$$ Without the L1 norm it's trivial to solve the problem: $$ J_{R_K}(x) = ||Ax - y ||^2 + \sum^K_{k=1} \lambda_k x^\top R_k x, \quad x>0$$ as long as one can write it in the quadratic form $x^\top Q x + c^\top x$ and use any solver for non-negative least squares. This can be done by using the fact $||Ax - y ||^2 = x^\top A^\top A x -2 y^\top A x + y^\top y$ and factoring $x^\top$ and $x$ from the left and right respectively (dropping the constant $y^\top y$): $$ x^\top Q x + c^\top x = x^{\top} \left( A^\top A + \sum^K_{k=1} \lambda_k R_k \right)x + (-2A^\top y)^\top x$$ and then plug in to any standard non-negative least squares software. This is really simple because one just needs to change the $A^\top A$ design matrix. However, when one includes the L1, since (I assume) L1 cannot be written in a quadratic form $x^\top L_1 x$, one requires different methods (like sub-gradient methods or proximity operators). Furthermore, the problem gets more complicated because we need to include the non-negativity constraint, which I guess one can just include as $\max(x,0)$ in the objective function: $$ x^\top Q x + c^\top x = x^{\top} \left( A^\top A + \sum^K_{k=1} \lambda_k R_k \right)x + (-2A^\top y)^\top x + \alpha \sum^D_{d=1} \max(0,x_d)$$ and use sub-gradients or proximity operators. So my question is: How do we solve this problem mathematically? Do we just have to sort of re-do the mathematics that this paper offers? Or just do sub-gradients or proximity operators? Or is there something simpler? If we do solve it any of these ways, do we have to implement everything from scratch, or is there optimized code we may re-use? 
More importantly, my intuition tells me that there must be a way to change the design matrix and use non-negative elastic net solvers like the numpy/scipy ones that already exist. I want to do that mainly because I want to re-use optimized code so that the methods run fast. I am sure that with enough patience I could do step 1 (or do sub-gradient methods or proximity operators). However, is it possible for me to avoid these complicated maths AND implementation and re-use optimized code for non-negative Elastic Net that already exists? Recall we are trying to solve: $$ \text{minimize}_{x}\,\,\,\,\left\Vert Ax-y\right\Vert _{2}^{2}+\sum_{k}\lambda_{k}x^{T}R_{k}x+\alpha\left\Vert x\right\Vert _{1}\,\,\,\,\text{s.t. }x>0 $$ This problem is a perfect example of “not seeing the forest for the trees.” The solution is quite simple if one notices that since $x>0$, we have $\left\Vert x\right\Vert _{1}=\sum_{i}x_{i}=\mathbf{1}^{T}x$. This little observation turns the original problem into a classic non-negative quadratic program: $$\text{minimize}_{x}\,\,\,\,x^{T}\left(A^{T}A+\sum_{k}\lambda_{k}R_{k}\right)x+\left(\alpha\mathbf{1}-2A^{T}y\right)^{T}x\,\,\,\,\text{s.t. }x>0$$ which can be solved via a host of algorithms, including projected gradient, active-set methods, etc. So the solution in pseudo-python is simply:

Q = A'A + sum( lambda[k] * R[k] )
c = alpha * 1 - 2 * A'y
x,_ = python_maths.QR_non_negative(Q, c)

Answer credit: Professor Reza Borhani on Quora
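`python_maths.QR_non_negative` in the answer is pseudocode; as a concrete sketch under my own choice of algorithm (projected gradient descent with a conservative step size, not anything specified in the original answer), the non-negative quadratic program can be solved like this:

```python
import numpy as np

def solve_nn_quadratic(A, y, R_list, lambdas, alpha, steps=5000):
    """Minimize x^T Q x + c^T x subject to x >= 0, where
    Q = A^T A + sum_k lambda_k R_k and c = alpha * 1 - 2 A^T y,
    matching the reformulation above. Simple projected-gradient sketch."""
    Q = A.T @ A + sum(l * R for l, R in zip(lambdas, R_list))
    c = alpha * np.ones(A.shape[1]) - 2.0 * A.T @ y
    lr = 1.0 / (2.0 * np.linalg.norm(Q, 2) + 1e-12)  # step below 1/L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = 2.0 * Q @ x + c               # gradient of the objective
        x = np.maximum(x - lr * grad, 0.0)   # project onto x >= 0
    return x
```

For serious use, a dedicated QP solver would be preferable; this only illustrates that nothing beyond the modified design matrix and the shifted linear term is needed.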
What are the complete steps to set up your Raspberry Pi Monitoring dashboard? Hi, I've been trying to follow the steps, parsing https://github.com/geerlingguy/internet-pi/issues/226#issuecomment-921916570 and https://github.com/danifr/internet-pi/issues/1 But I cannot get Grafana to show your Raspberry Pi monitoring dashboard. Note: my 'Internet connection' and 'Power consumption' dashboards work fine / I do not have an Air Gradient, so no dash needed for that. Can you include the full start-to-end steps to replicate adding your Raspberry Pi Monitoring dashboard, please?

Hi, sorry for the huge delay, for some reason GitHub never notified me and I missed your comment. These are the steps:

git clone https://github.com/danifr/internet-pi.git
cd internet-pi
git checkout rpi_monitoring

Create a config.yml with the following content (edit it to match your desired config):

# General configuration
config_dir: '/opt/internet-pi/'
domain_name_enable: true
domain_name: 'home.local'  # whatever you want basically
domain_grafana: 'grafana'  # to access grafana via http://grafana.home.local
domain_prometheus: 'prometheus'
domain_pihole: 'pihole'  # to access pihole via http://pihole.home.local

# Pi-hole configuration.
pihole_enable: true
pihole_hostname: pihole
pihole_timezone: Europe/Madrid
pihole_password: "admin"

# Raspberry monitoring configuration
raspberry_monitoring_enable: true
telegraf_retention_policy: '90d'
telegraf_password: 'admin'

# Internet monitoring configuration.
monitoring_enable: true
monitoring_grafana_admin_password: "admin"
monitoring_speedtest_interval: 60m
monitoring_ping_hosts:
  # [URL];[HUMAN_READABLE_NAME]
  - https://netflix.com/;netflix.com
  - https://github.com/;github.com
  - https://www.apple.com/;apple.com

# Shelly Plug configuration. (Also requires `monitoring_enable`)
shelly_plug_enable: false
shelly_plug_hostname: my-shelly-plug-host-or-ip
shelly_plug_http_username: username
shelly_plug_http_password: "password"

# Starlink configuration. (Also requires `monitoring_enable`)
starlink_enable: false

After that just run the ansible playbook command:

ansible-playbook main.yml

That should be it :)

I had to make some adjustments because my config_dir is in home (like in the original example-config). See comments here: https://github.com/danifr/internet-pi/commit/5fe5fe16c4bfd7b0bc99d1a0b0657f5fb6cb3aba The dashboard showed up without problem, but it just gave me "Bad Gateway" errors. The InfluxDB datasource seems not to be reachable... Haven't been able to fix that yet. Do I need the custom domain name changes? I only cherry-picked the three commits related to raspberry-monitoring... I'll try picking the rest as well tomorrow.

OK, I did pick the domain_name commit as well, but also had to docker-compose down -v all the internet-monitoring and the raspberrypi-monitoring before it finally worked. Not sure if it really was the domain_name commit that did it in the end or if I had some other trouble, but I think there was a definition of the back_tier network in that commit that might have been the crucial bit. Anyway, working now, thanks for your repository, definitely gets a Star ⭐

@SoongJr Thanks! :) still waiting for @geerlingguy to merge https://github.com/geerlingguy/internet-pi/pull/179. I am wondering if he will ever do it... My fork was never intended to be used by other people (other than me), but regarding the current situation, what I can do is to keep improving my fork (adding better docs for example) so more and more people find it easy to use. This is not the way open source should work, but it has been over 6 months since I opened the PR so I guess Jeff is simply not interested in having this change merged.

@danifr yeah, I was considering whether it would be worth opening a PR with some of my own changes... He's churning out so much content, I think it's just not feasible to maintain all of that even mid-term. 
In the code for reading values from the shelly-plug he writes himself that it's not the best solution, just a case of using the tools he knows to achieve this small task. The fact it reads only exactly one plug value also tells me it's just a way to point others in the right direction, not production-level code 😉 He does have to think about where to spend his time, is it best spent perfecting one project, or putting out the next decent project he can do a video about and actually earn some money ;) I don't begrudge it. His repo did teach me a lot about ansible, prometheus, grafana, and shelly, so for me it did its job. Maybe one day he'll find a volunteer to do maintenance on his projects, but I'm certainly not it 😜 And to be fair it's not like he's totally abandoning it, there were plenty of improvements since the video came out. Anyway, I'm on sick leave right now, so it might take a couple of days, but if you're willing to look at a PR for the directory creation then I'm gonna write one ;)

> Maybe one day he'll find a volunteer to do maintenance on his projects, but I'm certainly not it 😜 And to be fair it's not like he's totally abandoning it, there were plenty of improvements since the video came out.

Typically for projects like this—where I am actually using the project—I will update things in batches from time to time, but typically my philosophy on my GitHub projects is:

- I build it for myself
- I add an extremely permissive license and encourage forks
- I monitor the repo and usually merge easy bug/docs fixes quickly
- I mark PRs and issues that I'm interested in with 'planned'
- Every few months (sometimes longer... heh), I'll merge some of the 'planned' things or at least leave a follow-up review

But as you mention, having 200+ OSS projects with thousands of active users, plus trying to make a living writing and doing YouTube means I don't have a lot of time for any individual project. I've tried sharing responsibility, adding maintainers, etc. 
from time to time, but there's a separate risk profile attached to that—and more often than not, someone pops in, solves their own issues, then abandons maintenance, meaning now I have a project I'm still maintaining effectively on my own, but now it has a bunch of someone else's code in it, and that makes it harder for me to pop back in and work on it again :( Anyways, I've written about it on my blog from time to time, most recently The burden of an Open Source maintainer. Anyways, I do plan on updating this repo and merging PRs like the domain-per-service one... but when is a good question. For most people interested in using the project long-term, I recommend forking the project, and then if I update things down the line, you can decide whether to pull my changes into your fork as well. Thanks for that detailed insight @geerlingguy, that's exactly what I assumed your stance on PRs might be (sorry to hear about your burnouts though). I did fork it immediately, simply because I want to backup my config and inventory to github 😉 Maybe linking that blog post directly in your readme(s) would set the right expectations for some of the less empathetic people? Maybe a few lines encouraging people who are looking for functionality to check out the PRs, even the closed ones, as there may be some nuggets there (I just noticed that there was already a PR for multiple shelly plugs, I might not have needed to implement my own 😅). You could even introduce a "GoodIdea-WontMerge" tag for PRs that looked great but you decided not to even mark as "planned" (which implies you're planning to merge it), to highlight PRs people should take a look at. This way contributions of others will be in your repo (as PRs) but not your duty to maintain. Just an idea, I don't want to cause you more work, we want to see regular videos from you after all 😜 And once we know what the "planned" tag means we can look out for it, so that's probably already enough. Anyway, hope you stay healthy!
import * as lolChamps from "../dist/index";

const fixture = require("./fixtures/index");

function determineLangCode(lang: string = "en"): any {
  const champs = require(`../data/${lang}.json`);
  return champs.data;
}

describe("Get a list of entire champion names", () => {
  it("should return names in English", () => {
    const names = lolChamps.all();
    expect(fixture.champions.en).toEqual(names);
  });
});

describe("Get champion name by id", () => {
  it("should return Pantheon", () => {
    const name = lolChamps.getName(80);
    expect(name).toBe("Pantheon");
  });
});

describe("Get champion id by name", () => {
  it("should return 117", () => {
    const id = lolChamps.getId("Lulu");
    expect(id).toBe(117);
  });

  it("should return 223", () => {
    const id = lolChamps.getId("tahmkench");
    expect(id).toBe(223);
  });
});

describe("Check support languages", () => {
  it("should return true, does support Russian", () => {
    expect(lolChamps.languages.has("ru")).toBeTruthy();
  });

  it("should return false, does not support Danish", () => {
    expect(lolChamps.languages.has("da")).toBeFalsy();
  });
});

describe("Throws an error when the champion name does not exists", () => {
  it("should throw an error message", () => {
    expect(() => {
      lolChamps.getId("tamkench");
    }).toThrowError("tamkench does not exists. Please double check the name.");
  });
});

describe("Champion data", () => {
  it("should return a champion TahmKench data", () => {
    const mock = determineLangCode("en");
    const champData = lolChamps.getChampion("tahmkench");
    expect(champData).toBe(mock.TahmKench);
  });

  it("should return a champion Fiddlesticks data", () => {
    const mock = determineLangCode("ko");
    const champData = lolChamps.getChampion("피들스틱", "ko");
    expect(champData).toBe(mock.Fiddlesticks);
  });

  it("should return a champion Blitzcrank data", () => {
    const mock = determineLangCode("zh-hans");
    const champData = lolChamps.getChampion("蒸汽机器人", "zh-hans");
    expect(champData).toBe(mock.Blitzcrank);
  });

  it("should return a champion LeeSin data", () => {
    const mock = determineLangCode("ru");
    const champData = lolChamps.getChampion("Ли Син", "ru");
    expect(champData).toBe(mock.LeeSin);
  });
});
Various autonomous or assisted driving strategies have been facilitated through the accurate and reliable perception of the environment around a vehicle. Among the sensors that are commonly used, radar has usually been considered a robust and cost-effective solution even in severe driving scenarios, e.g., weak/strong lighting and bad weather. However, the object detection task on radar data is not well explored either in academia or industry. The reasons are threefold: 1) Radar signal, especially radio frequency (RF) data, is not an intuitive type of data like RGB images, so its role in autonomous driving is seriously underestimated; 2) Very few public datasets with proper object annotations are available, so it is difficult to address the problem using powerful machine learning mechanisms; 3) It is noticeably difficult to extract semantic information for object classification from the radar signals. The organizers are from the Information Processing Lab at the University of Washington, Silkwave Holdings Limited, Zhejiang University, and ETRI. The challenge organizers include: The organizers will post announcements in the Forums. Questions about this challenge are welcome, including logistics, dataset questions, etc. The participants can also post their questions in the Forums. The organizers will answer the questions actively. The participants need to submit their radar object detection results for the testing set to the evaluation server. The evaluation metrics include AP and AP under four different driving scenarios, i.e., parking lot (PL), campus road (CR), city street (CS), highway (HW). The main score for this challenge is the overall AP. The details of the evaluation method are mentioned . 
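The official AP computation is specified in the challenge's elided link above, so the following is only a generic illustration of how average precision is commonly computed from score-ranked detections, not the challenge's exact metric:

```python
def average_precision(detections, num_gt):
    """Generic AP illustration. `detections` is a list of
    (confidence, is_true_positive) pairs over all frames; `num_gt` is the
    number of ground-truth objects. AP is the area under the
    precision-recall curve traced in descending confidence order."""
    ranked = sorted(detections, key=lambda d: d[0], reverse=True)
    tp = fp = 0
    ap = prev_recall = 0.0
    for _, is_tp in ranked:
        if is_tp:
            tp += 1
        else:
            fp += 1
        recall = tp / num_gt
        precision = tp / (tp + fp)
        ap += precision * (recall - prev_recall)  # rectangle rule
        prev_recall = recall
    return ap
```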
The submission zip file should contain 10 different txt files for 10 testing sequences with the following names:

2019_05_28_CM1S013.txt
2019_05_28_MLMS005.txt
2019_05_28_PBMS006.txt
2019_05_28_PCMS004.txt
2019_05_28_PM2S012.txt
2019_05_28_PM2S014.txt
2019_09_18_ONRD004.txt
2019_09_18_ONRD009.txt
2019_09_29_ONRD012.txt
2019_10_13_ONRD048.txt

Each of them should have the following format: frame_id range(m) azimuth(rad) class_name conf_score ... The ROD2021 dataset (a subset of CRUW) for this challenge will be available to the participants once the challenge starts. The participants are required to use the provided dataset with annotations to develop an object detection method using the radar data only as the input. The participants are also allowed to propose their own object annotation methods based on the RGB and RF images in the training set, but the proposed object annotation method needs to be clearly described in your method description as well as any future paper at ICMR 2021. The object detection results should be submitted to CodaLab, including the object classes and object locations in the radar range-azimuth coordinates, i.e., in the bird's-eye view. Each object in the radar's field of view is represented by a point in the RF image. The participants can form their own teams from different organizations. There will be two phases for this challenge. The testing data for the first phase is a randomly selected 30% of the overall testing set, and the second phase uses the remaining 70% of the testing set. The final score is the AP of the overall testing set. During the competition, each team can submit their results once per week. The teams need to provide their open-source code through GitHub after the challenge results announcement. There will be two phases for this challenge: First phase: randomly select 30% from the overall testing set for evaluation. Second phase: the remaining 70% of the testing set. The final score is the AP in the second phase. 
Some detailed rules are listed as follows: The participants can form their own teams from different organizations and the number of participants is not limited. But only one team is allowed from an individual organization. The participants are NOT allowed to use external data for either training or validation. The teams need to provide their open-source code through GitHub after the challenge results announcement. The participants are not allowed to use extra information from human labeling on the training dataset or testing dataset for the challenge’s target labels. The participants are allowed to propose their own object annotation methods based on the RGB and RF images in the training set, but the proposed object annotation method needs to be clearly described in your method description as well as any future paper at ICMR 2021. During each of the two phases in the competition, each team can only submit their results for evaluation once per day, and fewer than 10 attempts in total. Remember to submit your best results to the leaderboard before the phase deadline. The provided dataset can only be used for academic purposes. By using this dataset and related software, you agree to cite our dataset and baseline paper . Start: Jan. 18, 2021, midnight Description: First phase submission includes selected 30% testing set. Start: March 12, 2021, midnight Description: Second phase submission includes the remaining 70% testing set. March 26, 2021, midnight
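Tying the rules back to the submission format described earlier (`frame_id range(m) azimuth(rad) class_name conf_score`), a minimal sketch of writing and re-reading one detection line might look like this; the field precision and the example values are my own guesses, not from the challenge specification:

```python
def format_detection(frame_id, range_m, azimuth_rad, class_name, conf_score):
    """Render one detection as a whitespace-separated submission line:
    frame_id range(m) azimuth(rad) class_name conf_score."""
    return f"{frame_id} {range_m:.4f} {azimuth_rad:.4f} {class_name} {conf_score:.4f}"

def parse_detection(line):
    """Split a submission line back into typed fields (sanity check)."""
    frame_id, range_m, azimuth, name, conf = line.split()
    return int(frame_id), float(range_m), float(azimuth), name, float(conf)
```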
M: [Show HN] Pica for iPad - A Facebook client - huytoan_pc http://picaapp.com R: RodgerTheGreat Is there a reason it's named after an eating disorder characterized by a compulsion to consume indigestible objects? Maybe I'm missing a reference. R: xuki No it's not related R: xuki Hi, huytoan_pc and I made this app. Here's some code if you want to try it out: A6FRFK3NXFRY LEEHRFKY7K37 NX3WYEXAFP7F L9YE6T464Y63 H796J3TP9FFY 39P7EFMN9LNH JRKNRP3AEKEX W4XFM3FA446J Y9999Y9KE7X6 77LHYF33M66R Please reply and tell which code you got so other people know =). R: angerman Hi, got Y9999Y9KE7X6 :) Will be in Singapore next week, want to meet up? R: xuki We are in sf for wwdc, will be back by end of June. Send me an email @ ? R: cledet Bought the app. It's a lot better than the official one. Keep up the great work. R: xuki Thank you :-) R: ggalan no SALT?
Parsing a C string and setting variables with values from the string This post might be marked as a duplicate, but I did search online for this specific case and I couldn't find any examples similar to this. How do I use sscanf() to parse a string and store all fields in various variables with some of the fields empty? The following is a simplified version of my code: // assume all vals are initialized with the correct type and memory allocated correctly sscanf(data, "%d,%[^,],%[^,],%d,%[^,]", val1, val2, val3, val4, val5); Note: data is a char pointer pointing to the string "10101101,Water Level,,15,Collision" All vals that store strings are calloc'd and all vals that store ints are initialized with 0 In the example above, val1 and val2 return the correct results. However, since there is no value in the val3 spot, the following vals starting from val3 are set to incorrect results. Is there a way to set val3 with no value (or skip over the slot) and continue with val4 and val5? Assuming val3 is an int value, the results I would like to have are: val1 = 10101101 val2 = "Water Level" val3 = 0 val4 = 15 val5 = "Collision" I'd really appreciate any help possible. Use strtok to split the line at the commas, strcpy to copy the strings, and strtol to convert the numbers. The problem is that %[^,] fails when the first character it reads is a ,. You could split the sscanf to make it work the way you want to: sscanf(data, "%d", val1); sscanf(data, ",%[^,]", val2); etc @user3386109 strtok will not produce empty strings. In the OP's case, it will output 10101101, Water Level, 15 and Collision in each iteration of the loop without outputting the empty string in between. @Spikatrix Good catch. The workaround would be to count the leading commas before calling strtok again. @Spikatrix And since turnabout is fair play... Your suggestion won't work either, because sscanf doesn't consume the input. It will simply start at the beginning of the string each time you call it. 
Note that you're forgetting to put the & ampersand before using the int variables with sscanf. @user3386109 Yeah, you're right. I missed that. I googled around and found strsep which I think can be used here. @Spikatrix strsep is a good choice if available (it's not officially part of the C standard library). There's always strcspn which can be used to write your own strsep on systems that don't provide it. Any ideas on what the function should look like? I tried different approaches but still can't figure out a way to solve it. The good old inch-worm method using two pointers to work your way from comma to comma as you work down the string is always an option (there is nothing you can't parse with a pair [or triplet] of pointers...) You want to split a string into a fixed number of tokens. You want to allow empty tokens and you want to process the tokens further. The standard tokenizing function, strtok, will consider continuous stretches of the separator character as a single separator. It will also produce NULL to signal the end of the string. As so often in C, you can roll your own function. Let's write one that:

- separates the string at single separator characters;
- makes the tokens pointers into the original string like strtok;
- "destroys" the string by overwriting the separators with null terminators;
- stores all tokens in a fixed-size array;
- stores empty strings after reaching the end, so that all elements of the array are guaranteed to be non-null for further processing.

That function could look like this:

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

void split(char *str, const char *sep, const char *res[], size_t n)
{
    for (size_t i = 0; i < n; i++) {
        size_t len = strcspn(str, sep);
        res[i] = str;
        str += len;
        if (*str) *str++ = '\0';
    }
}

Notes: The function modifies str so you cannot pass in a string literal. (You can strcpy the data to a buffer first to be safe.) 
- The function strcspn(s, c) counts the number of characters in s up to the first occurrence of any of the characters in c, or up to the end of the string.
- We overwrite and step over the character that ends a token only if it is a separator. That way we produce repeated empty strings after the end of the string has been reached.

Here's a test for that function:

int main(void)
{
    char data[80] = "10101101,Water Level,,15,Collision";
    size_t n = 5;
    const char *token[n];

    split(data, ",", token, n);
    for (size_t i = 0; i < n; i++) {
        printf("[%zu] \"%s\"\n", i, token[i]);
    }
    return 0;
}

Would be worth noting that a delimiter of ",\n" would allow strings taken directly from fgets(), but a nice choice using strcspn().

You can actually use both strspn() and strcspn() to work down the string. (strspn(str, ",") would tell you the number of sequential commas, and that number minus 1 gives the number of empty fields.)
For many years I was bedazzled by the wondrous new features of the latest Programming Languages. Amazing breakthroughs. Context-Free Grammars. Structured Programming. Object Oriented Programming. Event Driven Architectures. Restful Interfaces. Look how wonderful it all is! After years of chasing the latest and greatest, something ominous begins to dawn on me. These "developments" and "improvements" are not happening fast enough. The software being created is NOT better or more reliable or easier to understand or easier to develop. The frenetic pace of "new frameworks" and "new tools" and "new paradigms" and "new buzzwords" obscures the fact that we are spinning our wheels throwing up "More Stuff" without recognizing that it is just a rehash of the same old problems. The training becomes more and more specialized and it becomes harder and harder to be sure you fully understand all the features you are expected to make use of. The users of the Software are led to believe that much progress is being made because we can create glitzier User Interfaces, or because we have reached a (sort of) consensus on how programs should behave, or because they can access exponentially more data. But Users are generally not in a position to evaluate the internal quality of Software, or to understand the costs of managing and developing that Software. In reality, programming today is fundamentally the same as it was when we used Hollerith Cards and submitted Batch Jobs. A program is a string of characters stored in a file. A language processor reads and interprets these characters according to a set of rules. Many of these modules are combined to create the program that will later be executed. We have added many layers to "simplify" this process. Generating the sheer volume of Software required to make the modern world work has required some computer assistance. We created Editors and File Systems and Integrated Development Environments. 
We created Optimizing Compilers and Compiler Optimizers. We created Collaboration Tools. We created Interpretive Languages and Language Interpreters and Just-In-Time Compilers. I have spent much of my career designing and developing tools to make Software development faster, less error-prone, less obscure and more effective. I have kept my head down and drunk the Kool-Aid. The universe awaits. We will soon need to create reliable programs to control the tools and equipment that we bring with us as we leave the Earth. Nothing about the current Software design and development methodologies is sustainable or applicable for use in space or on other planets. It is 2018 and I will venture to say that no program has EVER been written in space. The tools are too clumsy. The level of specialized knowledge and training is too great. The risks are enormous. The only people that truly understand the systems are back on Earth. Currently, any new Software or updates to existing Software used anywhere in the space program must be developed and tested on Earth and transmitted to the target system. This might be OK when the target is a few minutes away (at most). Danger flags begin to appear when the spacecraft are further away. When you almost lose New Horizons on approach to Pluto because the people on Earth do not understand the operation of the 1970-era File System used by the probe designers in 2005, you get some idea of the impending collapse. As we move out into the solar system we will be at the mercy of systems and Software that becomes progressively more obsolete. Losing a probe to human misunderstanding is expensive and embarrassing, but tolerable. Losing a colony ship to something like this is completely unacceptable. Ships in transit (to Mars, for example) must have software systems that can be adapted to any situations that may develop over the course of several months. 
It is not possible for the designers to anticipate all possible contingencies - and there are people right there on the scene. It is therefore incumbent on us to make sure that those people are able to safely change or update the Software to deal with the new situation. After arriving at Mars, a bunch of critical equipment will be responsible for the lives of everyone in the colony. This equipment will become progressively obsolete and subject to failure. The only people capable of creating maintenance or upgrade patches (or fixing latent flaws) are back on Earth. There will be no incentive for those experts to remain current or to train a new generation of experts. The only equipment using this software is "out of sight and out of mind".

Software development must be adapted to no longer require humans to be experts. There are currently no efforts being made in this direction. It seems to be a case of all the Software Developers continuing to Drink the Kool-Aid. The software development methodologies are so ingrained that no one seems to recognize the shortcomings. What is needed is Machine Learning applied to Software Development.

When I work with a software development team I expect to be able to discuss program requirements verbally. I can tell a programmer that a "button should be blue when you hover over it", or that "displayed records should be alphabetized" or that "the banner should be smaller". I can then stand back and watch while he makes the changes. At no time do I touch a keyboard. The subject-matter expert (the programmer) knows what I mean to have happen and does it. Maybe it takes changing five different files. Maybe it takes creating a bunch of new functions. Maybe it takes running a bunch of validation tests. Maybe something goes wrong. All those things get fixed. The expert programmer knows all the details. He knows the syntax for the 15 cryptic frameworks. He understands the database architecture.
He remembers the names of the API calls, and the ones that are deprecated due to bugs. All I had to do was casually mention what I wanted - the expert did the rest. Unfortunately, most of what I do as a programmer is very similar to what I do when driving a car: just get from point A to point B without bumping into anything. There might be dozens of ways of accomplishing the task. As a Senior Developer, I might choose a "better" way than others. But I should not have to. My assistant should be fully capable.

We should be striving toward the day when the "Subject Matter Expert" is actually a machine intelligence. Using Machine Learning techniques we should expect that the knowledge and understanding that is currently a perishable commodity should be available forever. All programming is a trial and error process. Neophyte programmers do lots of trials and learn from their many errors. Senior programmers make fewer trials and create much more obscure errors. This process of trial and error is exactly what would be expected to form the training cases for a Machine Intelligence. In all of Software Development, the biggest mistake we are currently making is throwing away those valuable training cases. Knowing about the programs that do not work AND WHY THEY FAIL is ultimately more valuable than the final product: the one that usually works.

The obsolescence that will ultimately plague any human construct need not be potentially fatal to those future generations. Ensuring the deployment of fully capable experts on each of these colony software systems will make for a safer universe for everyone. In this post I couch my concerns in terms of a future manned space mission or space colony. These environments simply would not have enough personnel to allow for specialist programmers or software developers, plus their support staff, plus training and education programs. Real-world uses for such technology are much closer to home.
The premise of this essay is the fact that I consider the software development tools to be inadequate for the task - and that they will reach an unsustainable point in the near future. If your entire staff is certified for Microsoft SQL Server then (amazingly) every problem that comes through the door (magically) seems to need a SQL database. My life would be much simpler if I had access to skilled assistants that could perform the rote tasks using a particular set of tools. I could reasonably ask for multiple proposed solutions to a given problem and compare the results. I would have access to solutions that I might not have thought of. I would discover failure modes or options that I had not considered. The benefits of Machine Learning in these common situations are immediate and will become more pronounced as it becomes ever more difficult for human beings to keep up with new requirements. The use of Programming Assistants to aid in Software Development efforts will be tremendously helpful. Perhaps even more important will be the ability of a Programming Assistant to explain WHY a particular feature exists or HOW it works. Modern programs may have single lines of code that contain elements from a half-dozen completely different programming languages. The ability to ask a simple question such as "Why is that semicolon there?" (and get a quick and meaningful answer) would be wonderful. The explanatory abilities of a Programming Assistant, including an understanding of the implementation history and goals of a piece of software would be a valuable supplement to whatever documentation exists for the program. A Programming Assistant would be capable of retaining the skills over time, and learning to recognize requirements and deficiencies. Skills and understanding would no longer be perishable commodities. Last year's programs would no longer be dangerous to use because the knowledge of features and limitations would remain fresh. 
I mentioned that it might become reasonable to entertain competing proposals for implementing complex tasks using different combinations of skills. A properly trained Programming Assistant should be able to perform many of these comparisons and tradeoffs automatically, and should be able to produce an objective report on the relative merits of different approaches. The ability of a Programming Assistant to retain an understanding of past mistakes would allow it to anticipate failures and suggest resolutions. This is contrasted with the current "wait till it breaks then scramble to fix it" approach. For example, "everybody" knows that 10,000 tiny files in a Linux file system directory is a potential problem. Yet that revelation put New Horizons into safe mode days before reaching Pluto. Fortunately, the "scramble to fix it" had enough time to recover before losing the mission.
panel macro ignores markup_contents if table_for is present

For example:

panel "Children" do
  table_for(category.children) do
    column :name
  end
  markup_contents do
    h3 "Test"
  end
end

will output only the table. Expected behavior - output both: the table and the markup.

@smpallen99 maybe you have some insight into why?

That's because of the do_panel implementation: https://github.com/smpallen99/ex_admin/blob/master/lib/ex_admin/table.ex#L52

Yep, figured some recursion could fix that; after processing :table_for, remove that key from the map and call do_panel with the remaining parts. About to do a test on my existing app :)

@jwarlander would be great if u could add a test for this fix. Thanks... Steve

@jwarlander I would think your recursive approach might work. I'm just wondering what would/should happen if they have more than one table_for or markup_contents. Probably best if we make sure all are rendered, or we raise an appropriate exception.

My initial implementation just recurses until no remaining match, concatenating the results along the way; the default catch-all clause simply returns the result. Will definitely add test(s) for it :-) Right now, though, gotta get dinner ready before our daughter falls asleep ;-)

@smpallen99 I think the best option would be to render all blocks specified inside the panel macro, regardless of their quantity. I've added a quick and dirty fix to not block the associations stuff.
But I'd like to end up with a proper solution that handles more than one table_for or markup_contents in any order.

Once I realized the HTML being built was kept in ETS, my recursion approach worked out well -- but the way the schema is passed (as a map, with :table_for or :contents as the key) doesn't allow multiple instances of each of those, just having one or both of them in either order..

def do_panel(conn, %{table_for: %{resources: resources, columns: columns, opts: opts}} = schema, table_opts) do
  table(Dict.merge(table_opts, opts)) do
    # ...
  end
  new_schema = Map.delete(schema, :table_for)
  do_panel(conn, new_schema, table_opts)
end

def do_panel(conn, %{contents: %{contents: content}} = schema, table_opts) do
  div do
    content |> elem(1) |> Xain.text
  end
  new_schema = Map.delete(schema, :contents)
  do_panel(conn, new_schema, table_opts)
end

def do_panel(_conn, _schema, _table_opts) do
  ""
end

If it were a keyword-style list instead, it could have multiples, and one could easily process it from beginning to end:

def do_panel(conn, [{:table_for, %{resources: resources, columns: columns, opts: opts}} | tail], table_opts) do
  table(Dict.merge(table_opts, opts)) do
    # ...
  end
  do_panel(conn, tail, table_opts)
end

def do_panel(conn, [{:contents, %{contents: content}} | tail], table_opts) do
  div do
    content |> elem(1) |> Xain.text
  end
  do_panel(conn, tail, table_opts)
end

def do_panel(_conn, _schema, _table_opts) do
  ""
end

Alternatively, the map key could be a string instead, and have some kind of unique identifier appended.. then you could match it like:

def do_panel(conn, %{"table_for_" <> id, %{ ... }}, table_opts) do
  # ...
  new_schema = Map.delete(schema, "table_for_" <> id)
  do_panel(conn, new_schema, table_opts)
end

It's not as elegant, but may be easier to implement depending on how the schema generation is organized (haven't looked yet).

I like the keyword list approach. This data structure fits perfectly for this case.

I've got it working with a keyword list..
I'll open a pull request; just going to sync against your latest changes :) Oh, right.. you're on a separate branch; which one should I do a pull request against?

@jwarlander Give me a link to your branch. I'll cherry-pick commits from it to my branch and later (if all is ok) @smpallen99 will merge them all to the ex_admin main repo.

You'll find it here: https://github.com/jwarlander/ex_admin/tree/handle-multiple-panel-sections

It's just one commit - I haven't written any tests for it yet, as I just got it working. I should be able to get to that tomorrow though, once I figure out a good way to test. I'm hoping there will be some existing tests I can look at for inspiration ;)

Great! I like your commit. I'll merge it to my branch tomorrow.

@jwarlander I think you can add a test or two in the controller test. Simply add to one of the resources in test/support/admin_resources.exs. Should be pretty straightforward.

Done; pushed the test to the branch referred to above :)

Parsing test failures when looking at HTML output would be a bit nicer with something like Floki, where you can pick out a section of the raw HTML and make a nested tuple/list data structure of it; but I'm not sure if it's worth adding a dependency for.

@jwarlander Looks good. I like your idea of using Floki. I've never used that package before. Would be a good addition. I'll look at it on the next test I write that would benefit. Unless someone else beats me to it :). I'll make sure all your stuff gets pulled in with @romul's pull request, which I'll be reviewing tonight.

Steve
Govern API Call node

A new Govern API Call node has been added to simplify the process of connecting to Data360 Govern by calling the Data360 Govern REST APIs. For example, you can build an analysis that uses the Govern API Call node to connect to and retrieve assets from Data360 Govern.

Pipeline, data stage and node descriptions

You can now add a description to pipelines, paths, data stages and nodes to give an overview of what the item is used for. This can be particularly useful in environments that have a large number of components, and when sharing information with other Data360 products. If you add a description, the text will be displayed as a tooltip when you hover over the item in the Pipelines view. This also applies to nodes used in an analysis or process model, allowing you to view the node description from the Pipelines view without needing to open the analysis or process model in edit mode. For pipelines and paths, the description is also displayed on the right of the screen when the item is selected. The maximum length for the new Description field is 1000 characters. This also applies to the existing Description field on the rule library Details tab.

Case management refresh button

A Refresh button has been added to the case management, records and data entry screens. This ensures that you can always see the latest case information when using external data to update the case status, case queues or case owners.

GraphQL API enhancements

Pipeline and path APIs

The following new APIs have been added to enable you to update and delete pipelines or paths:

The following new APIs have been added to enable you to asynchronously rerun or roll back the execution of a data stage:

The executionCompletionStatus API now returns a processID parameter which gives the process ID of the execution. This can be useful for tracking between a process model and an analysis, data view or data store execution.
Execution history retention properties have been added to the environment APIs.

The following GraphQL APIs have been deprecated and will not be supported in a future release:

Please use the following queries instead:
- queryDataStoreData - Queries a data store for data.
- queryDataViewData - Queries a data view for data.
- queryCaseStoreData - Queries a case store for data.
- queryActivityLogData - Queries activity log data for a data store or case store.

Performance improvements have been made across the application, in particular in the following areas:
- When working with an analysis.
- When executing data stages simultaneously on AWS systems.

JSON Parser and DQ+ API Call nodes

The JSON Parser and DQ+ API Call nodes are no longer in "Tech Preview" status.

Sankey dashboard charts are no longer supported.
This video looks at some of the new features in Windows Server 2008 R2 and Windows 7 that can automate the management of service accounts. If your application supports it, using managed service accounts means that the password of the service account is automatically changed periodically without any interaction from the administrator.

What is a service account

A service account is a user account that is created to run a particular service or piece of software. In order to have good security, a service account should be created for each service/application that is on your network. On large networks this will mean a lot of service accounts, and the management of these service accounts can become difficult; this is where Managed Service Accounts can help. A computer account is like a user account in that it has a password. The difference is that the password for a computer account is automatically updated by Windows with no interaction from the user. Managed Service Accounts use the same process to manage the password for a Managed Service Account. Refer here for information about computer accounts: http://itfreetraining.com/70-640/computer-accounts

Managed Service Accounts Passwords

The password that is associated with a Managed Service Account (MSA) is automatically changed every 30 days. It is a random string of 120 characters, so it offers better security than standard passwords even if the standard password uses upper and lower case letters combined with non-alphanumeric characters - unless, of course, the administrator wants to use their own 120-character password, which is difficult for an administrator to work with. Like a computer account, a Managed Service Account is bound to one computer and thus cannot be used on a computer that it was not designed to work with. This provides additional security.

In order to start using Managed Service Accounts you need to meet a few requirements.

Domain Functional Level: This needs to be Windows Server 2008 R2 or above.
Forest Functional Level: Does not require any particular forest level.
Schema changes: The schema needs to be up to date. Run ADPrep /ForestPrep to update the schema to the latest version using a Windows Server 2008 R2 DVD or above.
Client: The Managed Service Account can only be used on Windows Server 2008 R2 or Windows 7.
Software components: .NET Framework 3.5 and the Active Directory module for Windows PowerShell are required for Managed Service Accounts.

Not all software will work with a Managed Service Account. Managed Service Accounts do not allow the software to interact with the desktop; thus a Managed Service Account cannot be used to log in and cannot be used to display GUI-based windows. Listed below is common software and whether it can use a Managed Service Account.

Exchange: Yes, but the Managed Service Account cannot be used for sending e-mail.
IIS: Yes, can be used with application pools.
SQL Server: Some people have got Managed Service Accounts to work with SQL, but Microsoft does not support it.
Task Scheduler: No.
AD LDS: Yes, Active Directory Lightweight Directory Services works with a Managed Service Account; however, a special procedure does need to be followed in order to get it to work.

To install the required software components, open Server Manager and select Add Features. Ensure the following are installed:

.NET Framework 3.5.1 Features
Active Directory module for Windows PowerShell, found under Remote Server Administration Tools, Role Administration Tools, AD DS and AD LDS Tools

To create the Managed Service Account, run the following:

New-ADServiceAccount -Name <AccountName> -Enabled ($True or $False)
Add-ADComputerServiceAccount -Identity <ComputerName> -ServiceAccount <AccountName>

On the client, run the following:

Install-ADServiceAccount -Identity <AccountName>

Configure the Managed Service Account in IIS:

Open IIS Manager
Expand down to Application Pools
Right-click the pool you want and select Advanced Settings
Select the property Identity
Enter the username for the Managed Service Account, making sure it ends with a $
Leave the password blank.
This will be managed by Windows and is not required. “Service accounts step-by-step guide” http://technet.microsoft.com/en-us/library/dd548356.aspx “Managed Service Accounts Frequently Asked Questions (FAQ)” http://technet.microsoft.com/en-us/library/ff641729(v=ws.10).aspx
Trading algorithms are a specific type of computer program that allows market participants to execute automated trades. The algorithm examines past data to predict how the price of a security or item will change, and it then executes a transaction based on its results. Although there is a wide variety of trading algorithms, they all share a few core features. For instance, all algorithms require historical data to produce accurate recommendations regarding future prices. In addition, all algorithms rely on mathematical analysis to arrive at their results. The most essential component of any trading algorithm is the rule that controls its decision-making process. Investors have looked for algorithmic asset revesting advisors, but they really don't exist yet. This rule instructs the algorithm on how to proceed whenever it is presented with a new condition, such as purchasing or selling securities. In this article, we'll go over some Python trading algorithms and explain how they might be used in your trading today. Choosing the right trading algorithm is important, since there are many options available.

Understanding the basics of algorithmic trading

Algorithmic trading, often known as electronic trading, uses computer programs to place trades in financial markets. Algorithmic traders use these algorithms to fine-tune their investing strategies and predict future price changes. Algorithms are sets of instructions that computers follow to complete a task; in this case, the rules are used to make trades in the financial markets. As a rule, an algorithm will have three components: a strategy, a rule engine, and an order execution engine. The investor will outline their trading objectives and preferences in the strategy section. The rule engine will use the predetermined objectives to decide when and how to execute transactions.
The orders will be sent to the market and executed by the order execution engine. The following are some of the most noticeable distinctions between algorithmic trading and more conventional methods of trading:

- Algorithmic traders can make transactions all day long, whereas human traders have daily quotas that they must stick to.
- Algorithmic traders can take up more complex positions since they have access to a variety of tactics.
- The efficiency of algorithmic trading typically exceeds that of traditional trading, resulting in higher profits with less exposure to risk.

Some benefits of using algorithmic trading

Several benefits come with using algorithmic trading software. These benefits include faster execution times, lower risk, higher profitability due to automated hedging, and diversification opportunities across different securities. Compared to human traders, algorithmic traders have an advantage due to their ability to foresee price fluctuations. Trading algorithms provide many benefits, including increased efficiency and precision. Opportunities that human traders might miss can often be exploited successfully by algorithmic traders. Potential drawbacks include human mistakes in the algorithm's design and sensitivity to market fluctuations.

How To Create a Trading Algorithm?

Trading algorithms are a critical part of any successful day trading strategy. They help detect trends in market data, allowing for more calculated investment decisions. The most prevalent technique is backtesting, though there are many more. A backtest examines a strategy's trading performance over historical data to determine whether it would have consistently generated profits. Manual or automated software is available for backtesting purposes.
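The backtesting idea described above can be sketched in a few lines of Python. Everything here is illustrative: the price series and the toy "buy below 100, hold one step" rule are invented for the example and are not a real strategy.

```python
# Toy backtest: replay a rule over a historical price series and total the
# profit it would have made. Prices and the rule are made-up placeholders.

def backtest(prices, signal):
    """Apply `signal` at each step; hold one unit for one step when it fires."""
    profit = 0.0
    for today, tomorrow in zip(prices, prices[1:]):
        if signal(today):                 # rule engine: decide from today's price
            profit += tomorrow - today    # execution: buy now, sell next step
    return profit

# Example rule: buy whenever the price is below 100.
prices = [101, 98, 99, 103, 97, 100, 104]
print(backtest(prices, lambda p: p < 100))  # prints 8.0
```

A real backtest would also model transaction costs and position sizing, but the loop structure is the same: replay history, apply the rule, accumulate the result.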
Algorithms that attempt to track the current market trend are called "trend-following." While this strategy has the potential to yield more profit than trading in response to the trends of individual securities, it also carries a higher degree of risk due to the volatility of market sentiment. A trading algorithm should seek to maximize earnings while reducing losses in order to achieve long-term success. There is more than one technique to accomplish this, and you should select the one that works best for you.

Related: Technical Trading Strategy

Choosing the right programming language

There are many programming languages available for developing trading algorithms. The features needed, the user's level of comfort with the language, and the resources at hand all determine which language best suits a given project. In algorithmic trading, Python, C++, and Java are often used for algorithm creation. Not every language can handle all types of algorithms, so it's crucial to pick the right one. In addition, several development environments can make working with a given programming language more efficient and comfortable. The following are some things to think about when choosing a programming language:

When choosing a language to write code in, you must consider the specific capabilities you'll need to complete your goal. For instance, some languages provide built-in libraries that supply frequently-used features like data management and graphical user interface creation. The options can be narrowed down if you have a good idea of your desired characteristics.

It's important to learn the language you're coding in if you want to be productive and successful. This demands investing in your education by reading tutorials and consulting online resources to understand the language's syntax and semantics.

There are also the developer's tools to think about - the software used to create software. It's not unusual for certain jobs to call for specialized tools.
For example, while the text-based interface of R makes it a popular choice for statistical research, it can sometimes be less well suited for graphical work. In this situation, it could be helpful to employ a different language.

Pros and cons of each language

Learning a new programming language comes with a wide range of pros and cons, and it's not easy to determine which one is best for you. Some of the most important factors are as follows:

- Python's popularity in the academic world makes it a great choice for the long-term stability of your project.
- It's readable and has syntactic sugar, so it's easy to pick up for seasoned programmers.
- With its rich standard library, it's simple to make use of code written in other languages.
- However, Python's lack of features common in other languages, such as static type checking, might increase development time and complexity.
- Passing an argument around can be perplexing at first.

There are several factors to consider when choosing a programming language for building trading algorithms. The language's popularity and availability are essential, since many traders use it to construct their trading systems. The language's structure and learnability are also important. Programmers should also examine the language's ability to concisely express complicated algorithm concepts and its support for various software development platforms.

Related: Investors Have Stockholm Syndrome

Data preparation and pre-processing

The foundation of any effective trading algorithm is the time spent preparing and pre-processing the data it will need. Traders can only improve their methods with access to reliable data. Trading opportunities can be located with the help of technologies like market data sources and sentiment analysis. As soon as the information is ready, it can be transferred to a trading platform and trades can be executed automatically.
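Two of the most common pre-processing steps, filling gaps in a series and normalizing its scale, can be sketched in plain Python. The sample series below is invented for the example; real pipelines would typically use a library such as pandas for the same operations.

```python
# Minimal pre-processing sketch: mean-fill missing values, then min-max
# normalize to [0, 1]. The input series is a made-up example.

def mean_fill(series):
    """Replace None entries with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

def min_max_normalize(series):
    """Scale values linearly onto [0, 1]."""
    lo, hi = min(series), max(series)
    return [(x - lo) / (hi - lo) for x in series]

raw = [10.0, None, 12.0, 11.0, None, 14.0]
cleaned = mean_fill(raw)           # gaps replaced by the mean, 11.75
print(min_max_normalize(cleaned))  # all values now lie in [0, 1]
```

Outlier handling (for example, clipping values beyond a few standard deviations) would slot in between the two steps in the same style.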
Gathering and cleaning historical market data

This blog post discusses how traders can gather and analyze historical market data. Many software programs simplify this procedure, and traders can also use free resources. After cleaning the historical data, traders must construct their trading algorithms. Algorithms let traders immediately estimate the value of a security or commodity and use this information to invest. Each algorithm has a unique approach, but all algorithms estimate future values to help traders make money.

Engineering of features and normalization of data

- Feature engineering creates and refines a trading algorithm's features. Order types, price ranges, volume ranges, and time periods can be complex.
- Normalization ensures that the data a trading algorithm consumes is correct and consistent. Heterogeneous data sets may have errors or inconsistencies that must be fixed before the algorithm can execute.

Techniques for handling missing data and outliers

Researchers might employ various methods when dealing with missing or unusual data. The mean (average) of the data set is often used to fill in missing numbers; mathematical algorithms and regression models can accomplish this. If that doesn't work, missing values can be replaced with the most common value in the dataset, using either a numerical algorithm or a random sampling technique.

Building the trading algorithm

The term "trading algorithm" refers to a system for making investment decisions, such as when and how to buy and sell stocks. Choosing appropriate variables is the most crucial component of designing a good trading algorithm. Your investment objectives and horizon will determine which options are optimal for you. There are some factors to think about when establishing a trading algorithm:

- What do you hope to achieve from your investments: above-average returns in the long run, or somewhere in between?
- How much money are you willing to risk losing?
- How long do you plan on keeping your investments?
- The state of the market: what are the most recent trends?

Your decision-making rules should be formulated once you have selected your parameters. Buy/sell rules, order types, trailing stop levels, and position size are all examples of rules that can be implemented in a trading algorithm.

Different algorithmic trading strategies

There are various algorithmic trading strategies; momentum, trend following, and mean reversion are popular, and each has pros and cons. Each strategy's basics and how to apply them to your financial assets are covered here.

Mean reversion is a popular algorithmic trading approach that predicts future prices using previous data. The objective is to acquire assets while they're cheap and sell them when they're expensive. If the market goes against you, this method can help you limit losses.

Momentum trading predicts future prices using recent data. To generate quick profits, this technique buys assets when prices rise and sells them when they fall.

Trend following uses historical data to anticipate future pricing. Rather than making immediate profits, the goal is to hold onto assets until they reach a desirable price range.

Algorithms for finding patterns in financial data that may suggest future changes are called signal-processing algorithms, as opposed to trend-following algorithms, which are meant to follow trends. The two classes overlap: many signal-processing algorithms are also used to follow trends.

Deployment and execution

Algorithms help traders select equities to purchase and sell: these programs find stock price trends in historical data and recommend trades. MATLAB, Python, and C++ are commonly used to implement trading algorithms. Once written, you must upload your algorithm to an online trading platform, which requires creating a user account and configuring the program for the platform. Finally, observe and adjust your algorithm's performance.
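To make the mean-reversion idea concrete, here is a minimal sketch; the window length and deviation threshold are arbitrary assumptions for illustration, not recommendations from the article:

```python
def mean_reversion_signal(prices, window=5, threshold=0.02):
    """Return 'buy', 'sell' or 'hold' based on how far the latest
    price has drifted from its recent moving average."""
    if len(prices) < window:
        return "hold"  # not enough history yet
    avg = sum(prices[-window:]) / window
    deviation = (prices[-1] - avg) / avg
    if deviation < -threshold:
        return "buy"   # price below its mean: expect reversion upward
    if deviation > threshold:
        return "sell"  # price above its mean: expect reversion downward
    return "hold"
```

A momentum strategy would invert the two branches, buying on a sustained rise rather than on a dip.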
Monitoring and adjusting the algorithm in real-time

Real-time optimization of trading algorithms necessitates ongoing monitoring and refinement. This challenge can be approached in several ways.
- One strategy involves keeping close tabs on the algorithm's efficiency and adjusting it as needed. A common method is to monitor financial performance over time and make adjustments in response to fluctuations. This method, however, requires constant attention because it relies on tracking the algorithm's output.
- The use of automated regression testing tools is an alternative method. These tools automatically analyse historical data to determine what caused any changes in the results, and the analysis helps the tool propose changes to the algorithm that might boost its efficiency.
- Both methods are imperfect. Though time-consuming, continuous monitoring enables prompt action when adjustments are required to maximise the algorithm's effectiveness. Automated regression testing can be more precise, but it needs a lot of historical data to be useful.

Managing risk

Risk management is crucial if you want to make money trading, as trading itself can be a high-stakes enterprise. Some suggestions for minimising risk:
- Consider the potential downsides of each investment.
- To reduce your loss exposure, set profit targets and stop-loss levels.
- Rely on technical analysis to anticipate the market's likely moves.
- Monitor your portfolio's progress over time to see if your investments are yielding the desired results.

In this article, we discussed the findings of my work on a trading algorithm. When we first began, we looked at historical data to see if there were any trends, and we used that information to create a model to foresee future price changes.

The future outlook for algorithmic trading is very bright. Algorithmic trading of stocks and other financial assets has grown with technology, making trading simpler and more profitable.
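The stop-loss and position-size suggestions above can be sketched with a common fixed-fractional sizing rule (my choice of technique for illustration; the article does not prescribe a specific formula):

```python
def position_size(account_value, risk_fraction, entry_price, stop_price):
    """Shares to buy so that hitting the stop-loss costs at most
    risk_fraction of the account (long positions only)."""
    risk_per_share = entry_price - stop_price
    if risk_per_share <= 0:
        raise ValueError("stop price must be below entry price for a long")
    return int((account_value * risk_fraction) / risk_per_share)

# Risking 1% of a $10,000 account, entering at $50 with a stop at $48:
shares = position_size(10_000, 0.01, 50.0, 48.0)  # 50 shares
```

With this rule, a tighter stop permits a larger position for the same dollar risk, which is why stop placement and sizing are usually decided together.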
New platforms and technologies simplify algorithm implementation for traders, making this investment method available to more of them. Regulatory agencies are also improving their oversight of these trades, making traders and platforms safer. Algorithmic trading will have a major impact on the stock market in the next few years.
The Master System's VDP is a modified TMS9918, and so most Master System games run in its extended 'Mode 4' setting. That was the only video mode, therefore, that I'd emulated in any form. For some reason, the older computer games are, the more charm they seem to have to me (maybe because the first games I played would have been on a BBC Micro, which certainly looked a lot more primitive than the Master System games I've been attempting to emulate thus far). I dug out TI's TMS9918 documentation - the differences are quite significant! Tiles are monochrome (though you can pick which foreground and background colour is in use - to some extent), the palette is fixed to 16 pre-defined colours (one of which being 'transparent') and sprite sizes and collisions are handled differently. Various features found in the Master System's VDP (scrolling, line-based interrupts) also appear to be missing from the 'vanilla' TMS9918, but I'm not sure whether the SMS VDP's legacy modes gain those features or keep the original TMS9918 limitations. At any rate, the emulator now has an 'SG-1000' mode. The only differences at the moment are that the TMS9918 palette is used and line interrupts are disabled, so you can still (for example) use mode 4 on it. From first to last: Drol, Choplifter, Hustle Chummy, Championship LodeRunner, Space Invaders, H.E.R.O., Elevator Action, and Zaxxon. All but one of the SG-1000 games I had ran - and that was The Castle. According to meka.nam, this has an extra 8KB of onboard RAM. Whilst doing some research into the SG-1000 and the TMS9918, I found a forum post by Maxim stating "The Castle runs nicely on my RAM cart :)". Enter the new ROM mapper option to complement Standard, Codemasters and Korean - it's the RAM mapper, which simply backs the entire Z80 address range with 64KB of RAM. That seems to have done the trick!
I mentioned that the palette was different in the SMS VDP and the original TMS9918 - here's an example (SMS on the left, SG-1000 on the right): I'm assuming this is the result of truncating the TMS9918 palette to the SMS 6-bit palette without needing to implement two palette modes on the new VDP. Another comparison is between two versions of Wonder Boy - SMS on the left again, SG-1000 on the right:
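The truncation I'm describing can be sketched like this (the RGB value used for the sample colour is an approximation - published TMS9918 palette values vary by source):

```python
def to_sms_6bit(r, g, b):
    """Truncate an 8-bit-per-channel colour to the SMS palette format:
    two bits per channel, packed as --BBGGRR."""
    return ((b >> 6) << 4) | ((g >> 6) << 2) | (r >> 6)

# Approximate TMS9918 'medium green', truncated for the SMS palette RAM.
medium_green = (33, 200, 66)
sms_entry = to_sms_6bit(*medium_green)  # 0x1C
```

Pre-computing the 16 fixed colours this way means the emulator's existing 6-bit palette path can render TMS9918 modes with no second palette implementation.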
Accpac on the Amazon Cloud

The Amazon Elastic Compute Cloud (EC2) (http://en.wikipedia.org/wiki/Amazon_Elastic_Compute_Cloud) is a service offered by Amazon.com that allows people to rent virtual computers to run applications on. Some of the advantages offered by this solution include:
- Very easy to get started: you just need an Amazon account, attach it to EC2, and off you go.
- Very inexpensive, with a good (nearly) free trial (http://aws.amazon.com/ec2/pricing/).
- Scalable and expandable depending on your needs.

Often the simplicity of getting started with this solution gets lost, since people are usually confronted with the advanced features at the beginning, which you don't need to worry about until later. Just be re-assured that this is a solution that can grow with you. Below is a diagram of some of the services offered:

In this blog posting we will look at how to deploy Accpac on the Amazon EC2 cloud and discuss some of the trade-offs and choices that can be made along the way. One thing that makes using Amazon EC2 intimidating is the terminology, so here is a quick guide to the key points.
- Amazon Machine Image (AMI) – These are virtual machine snapshots that you take as a starting point for doing work. Amazon provides a number of these as starting points, there are a number of public ones offered by other people, plus you can create your own. Basically, when you want a new virtual machine you take one of these as your starting point.
- Instances – You create an instance from an AMI, and the instance is the virtual machine that you actually run. When you specify the instance you specify the resources it has, including memory, disk space and computing power. For more on the instance types see: http://aws.amazon.com/ec2/instance-types/.

You manage all these things from the Amazon Management Console. Deploying Accpac to Amazon EC2 is fairly straightforward.
You just need to select a starting virtual image (AMI) of something that Accpac supports, create an instance of that, run the instance, install and configure Accpac into that image, and off you go. There are a couple of "gotchas" to watch out for that we will highlight along the way.
- Go to http://aws.amazon.com/ec2/ and sign up for an account.
- Run the AWS Management Console (https://console.aws.amazon.com/ec2) and create a PKI security key pair. You will need to do this before doing anything else. This will be the security token you use to connect to your virtual image running on EC2.
- On the upper left of the management console, make sure it is set to the region that is closest to you, like perhaps "US West".
- Click the "Launch Instance" button on the AWS Management Console. You will now be prompted to choose a starting AMI. A good one to choose is "Getting Started on Microsoft Windows Server 2008" from the Quick Start tab. This one has IIS and SQL Server Express installed.
- Select "Small" for the instance type, unless you know you will need more resources quickly. Then accept the defaults for the advanced instance options. Same for the tags screen (i.e. you probably don't need any).
- On the "create key pair" screen, select the key you created in step 2 (or if you skipped that, then you need to create a pair now).
- On the configure firewall screen, remove the opening for SQL Server; you don't need this. The only two holes in the firewall should be RDP and HTTP. If you are hosting client data, then you should add HTTPS and set up Accpac to use that (see https://smist08.wordpress.com/2010/11/20/setting-up-sage-erp-accpac-6-0a-securely/).
- Now you can review your settings and launch your instance. It can take 15 minutes or so for the instance to launch, mostly due to the time it takes Windows Server 2008 to boot, so this is a good time to go get a coffee.

At this point we have created a virtual image and have it running.
From the AWS Management Console EC2 dashboard, we should see one running instance. We should also see one EBS volume. The EBS volume is the disk image of your virtual image. If you want to, you can create snapshots of your EBS volume (you have to pay to store these) so you can go back to them if you mess up your image.

So now we have our own Windows 2008 server running in the Amazon cloud. Great, but now what do we do? How do we connect to it? How do we transfer files to it? How do we browse to it? What are the Administrator and SQL Server passwords? Now we'll go through the steps of getting the Administrator password, connecting via RDP and installing Accpac.
- Select the instance that you have running in the management console. From the instance actions menu, choose "Get Windows Admin Password". If this doesn't work, you may need to give the instance a bit more time to start. You will get a dialog that wants you to take the file you downloaded back at step 2, load it into Notepad and copy/paste its contents into this dialog. Then this dialog will go off and do a long cryptographic calculation and tell you the Windows password.
- Now you can run Remote Desktop and connect to your instance (if you choose Connect from the instance menu, it will download a file that will start RDP with the right parameters). Use the public DNS as the computer name (from the pane with the instance details below the instance list). Administrator is the login. Be careful: copy/pasting the password can be difficult, as Windows tends to add a space when you copy it. If copy/paste doesn't work, try just typing the password. Now you are logged in and running. Perhaps the first thing you want to do is change the Administrator password to something easier to type and remember. Now you can treat this virtual Windows Server 2008 just like any other remote server.
- Copy the installation image for Accpac onto the virtual machine.
You can use an FTP site or any other file copy mechanism to do this. One convenient method that Windows 7 has is that RDP can make local drives accessible to the remote computer. If you choose Options – Local Resources, you can expose some drives to the remote computer and they will then show up in Windows Explorer there.
- Now we need to enable SQL Server; by default the service is disabled and authentication is set to Windows Authentication only. Go to Admin Services – Services and set the SQL Server services to Automatic and start them. In the SQL Server Configuration Manager, enable TCP/IP and set the port to 1433. In the management console, set the authentication mode to SQL Server and Windows Authentication, then go to the sa user and enable it. Now restart the SQL Server service. Create your Accpac databases such as PORTAL, SAMSYS, SAMINC, SAMLTD, …
- Run the Accpac installation you copied into the image and perform the usual steps to get Accpac up and running. When running database setup, make sure you use localhost as the server name and not the current Windows instance name, because this will change each time you run the image.

We now have Accpac up and running and can access Accpac via RDP. To access the Portal, use the public DNS as the server name in the usual URL for running the portal:

Voila, you are running in the cloud. If you shut down this instance and restart it, you will get a new computer name and a new public DNS. This can be rather annoying if you like to set up browser shortcuts and such. If you want to avoid this, you need to allocate a static Elastic IP address from AWS (doing this costs a small amount of money). Then you can associate this IP address with the instance and now it will stick. Further, you could purchase a meaningful URL and associate it with this IP address. If you don't want to purchase a URL, another trick is to use TinyURL.com to generate a URL for your IP address. This isn't a very meaningful URL, but it's better than the raw IP address.
How Well Does It Run?

Once running, how does it compare to a local server? With the small configuration you are limited a bit in memory. It seems that running the Sage ERP Accpac 6.0A portal in a browser on the virtual image itself, over an RDP session, is a bit slow. However, running the browser locally and hitting the server remotely is quite quick. This implies that the small image is sufficient for the server processes for a few users, though you will need to increase the memory and/or the processing power for more. The nice thing with Amazon is that you can change this fairly easily and only pay for what you are using. It also shows that the Amazon datacenters have quite good network latency, probably better than you can get hosting yourself for remote users.

So can you go production with this? Certainly the platform can support it. The current sticking point is Terminal Server or Citrix licenses. These are available through various programs such as http://community.citrix.com/pages/viewpage.action?pageId=141100352. However, you need to be part of one of these Microsoft or Citrix programs where they give you specific permission to migrate your licenses to EC2. While we still have Windows desktop components this is a potential sticking point. However, once Sage ERP Accpac 6.1A comes out and we can run all the main accounting applications through the web, this problem goes away. Amazon is also addressing other compliance-type concerns, for instance achieving PCI DSS Level 1 Compliance (http://aws.amazon.com/security/pci-dss-level-1-compliance-faqs/?ref_=pe_8050_17986660) and ISO 27001 Certification (http://aws.amazon.com/security/iso-27001-certification-faqs/?ref_=pe_8050_17986660). Receiving these sorts of certifications removes a lot of obstacles to using Amazon for a production environment.
Also, if you want to back up your data locally, you will need to copy a backup of your SQL Server database over the Internet, which could be quite time consuming, but you can let it run in the background.

Amazon's EC2 service offers an excellent way to access extra computing resources at a very low cost. You can deploy services to regions around the world and dynamically adjust the computing resources you are using. For developers, this is a very cheap way to obtain access to test servers during development. For partners, this is an excellent way to establish demo servers. For education, this is an excellent method to learn how to work with different operating systems and to practice installations.
ma vs bensì

Can anyone tell me what is the difference? I also have seen discussions about the word 'però' which I didn't even get in my lessons. Am I missing something?

In my opinion (native speaker), 'ma' is an adversative conjunction (contrasting) that can be used in a wider range of meanings, from pure opposition (i.e. the latter sentence excludes the former) to concession (i.e. the latter sentence contrasts with the former, but doesn't completely exclude it). On the other hand, 'bensì' can be used only in case of opposition (or at least, concessive use is very uncommon). And 'tuttavia' is used only as a concessive conjunction. So they can both be interchangeable with 'ma'.

Opposition: Oggi non è Lunedì, ma Martedì = Oggi non è Lunedì, bensì Martedì (today it's not Monday, but Tuesday).
Concessive: Oggi è freddo, ma è una bella giornata = Oggi è freddo, tuttavia è una bella giornata (today it's cold, but it's a beautiful day).

You can find 'ma bensì' together: it's used in order to strengthen, despite being pleonastic.

Thanks a lot. Can you give any examples where these words can't be used interchangeably or would sound really strange? Or does it really not matter?

My pleasure :) hmmm good question :-) As usual, it's easier to find a rule or a trend than to single out exceptions to the rule :) At the moment I cannot find any failure, but I was pondering the fact that in everyday life I hardly say 'tuttavia' or 'bensì' when talking. I usually go for 'ma' and 'però'. I think you're more likely to find them in written texts (so you have to learn them anyway :P ).

I think (I'm still very much a learner myself) that bensì means something more like "but instead".
"Mi piacciono i gatti ma non mi piacciono i cani." - I like cats but I don't like dogs.
"Non mi piacciono i cani bensì i gatti." - I don't like dogs but instead cats.
"Non ho una casa ma ho un castello." - I don't have a house but I have a castle.
"Non ho una casa bensì un castello."
- I don't have a house but instead a castle.

I just noticed one thing in your examples and the ones provided by Duolingo: after "bensì" only a noun is used and never a verb. On the other hand, examples with "ma" are followed by a verb, or something that can act as a separate sentence. It's just my wild guess. And I still don't know anything about "però". And do I understand your point correctly, that "bensì" is more like an 'exclusive or'?

There are various translating sites available. Google is simple to use but has many mistakes. I like using "Reverso" because it gives lots of example sentences, sometimes with mistakes in the English but easy to decipher. Here is a counter-example from Reverso to what you said: Non deve limitare, bensì promuovere il movimento. = You should not be restricting movement, but encouraging it.
Creating SDDC in VMware Cloud on AWS

Deploying an SDDC to host your workloads in the cloud provides a simple control plane for IT. You can manage, govern and secure applications running in private and public clouds. VMware Cloud on AWS centralizes management, provides comprehensive visibility into your SDDC, and delivers enterprise-class security.

Deploying a Software-Defined Data Center (SDDC) is the first step in making use of the VMware Cloud on AWS service. After you deploy the SDDC, you can view information about it and perform management tasks. When you deploy an SDDC on VMware Cloud on AWS, it is created within an AWS account and VPC dedicated to your organization; VMware creates and manages this VPC, and you have no direct access to it. You must also connect the SDDC to an AWS account belonging to you, referred to as the customer AWS account. This connection allows your SDDC to access AWS services belonging to your customer account.

Once you click CREATE SDDC, you will be asked to enter the AWS region where you want to create your SDDC, provide a name for your SDDC, and choose the number of hosts you want to deploy. The next step is to enter the network details, like the CIDR range for your management network. The address may denote a single, distinct interface address or the beginning address of an entire network. The maximum size of the network is given by the number of addresses that are possible with the remaining, least-significant bits below the prefix. The aggregation of these bits is often called the host identifier.

A few things to remember when entering the CIDR block:
- You can't change the values specified for the management network after the SDDC has been created.
- If you plan to connect your SDDC to an on-premises data center, the IP addresses you choose must be different from the ones in your on-premises data center, to avoid IP address conflicts.
- The maximum number of hosts your SDDC can contain depends on the size of the CIDR block you specify. In order to accommodate more than four hosts, you must specify a /16 or /20 CIDR block.

Once the details are entered, click DEPLOY SDDC, and within a few minutes the SDDC is ready for you, showing the number of hosts, CPU, memory and storage.
- Summary – this is the default management page for your SDDC. View CPU, memory and storage metrics, network configuration, connection info and support, as well as actions that control your SDDC. You can also directly open your vCenters from your VMware Cloud on AWS console for ease of management, VM migrations, content migration and much more!
- Network – provides a full diagram of the Management and Compute Gateways. This is where you can view which VPNs are configured and the firewall rules. We will cover this in more detail later.
- Connection Info – gives you access to your vSphere Web Client, vCenter Server and vCenter Server API, and reviews your authentication information.
- Support – you can contact Support with your SDDC ID, Org ID, vCenter private and public IPs and the date of your SDDC deployment.
- Actions Menu – this will contain any actions available for your SDDC, including deletion of the environment.
- Open vCenter – you can directly access your private SDDC through this option. Before you can log in to your vCenter, you must open network access to vCenter through the management gateway. Choose an option for opening network access by creating a firewall rule and setting up your VPN access.

Switching to the dark theme now !!!

The next step is to configure the network details for your management network. By default, the firewall for the management gateway is set to deny all inbound and outbound traffic.
You may add additional firewall rules to allow traffic as needed. So here we are creating firewall rules to allow vCenter access through port 443.

Creating a management VPN allows you to securely access the vCenter Server system and Content Library deployed in your SDDC. Configure an IPsec VPN between your on-premises data center and cloud SDDC to allow easier and more secure communication. You don't have to set up a VPN connection, but transferring virtual machine templates and disk images into your SDDC in the cloud is easier and more secure if the connectivity is complete. So the next step is to configure a VPN to your on-premises data center, and here are the simple steps you need to follow.

Configuring a management VPN requires the following:
- An on-premises router or firewall capable of terminating an IPsec VPN, such as Cisco ISR, Cisco ASA, CheckPoint Firewall, Juniper SRX, NSX Edge, or any other device capable of IPsec tunneling.
- The router or firewall should be configured with cryptography settings as described in Recommended Cryptography Settings in the VMware Cloud on AWS documentation.

If your on-premises gateway is behind another firewall, allow IPsec VPN traffic to pass through the firewall to reach your device by doing the following:
- Open UDP port 500 to allow Internet Security Association and Key Management Protocol (ISAKMP) traffic to be forwarded through the firewall.
- Open UDP port 4500 to allow Internet Key Exchange (IKE) NAT-traversal traffic (required only if NAT is used).
- Allow IP protocol ID 50 so that IPsec Encapsulating Security Payload (ESP) traffic can be forwarded through the firewall.
- Allow IP protocol ID 51 so that Authentication Header (AH) traffic can be forwarded through the firewall.

When the VPN tunnel is configured in the private cloud, you should be able to verify connectivity both in the VMware Cloud on AWS Console and by accessing the vCenter Server deployed in your environment with a Web browser.
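The relationship between the management CIDR block and address capacity described earlier can be checked with Python's ipaddress module (the example blocks below are placeholders, not recommended ranges):

```python
import ipaddress

def usable_addresses(cidr):
    """Host addresses in a CIDR block, excluding the network
    and broadcast addresses."""
    net = ipaddress.ip_network(cidr, strict=True)
    return net.num_addresses - 2

# Longer prefixes leave fewer bits for the host identifier.
for block in ("192.168.0.0/23", "192.168.0.0/20", "172.16.0.0/16"):
    print(block, usable_addresses(block))
```

This makes the sizing trade-off concrete: a /23 leaves 9 host-identifier bits (510 usable addresses), while a /20 or /16 leaves far more room for additional hosts.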
After you have saved the configuration, the VPN should now show as connected in the console diagram and the VPN settings. The next steps connect to the on-premises environment by adding DNS settings from your SDDC. In the image below, I did not have a connection maintained, hence the yellow warning; otherwise it would be green.

In my next blog I will write about how to set up a compute network in the SDDC and what happens once you have a VM deployed in your SDDC.
Otherwise, this diagram gives a conceptual overview, and the subsequent lists and links have more information. This flowchart is used with permission from the Hofstra/Northwell Health school of medicine library resource guide.

The DMS Policy provides very specific guidelines on what characteristics make a data repository suitable for sharing data from NIH-funded research. The DMS Policy also provides the following set of priorities in selecting a data repository:

Priority 1: If the Institute, Center, Office (ICO) policy and/or the Funding Opportunity Announcements (FOAs) identify particular data repositories (or sets of repositories) to be used to preserve and share data, then that repository (or repositories) takes priority.

Priority 2A: If Priority 1 does not apply, then prioritize data repositories that are discipline- or data-type-specific, particularly the ones listed at https://sharing.nih.gov/data-management-and-sharing-policy/sharing-scientific-data/repositories-for-sharing-scientific-data.

Priority 2B, if neither of the above fits: If no appropriate discipline- or data-type-specific repository is available, consider any of the following 3 data sharing options (no priority implied):

2B i: Datasets up to 2 GB in size and related to specific articles may be uploaded as supplementary material to articles submitted to PubMed Central

2B ii: Approved generalist data repositories, and institutional repositories built for digital preservation and archiving

2B iii: Large datasets may benefit from cloud-based data repositories for data access, preservation, and sharing.

These options are for repositories where data will be shared, either by public download or restricted access mediated by the repository or the PI.
If you are interested in FAIR discoverability of data but cannot share the data itself due to ethical, legal, or technical challenges, another option would be sharing metadata and documentation in the VCU data catalog, with guidance on the terms for requesting access or collaboration.

Are you planning to use OSF as a generalist repository to address data preservation and access? Consider the text below as a starting place for addressing Element 4 using OSF:

4.1 The name of the repository(ies) where scientific data and metadata arising from the project will be archived: Data and metadata will be made available on OSF, the Open Science Framework. OSF's preservation and archiving infrastructure is maintained by the Center for Open Science. [Note for larger datasets: If your data are likely to exceed the 50 GB storage cap provided by VCU's membership as an OSF institution, consider budgeting for additional storage based on the fees at https://www.cos.io/osf-usage]

4.2 - How the scientific data will be findable and identifiable, i.e., via a persistent unique identifier or other standard indexing tools: The data and associated files and documentation will be available by persistent URL until the data is released for open download. After release for general availability, the data and associated files and documentation will also be available by DOI. The persistent URL will also continue to resolve to the data, and both the persistent URL and the DOI will make the data findable by standard internet search.

4.3 - When the scientific data will be made available to other users (i.e., the larger research community, institutions, and/or the broader public) and for how long: The data will be released for general visibility and download at the end of [DESCRIBE WHEN YOU WILL CHANGE DATA FROM PRIVATE TO PUBLIC, e.g. at the end of the grant period, 1 year after project end, etc.]. Release will be handled manually by the PI.
Until general release, the PI will manually add collaborators and researchers approved for access to the data. The data and associated files and documentation will remain available for a minimum of 50 years after deposit, based on the OSF’s preservation plan.
Here to support all the new projects who don't have shilling Telegram/Discord communities already. Also found some gem projects that have nothing to do with cryptocurrency/NFTs or greed. True gem projects hidden under shilled projects. Some community members found evidence of shilling and have provided details in the voting thread. Would also recommend staying away from projects that have huge salaried teams that voted for themselves and are in the top rankings. Such projects have submitted the same project across multiple categories and are trying to create fake buzz, shilling the same project as different posts with different titles. Some projects have literally brand-new voters with negligible reading time. What is the value of such fake votes by voters who have voted without even checking the other projects? I hope the Tron team, judges and community members don't just focus on trendy projects because of fake shilling and buzz, but rather identify the true gems of this hackathon that can take Tron places if funded well and supported. Wishing the Tron Hackathon 2022 Season 2 participants and the Tron team great success!

On Devpost, voting is completely done by judges, so there is no community involvement. Only on the forum can the community vote for their favourite project, and the winning project will earn some extra reward. So community votes don't make any impact on the Devpost results. If you think your project is a true gem, wait for the Devpost results (from the judges).

None of the teams at the top currently are "high salaried" besides Tronverse, who is from the SR Tronspark. Some of them are even working for FREE. How do I know this? Because I have been in the Tron community for 2 years. Don't worry too much about the community vote, focus on the Devpost judging. Also, if you specifically point out who you think is having an unfair advantage, I can bring up some counterpoints depending on the project.

You know, this hackathon is a great way to bring projects and Tronics together.
New projects are very welcome to reach out to bigger ones to start some collaborations and synergies. I feel like your post is spreading division while we should all be enjoying the moment together, no matter the result of the community vote. The main prize (which includes not only money but, more importantly, visibility and recognition) will be granted by the judges, who are currently analysing products/dapps regardless of how big projects' communities are. Bigger projects are easy to reach and always glad to start new collaborations or give a hand/advice if needed. So my advice for new project team members is to start contacting those bigger projects and think about win-win collabs that will benefit the entire community of Tronics. That way you will gain partners, new community members and visibility.

I think you have a distorted view of the established projects that populate the Tron ecosystem. I'm sure of it when I read a reference to "salaries" in your post. No one is getting a salary for this. We do our best to cooperate, build products and partnerships to be able to grow and bring new users to Tron. The hackathon is a showcase and every project has the same visibility. Even the organizers believe that having a strong community is positive, otherwise they would not have created a special prize on Devpost, the "Community Prize". It's already been said before that the community prize is little compared to the main Devpost prizes (which are not only money).

Every hackathon will have new and old projects… in the long run it's a process of evolution… the new today becomes old tomorrow… the bigger goal is to create collaboration and increase usage and adoption of blockchain… as this increases… everyone building benefits.

This sounds really nice and completely understandable - Tron for success

Walk under major projects, listen to their orders? What kind of decentralization is this?

@FirstWave You have a very personal idea about collaborations and partnerships.
I have already seen how it ends. And given the information that other users post about their actions, and the fact that they do not want to voluntarily leave the "feeder" despite having already received a lot of money, everything is heading that way.

@fabsltsa You don't even need to search. Look at the message from @NshowNFT (you can see by his avatar that he is either a representative or a user of one of the projects that won last time; moreover, this is a good and respected project, and I myself voted for them last time). Instead of paying more attention to the words of the topic starter, the words of the young projects that the entire forum is full of, as well as my words, and understanding that there is a mistake somewhere in the rules (after all, if completely different people and different projects that don't even know each other talk about this and provide various evidence, it means there is a problem), he starts to provoke me.

In the last month there were about 1400 active users on the forum, and those who complain about the rules are a very small number compared to all the others who just want to participate and help make this event a success. Again, I suggest everyone concentrate on the end goal, get informed about the projects, and vote accordingly. Complaining about projects that have bigger or established communities does not help, IMHO. I think some people just get bored seeing all this negativity. You can stay in your corner waiting for things to happen by themselves if that's what you really want. If you translate partnership and dialogue into submissiveness and obedience, I really doubt your ability to grow a project successfully. In the meantime I'm going to help the new projects that have already messaged me here and on Telegram, and encourage those who haven't yet to contact other community projects (big or small) with a positive mindset and the wish to grow together.

It all depends on who you partner with.
There are good and honest people, and there are bad ones. In any case, if a partnership or anything else is imposed in one way or another, that is bad. Regarding the negative and the positive: it is certainly easier to call a defect and its discussion "negativity" instead of considering it, taking it into account, and working on the mistakes. Personally, I don't understand why people add emotions here.

I can't understand your statement that you have already seen how it ends. The judges will decide, not the community.

That is why I wrote what I did! The quantity doesn't matter. If there is a problem, then it is there and will not disappear just like that. People do not aim to complain about various violations on the part of large communities; they first of all point to the presence of an existing initial advantage, as well as pressure (or do they write out of boredom - what do you think?).

Can you help me? I sent money by mistake from my account on Binance to the wallet with this code: TJDENsfBJs4RFETt1X1W8wMDc8M5XnJhCe How can I get my money back?

I can pray that all Tron projects will be fair and honest for the sake of the community.
In the story of Gauss's problem of adding up the numbers from 1 to 100, one interpretation of the result, 5,050, is that the average of all the numbers from 1 to 100 is 50.5. This is the ordinary definition of an average: add up all the things you have, and divide by the number of things. (The result in this example makes sense, because half the numbers are from 1 to 50, and half are from 51 to 100, so the average is half-way between 50 and 51.)

Similarly, a definite integral can also be thought of as a kind of average. In general, if y is a function of x, then the average, or mean, value of y on the interval from x = a to b can be defined as

    ȳ = (1/(b−a)) ∫_a^b y dx .

In the continuous case, dividing by b−a accomplishes the same thing as dividing by the number of things in the discrete case.

◊ Show that the definition of the average makes sense in the case where the function is a constant.
◊ If y is a constant c, then we can take it outside of the integral, so

    ȳ = (1/(b−a)) c ∫_a^b dx = c(b−a)/(b−a) = c .

Example 7
◊ Find the average value of the function y = x² for values of x ranging from 0 to 1.
◊ ȳ = (1/(1−0)) ∫_0^1 x² dx = 1/3.

The mean value theorem: If the continuous function y(x) has the average value ȳ on the interval from x = a to b, then y attains its average value at least once in that interval, i.e., there exists ξ with a < ξ < b such that y(ξ) = ȳ. The mean value theorem is proved on page 161. The special case in which ȳ = 0 is known as Rolle's theorem.

◊ Verify the mean value theorem for y = x² on the interval from 0 to 1.
◊ The mean value is 1/3, as shown in example 56. This value is achieved at x = √(1/3) = 1/√3, which lies between 0 and 1.

In physics, work is a measure of the amount of energy transferred by a force; for example, if a horse sets a wagon in motion, the horse's force on the wagon is putting some energy of motion into the wagon. When a force F acts on an object that moves in the direction of the force by an infinitesimal distance dx, the infinitesimal work done is dW = F dx.
Integrating both sides, we have

    W = ∫_a^b F dx ,

where the force may depend on x, and a and b represent the initial and final positions of the object.

◊ A spring compressed by an amount x relative to its relaxed length provides a force F = kx. Find the amount of work that must be done in order to compress the spring from x = 0 to x = a. (This is the amount of energy stored in the spring, and that energy will later be released into the toy bullet.)
◊ W = ∫_0^a kx dx = (1/2)ka².

The reason W grows like a², not just like a, is that as the spring is compressed more, more and more effort is required in order to compress it.

Mathematically, the probability that something will happen can be specified with a number ranging from 0 to 1, with 0 representing impossibility and 1 representing certainty. If you flip a coin, heads and tails both have probabilities of 1/2. The sum of the probabilities of all the possible outcomes has to equal 1. This is called normalization.

So far we've discussed random processes having only two possible outcomes: yes or no, win or lose, on or off. More generally, a random process could have a result that is a number. Some processes yield integers, as when you roll a die and get a result from one to six, but some are not restricted to whole numbers, e.g., the height of a human being, or the amount of time that a uranium-238 atom will exist before undergoing radioactive decay. The key to handling these continuous random variables is the concept of the area under a curve, i.e., an integral.

Consider a throw of a die. If the die is "honest," then we expect all six values to be equally likely. Since all six probabilities must add up to 1, the probability of any particular value coming up must be 1/6. We can summarize this in a graph, f. Areas under the curve can be interpreted as total probabilities. For instance, the area under the curve from 1 to 3 is 1/6 + 1/6 + 1/6 = 1/2, so the probability of getting a result from 1 to 3 is 1/2.
The function shown on the graph is called the probability distribution.

Figure g shows the probabilities of various results obtained by rolling two dice and adding them together, as in the game of craps. The probabilities are not all the same. There is a small probability of getting a two, for example, because there is only one way to do it, by rolling a one and then another one. The probability of rolling a seven is high because there are six different ways to do it: 1+6, 2+5, etc. If the number of possible outcomes is large but finite, for example the number of hairs on a dog, the graph would start to look like a smooth curve rather than a ziggurat.

What about probability distributions for random numbers that are not integers? We can no longer make a graph with probability on the y axis, because the probability of getting a given exact number is typically zero. For instance, there is zero probability that a person will be exactly 200 cm tall, since there are infinitely many possible results that are close to 200 but not exactly 200, for example 199.99999999687687658766. It doesn't usually make sense, therefore, to talk about the probability of a single numerical result, but it does make sense to talk about the probability of a certain range of results. For instance, the probability that a randomly chosen person will be more than 170 cm and less than 200 cm in height is a perfectly reasonable thing to discuss. We can still summarize the probability information on a graph, and we can still interpret areas under the curve as probabilities.

But the y axis can no longer be a unitless probability scale. In the example of human height, we want the x axis to have units of centimeters, and we want areas under the curve to be unitless probabilities. The area of a single square on the graph paper is then

    (width of square, in cm) × (height of square, in units of the y axis).

If the units are to cancel out, then the height of the square must evidently be a quantity with units of inverse centimeters.
In other words, the y axis of the graph is to be interpreted as probability per unit height, not probability. Another way of looking at it is that the y axis on the graph gives a derivative, dP/dx: the infinitesimally small probability that x will lie in the infinitesimally small range covered by dx.

Example 10
◊ A computer language will typically have a built-in subroutine that produces a fairly random number that is equally likely to take on any value in the range from 0 to 1. If you take the absolute value of the difference between two such numbers, the probability distribution is of the form dP/dx = k(1−x). Find the value of the constant k that is required by normalization.
◊ Normalization requires ∫_0^1 k(1−x) dx = k/2 = 1, so k = 2.

Compare the number of people with heights in the range of 130-135 cm to the number in the range 135-140. (answer in the back of the PDF version of the book)

When one random variable is related to another in some mathematical way, the chain rule can be used to relate their probability distributions.

Example 11
◊ A laser is placed one meter away from a wall, and spun on the ground to give it a random direction, but if the angle u shown in figure j doesn't come out in the range from 0 to π/2, the laser is spun again until an angle in the desired range is obtained. Find the probability distribution of the distance x shown in the figure. The derivative d(tan⁻¹z)/dz = 1/(1+z²) will be required (see example 66, page 88).
◊ Since any angle between 0 and π/2 is equally likely, the probability distribution dP/du must be a constant, and normalization tells us that the constant must be dP/du = 2/π. The laser is one meter from the wall, so the distance x, measured in meters, is given by x = tan u, i.e., u = tan⁻¹x. For the probability distribution of x, we have

    dP/dx = (dP/du)(du/dx) = (2/π) · 1/(1+x²) .

Note that the range of possible values of x theoretically extends from 0 to infinity. Problem 7 on page 104 deals with this.
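As a quick numerical sanity check on the laser example (my own Python sketch, not part of the book), one can simulate the spinning laser and compare it against the derived distribution dP/dx = (2/π)/(1+x²). Since tan u < 1 exactly when u < π/4, half of the spins should land within one meter of the point on the wall closest to the laser:

```python
import math
import random

random.seed(1)  # fixed seed so the check is reproducible

def sample_x():
    """Spin the laser: a uniform angle u in (0, pi/2), then x = tan(u)."""
    u = random.uniform(0.0, math.pi / 2)
    return math.tan(u)

def cdf(x):
    # Integral of dP/dx = (2/pi) / (1 + x^2) from 0 to x.
    return (2 / math.pi) * math.atan(x)

n = 200_000
frac_below_1 = sum(sample_x() < 1.0 for _ in range(n)) / n
# frac_below_1 should be close to cdf(1.0) = (2/pi)(pi/4) = 1/2
```

The analytic CDF and the simulated fraction agree to within sampling error, confirming the chain-rule computation.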
If the next Martian you meet asks you, "How tall is an adult human?," you will probably reply with a statement about the average human height, such as "Oh, about 5 feet 6 inches." If you wanted to explain a little more, you could say, "But that's only an average. Most people are somewhere between 5 feet and 6 feet tall." Without bothering to draw the relevant bell curve for your new extraterrestrial acquaintance, you've summarized the relevant information by giving an average and a typical range of variation.

The average of a probability distribution can be defined geometrically as the horizontal position at which it could be balanced if it was constructed out of cardboard, i. This is a different way of working with averages than the one we did earlier. Before, we had a graph of y versus x, we implicitly assumed that all values of x were equally likely, and we found an average value of y. In this new method using probability distributions, the variable we're averaging is on the x axis, and the y axis tells us the relative probabilities of the various x values.

For a discrete-valued variable with n possible values, the average would be

    x̄ = Σ_i x_i P_i ,

and in the case of a continuous variable, this becomes an integral,

    x̄ = ∫ x (dP/dx) dx .

◊ For the situation described in example 59, find the average value of x.

Sometimes we don't just want to know the average value of a certain variable, we also want to have some idea of the amount of variation above and below the average. The most common way of measuring this is the standard deviation, defined by

    σ = √( ∫ (x − x̄)² (dP/dx) dx ) .

The idea here is that if there was no variation at all above or below the average, then the quantity (x − x̄) would be zero whenever dP/dx was nonzero, and the standard deviation would be zero. The reason for taking the square root of the whole thing is so that the result will have the same units as x.

◊ For the situation described in example 59, find the standard deviation of x.
◊ The square of the standard deviation is
You will need to replace the values one after the other, as in a for-loop, or copy one array onto another, for example using memcpy(..) or std::copy. Having said that, it is important to note that C++ is very widely used, including in device drivers, application software, entertainment software, and more. Your C++ homework will probably test your ability to use this multi-paradigm language and machine code.

I would like answers to my homework questions. JA: The Math Tutor can help you get an A on your homework or ace your next test. Tell me more about what you need help with so we can help you best. Customer: i … read more

number of seats in the car, and if the car has seat belts in the rear, but you cannot ask whether it is a convertible, or what its cargo capacity is.

arrange to meet up; if one task reaches it first then it waits for the other to arrive. And in fact a queue is formed for each rendezvous of all

In the code below we introduce a feature of Ada, the ability to name the elements we are going to initialise. This is useful for clarity of code, but more importantly it allows us to initialise only the bits we want.

Ada is also often assumed to be a military language, with the US Department of Defense its prime advocate; this is not the case, as a number of commercial and government developments have now been implemented in Ada. Ada is an excellent choice if you want to spend your development time fixing your

Will not only help the student to build a strong foundation in the subject but will boost their confidence to face technical interviews boldly. Problems with programming assignments are among the main difficulties students have when trying to complete hard degree programs, which is why programming help is needed.
We have now built a team of experts with knowledge and degrees in your fields to give you programming assistance that is in line with the best practices developed in the present – not the past.

Note: the rule above still applies; 'Pred of Monday is an error. 'Val gives you the value (as a member of the enumeration) of element n.

converted from an integer value to a double value.) Real number constants can also be followed by e or

Programmers are good at recognizing homework questions; most of us have done them ourselves. The general feeling is that you should work it out yourself, so that you learn from the practical experience. It is OK to ask for hints, though not for complete solutions.

It is imperative that you practice. If programs are implemented consistently, then programming assignments would not be a problem for you. Two things are clear: to become a programming expert one has to understand the theoretical concepts, and second, to apply those concepts in practical applications. There are many programming frameworks available to write your code or build an application. Desktop applications, web applications, animation and several other projects can be worked out using programming. The essence of programming lies in the fact that the underlying structure of any programming language is the same, and it's just the syntax that changes. We can also say that the logic of the code doesn't change. If you gain skills in Java, then a programming assignment on the .Net framework using C# or C++ is equally straightforward. Programming assignment help provided by allassignmenthelp.com takes care of these fundamentals, and most of our tutors are proficient with every programming assignment.

ignored. Representation of types 13.
As you might expect with Ada's background in embedded and systems programming, there are ways in which you can force a type into a specific system
The main purpose of this study is to develop disease prediction models to quickly and accurately turn data into diagnosis. Therefore, this study developed machine learning, deep learning, and ensemble models for the classification of 39 diseases (Supplement Table S9) among patients visiting the emergency room, using 88 laboratory test parameters including blood and urine tests (Supplement Table S1). The overall workflow of the disease prediction model based on laboratory tests (DPMLT) is schematically demonstrated in Figure 1. This protocol is largely composed of 5 parts, and the third part explains the machine learning model and the deep learning model.

1.0 Data collection and preprocessing
We collected anonymized laboratory test datasets, including blood and urine test results, along with each patient's final diagnosis on discharge. We curated the datasets and selected 86 attributes (different laboratory tests) based on value counts, clinical importance-related features, and missing values. For deep learning (DL), missing values were replaced with the median value for each disease.

2.0 Feature extraction
Feature extraction plays a major role in the creation of machine learning (ML) models.

3.0 Model selection and training

3.1 DL selection
The research in this study was conducted using a deep neural network (DNN) for structured data.

3.2 MLP (multi-layer perceptron)
All features used in this study are numeric data except for the 'sex' feature. MLP recognizes only numerical data, so we transformed the categorical feature of 'sex' into a number using the LabelEncoder of the scikit-learn library. MLP does not allow null values, so we replaced null values with the median value of each feature.

3.3 Feature normalization
Each feature had a different range. We applied standard scaling to normalize the mean and standard deviation of each feature to 0 and 1, respectively, by subtracting the mean value of the feature and dividing by its standard deviation.
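The study uses scikit-learn's LabelEncoder and StandardScaler for the encoding and normalization steps described above. The sketch below is a hypothetical pure-Python equivalent (the function names are mine, not from the study) showing what each transformation does:

```python
import math

def label_encode(values):
    """Map each distinct category to an integer, like sklearn's LabelEncoder."""
    classes = sorted(set(values))
    mapping = {c: i for i, c in enumerate(classes)}
    return [mapping[v] for v in values], mapping

def standard_scale(column):
    """Z-score a column: subtract its mean, divide by its standard deviation,
    so the scaled column has mean 0 and standard deviation 1."""
    n = len(column)
    mean = sum(column) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / n)
    return [(x - mean) / std for x in column]

# 'sex' is the only categorical feature; every other feature is numeric.
sexes, mapping = label_encode(["F", "M", "M", "F"])
scaled = standard_scale([1.0, 2.0, 3.0])
```

In practice one would fit the scaler on the training set only and apply the same mean/std to the test set, to avoid leaking test-set statistics into training.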
3.4 Hidden layer composition
In our study, the hidden layer was comprised of two layers. We employed the ReLU (rectified linear unit) activation function for each layer. We applied the dropout technique to each hidden layer, which is a simple method to prevent overfitting in neural networks.

3.5 XGBoost
XGBoost is an algorithm that overcomes the shortcomings of GBM (gradient boosting machine). The disadvantages of GBM include long learning times and overfitting problems. The most common ways to solve these problems are through parallelization and regularization. Our dataset contained null values, which we replaced with the corresponding median values for MLP; XGBoost, however, has a built-in procedure to process null values, so we used that procedure. The max_depth argument in XGBoost is one factor determining the depth of the decision tree. Setting max_depth to a large number increases complexity and can lead to overfitting. This study found that max_depth was optimally set to 2.

3.6 LightGBM
The difference between LightGBM and XGBoost is the method by which the tree grows. XGBoost creates a deeper level within the leaf (level-wise/depth-wise), and LightGBM generates a leaf at the same level (leaf-wise). LightGBM uses a leaf-centered tree-splitting method to split the leaf node with the maximum loss value, creating an asymmetric tree. To avoid overfitting in LightGBM, an experiment was conducted by adjusting num_leaves and min_child_samples.

3.7 Ensemble model results (DNN, ML)
We developed a new ensemble model by combining our DL model with our two ML models to improve AI performance. We used the validation loss for model optimization.

4.0 K-fold cross-validation
In our study, we divided a total of 5145 datasets at a ratio of 8:2 to create the training set and test set. We set the validation data ratio to 0.2 for the training set, which was evaluated using validation loss for model optimization based on the training data.
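The hidden-layer recipe described in section 3.4 (ReLU activation followed by dropout) can be illustrated with a minimal framework-free sketch. This is illustrative only; the study used a real DNN framework, and the numbers below are made up:

```python
import random

def relu(x):
    # ReLU passes positive values through and clamps negatives to zero.
    return x if x > 0 else 0.0

def dropout(activations, rate, training=True):
    """Randomly zero a fraction `rate` of activations during training,
    scaling survivors by 1/(1-rate) so the expected sum is unchanged
    (the 'inverted dropout' convention). At inference time it is a no-op."""
    if not training:
        return list(activations)
    keep = 1.0 - rate
    return [a / keep if random.random() < keep else 0.0 for a in activations]

# One hidden layer's forward pass on a toy pre-activation vector:
hidden = dropout([relu(v) for v in [-0.5, 0.2, 1.3]], rate=0.5)
```

Zeroing random units each step prevents the network from relying on any single co-adapted feature, which is the overfitting-prevention effect the paper refers to.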
If the number of validation data is increased, the number of training data decreases, leading to a problem of high bias. We used k-fold cross-validation to prevent this loss of training data.

5.0 SHAP (SHapley Additive exPlanations)
SHAP is an acronym for SHapley Additive exPlanations; as the name suggests, it is based on the Shapley value. In our MLP experiment, we calculated SHAP values using DeepLIFT.
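The k-fold splitting described in section 4.0 can be sketched in plain Python. This is an illustrative sketch, not the study's code (scikit-learn's KFold does the same job): each fold serves exactly once as the validation set while the remaining folds form the training set, so every sample is used for training in k−1 of the k rounds.

```python
def k_fold_indices(n_samples, k):
    """Partition sample indices into k folds and yield (train, validation)
    index lists, one split per fold."""
    indices = list(range(n_samples))
    # Distribute any remainder over the first folds so sizes differ by at most 1.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(indices[start:start + size])
        start += size
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits
```

For the study's 8:2 split of 5145 samples, k = 5 would reproduce the same validation fraction while still letting every training sample contribute to model fitting.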
As this is the first post, I'd like to tell you a little bit about what the blog will be like. My catch phrase above really gets to the heart of it. I believe I have accumulated a great deal of intuition and knowledge regarding economics, finance, and other subjects (please see the "About Me" blurb to the left), and that I am good at teaching what I know in a way that's clear and easy to understand, but also accurate. I am far less willing than average to say something that is simple but substantially false and misleading in order to make it easier to understand. What you end up easily understanding is something that's substantially false, and often in important ways. I believe I am good at teaching difficult things clearly and in an easy to understand way without saying things about it that are simple, but very untrue. What makes me think this? As you can see from the "About Me" blurb, I have a lot of experience teaching and have gotten some great feedback. This leads me to the next point. First, I really want to be truthful and accurate, and part of that is not lying about myself, so I don't want to say I don't think I'm an excellent teacher because that would be untrue. Furthermore, I don't want to just say nothing about it, because I think it's valuable for you to know this, and that I do get outstanding teaching reviews. It is an important benefit of reading my blog. In any case, though, if I sound egotistical, I apologize. You may have also noticed that my writing often deviates from what is considered today to be "good style". For example, it's not "tight enough", "concise enough"; it has grammar errors, non-misleadingness is not a word, etc., etc. I am far less willing than average to sacrifice clarity, accuracy, ease of understanding, and important nuance, details, caveats, etc., to get what is considered "good writing style". For important material, it's usually much better that it be less "tight" so that people really understand it. 
You do no one a favor if you make it have 50% less words, but they end up having to slowly read it twice, actually spending more time, and they still don't understand many important things. You do them no favor if you leave out a lot of important nuance, detail, and caveats, so it's smooth flowing and concise, saving them maybe 10 minutes, but costing them far more from mistakes they make from not knowing this important nuance, detail, etc. So you will see that my writing often deviates substantially from what's considered today to be "good style", but I hope you will also find that it's very clear, fast and easy to understand, and that you gain a great deal of valuable understanding, that in many cases you weren't finding elsewhere. Sometimes you may think I could have explained something just as well with "good writing style", and you may be right, but if I don't have time to think of a way to explain it well that's smooth, I may just choose to just state it clearly and accurately, even if it's clunky, rather than not stating it at all. As you can see from my "About Me", like most people, I have a lot going on in my life and limited time, and I believe in having a personal life and plenty of sleep for good health. So often I won't have much time for the blog. But I think one of the great things about blogs is they're a way to get writing out quickly; they're a forum where you don't have to spend a lot of time to get it really polished, and this is very valuable, because without such forums a lot of great information and ideas would really never get out, or would get out much later. It would just keep getting put off until the person had time to polish it and get it into a finished state. That might be years later (if ever), and the ideas and information might have done a lot of good if they were released earlier, especially if they were very topical. 
So blogs serve an important function in that regard, and I'm quite sure I'll be posting a lot of very unpolished writing. Thank you for reading, and without further ado, let's get started...
Part time PhD while working, who owns IP?

I'm interested in doing a part time PhD on the same topic I am currently working on at a company. What do I need to do to make sure that I can fulfill my PhD obligations while protecting the IP of the company? How is this dealt with? Plenty of people work on start-ups after their PhDs which continue their research in some way, so there must be a way to separate the two.

You need to run this by all relevant parties and legal departments. Universities often have ownership of works created by people employed by them using university resources. This could create issues with your primary employer.

IP can be thorny. The difficulty of this is one reason that both companies and universities may put restrictions on whether you can work simultaneously for anyone else. I don't see how a general answer could be possible. If you're serious about this you probably need to ask a lawyer experienced in this area in your jurisdiction, not StackExchange. They will probably have to help you draft contracts with the relevant parties.

Suppose I am not funded by the university but by an external source, does the university still own anything?

Most universities will have commercialisation departments specifically for working through these issues. You need to get an agreement in place before starting. The most important conflict will be your (and your supervisor's) need to be able to publish results, compared with your employer's need for confidentiality.

@FourierFlux It would be extremely difficult (and useless) to use zero university resources while conducting PhD research.

Adding to @CameronWilliams, there are three parties that potentially have rights: the university, the employer, and the funding source. Projects can have more than one funding source and collaborators at other universities, so there's an unbounded number of parties that have to agree.
Since the university presumably has some input, e.g., through discussions with your supervisor (because otherwise, it would be independent research, not a Ph.D. project), the university will at least have a reasonable claim on IP. We can't tell you. You will need to talk to your employer's legal and/or Human Resources team, and get them in contact with the university. There may be policies on situations like these on one of the two sides which the other side is happy about; then things may be easy. There may be policies on both sides, then negotiations may ensue. This may take a while. As an example, if I collaborate with a university on some research project, my employer typically wants to own all IP generated, but will license it back to the university involved for research purposes at no cost. If the university is fine with that, they can just sign our contract, and everyone is happy. If they are not, things get tricky. Whatever you do, start soon. I have been quoted a six month time frame for such negotiations from my employer's legal team. Also, note that if more than one university is involved, e.g., in some research collaboration, the negotiations may become very involved - I personally have been counseled not to even try a constellation like this, because coming to an agreement is likely impossible. Six months is perhaps a bit on the low side in my experience. But it is highly dependent on which university - some (CalTech) are excellent to work with (and they actually bill on time at the end of the fiscal year). Others? Not so much.
February 17, 2006, 10:47 am

I would like to understand the best ways to send bulk mailings, like newsletters and so on (not spamming - always to recipients who opted in). I am not looking for code for now; I have found some very interesting classes and I also have my own scripts. I just want to know all the options available and why one is better than another. So, if I have to send just a few copies to, say, less than one hundred people, I can use the mail function. If I have to send, say, 20 thousand emails, I will have to hire dedicated mail servers. But what if I have to deliver a few hundred emails? What is usually the max limit using my shared hosting before moving to a dedicated mail server? What is the best way? Maybe splitting the delivery into batches using the default mail (sendmail), or by SMTP? TIA to whoever will enlighten me on this matter.

Re: best ways for mass mailing on 02/17/2006 08:47 AM
johnny said the following:

Mailing has two phases: queueing and delivering. Applications do not deliver, they only queue in an MTA (Mail Transfer Agent) that takes care of the delivery sooner or later. Queueing can be very fast if you inject the messages directly into the MTA queue and tell it to deliver them later on the next queue run. This way your queueing script takes much less time to queue all messages.

NEVER use SMTP for queueing, unless you have no option. That is the slowest way to queue messages, despite what some people believe. Queueing via SMTP is just another way to inject messages into the MTA queue. The problem is that you need to establish a TCP connection to the same machine. That is a silly solution when you can call the sendmail program directly to do the same thing without the TCP connection overhead.
SMTP should only be used when you have no way to communicate with the MTA, for instance when the MTA is on another machine, or there is no sendmail installed on the local machine. In sum, always use sendmail (or equivalent) to queue your messages. Some sendmail implementations (or emulations) provide options to tell it to defer deliveries. If possible use that option.

You may also want to take a look at this class that can compose and send messages using several different queueing alternatives implemented by sub-classes. There is one sub-class specialized in sendmail that provides control over those sendmail options. Just call the SetBulkMail function, and it attempts to optimize message queueing for mass mailing. The class documentation has more mass mailing tips, like avoiding personalization to take advantage of the message body caching speedup.

As for the delivering phase, there is not much you can do, because it is under the control of the MTA. What I can tell you is that the outgoing bandwidth plays a decisive role in the final delivery time. If you have more outgoing bandwidth, you can deliver messages faster and eventually more messages at the same time. Beware that some hosting companies say they offer unlimited bandwidth but in fact impose caps on the outgoing transfer rates per hosting client. That is important so that no single client can suck up all the outgoing bandwidth.

Metastorage - Data object relational mapping layer generator
PHP Classes - Free ready to use OOP components written in PHP
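To make the queue-don't-deliver advice concrete, here is a hedged Python sketch of handing a message to the local MTA through the sendmail binary (the poster's class does the equivalent in PHP). The binary path is an assumption for a typical Unix host, and `-odq` is the classic sendmail flag for queue-only mode - verify both against your own MTA:

```python
import subprocess
from email.message import EmailMessage

SENDMAIL = "/usr/sbin/sendmail"  # assumed path; adjust for your system

def build_message(sender, recipient, subject, body):
    """Compose an RFC 5322 message with the stdlib email package."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def queue_message(msg):
    """Pipe the message into the local MTA's queue.
    -odq asks sendmail to only queue it (delivery happens on the next
    queue run); -t reads recipients from the headers; -i ignores lone dots."""
    subprocess.run([SENDMAIL, "-odq", "-t", "-i"],
                   input=msg.as_bytes(), check=True)

msg = build_message("news@example.com", "reader@example.com",
                    "Monthly newsletter", "Hello!\n")
# queue_message(msg)  # uncomment on a host with a local MTA installed
```

Because the script never waits on a TCP handshake or remote delivery, queueing thousands of messages this way is limited mostly by disk speed, which is exactly the point the post is making.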
JSBSim is an open source flight dynamics simulator. It can simulate the flight of balloons, gliders, prop planes, jets, rocket-powered jets, and rockets. Importantly, I can program GNC (guidance, navigation, control) logic to perform active stabilization during flight. JSBSim is a console program that takes XML files as input and outputs CSV files (which can be plotted in Matlab or Excel), linked to Simulink, or even streamed via telnet for remote "telemetry." A JSBSim model requires aircraft, engine, and script definitions. This is how I structured the Electron flight model in JSBSim:

Electron.xml <-- Rocket geometry, aerodynamic parameters (from RASAero), and engine config
NZ01.xml <-- Parameters of (imaginary) launch site in New Zealand
ElectronControlSystem.xml <-- GNC parameters
ElectronGuidanceExecutive.xml <-- Mission clock, guidance modes
ElectronFirstStageEffectors.xml <-- First stage engine gimbal definition
ElectronSecondStageEffectors.xml <-- 2nd stage engine gimbal definition
Electron.xml <-- Defines wind speeds, rocket staging, console output

I based the file organization and GNC structure on the Jupiter-246 concept model available in JSBSim, but otherwise everything was done nearly from scratch. The masses of each major rocket part, such as the payload shroud, body tubes, and engines, were specified as pointmass elements inside the mass_balance section of the aircraft definition, aircraft/Electron.xml. Only cylindrical and spherical (solid or hollow) shapes can be specified, so it ends up being an approximation of the geometry. The dimensions of each part are fairly well defined from Rocket Lab's web site, and I used masses previously estimated when trying OpenRocket. Engines are first defined in engine files (the engine and nozzle are separately defined in JSBSim). Here are example engine and nozzle files for the Rutherford engine.
The Isp was guessed from other high-performing kerosene liquid engines, and the mass flow rate was calculated using the relation mdot = F / (Isp * g), where F is the thrust of the engine given on the Rocket Lab web site (146.6 kN peak, or 16.3 kN/engine) and g is the acceleration due to gravity, 9.8 m/s^2. The mixture ratio 2.6 is a standard oxidizer-to-fuel mixture ratio for LOX/kerosene. Incidentally, LOX/kerosene is the same proven combination used on the Saturn V moon rocket and SpaceX's Falcon 9.

<isp> 350.0 </isp>
<maxthrottle> 1.00 </maxthrottle>
<minthrottle> 0.40 </minthrottle>
<propflowmax unit="LBS/SEC"> 10.4625 </propflowmax>
<mixtureratio> 2.6 </mixtureratio>
<nozzle name="Rutherford Nozzle">
<!-- area = Nozzle exit area, sqft. -->
<area unit="FT2"> 0.209 </area>

The NASA web site gives a nice introduction to the concept of specific impulse. Tanks are specified in the aircraft definition file, by giving the types (FUEL/OXIDIZER), locations, capacities, and drain locations. Tanks are "hooked up" to engines by specifying the tank number as feed elements in each engine.
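As a quick sanity check on the numbers above, the relation mdot = F / (Isp * g) can be evaluated directly. This is just a sketch; the constants are the post's values (16.3 kN per engine, Isp = 350 s, g = 9.8 m/s^2), not official Rocket Lab figures.

```python
G0 = 9.8  # m/s^2, the value used in the post

def mass_flow_rate(thrust_n, isp_s, g0=G0):
    """mdot = F / (Isp * g), returned in kg/s (SI units in, SI out)."""
    return thrust_n / (isp_s * g0)

KG_TO_LB = 2.20462

# One Rutherford engine: 146.6 kN / 9 engines ~= 16.3 kN
mdot_kg_s = mass_flow_rate(16_300.0, 350.0)
mdot_lb_s = mdot_kg_s * KG_TO_LB   # ~10.48 lb/s, matching propflowmax
```

The result lands within rounding distance of the 10.4625 LBS/SEC value in the engine file, which is a good sign the config is internally consistent.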
Trend Micro PC-cillin 2000 tells me that I have several files in Second Chance corrupted by a virus (TROJ_ISTBAR.A according to Trend Micro). It refuses to clean the affected files or quarantine them. Going by the instructions on the website, I've managed to delete most of the files by removing the programs, but I still have those infected files. In addition, it has managed to worm its way into two further files. Also, a program called WINLOGIN.exe is infected by what PC-cillin calls BKDR_SMALL.C. PC-cillin cannot clean this program, but it has managed to quarantine it. My problem is this: with TROJ_ISTBAR.A, can I safely ignore these infected files as I have removed all the other bits? If not, how do I remove them? Secondly, what can I do about the BKDR_SMALL.C virus? Can my PC cope without WINLOGIN.exe, or is there another way of removing the virus? See "click here" for the TROJ_ISTBAR.A virus info.

jaft

Trojans are usually harmless viruses; they are not destructive at all. I have PC-cillin 2003 and have discovered many trojans, but they have not infected or corrupted any of my files on drive C. The best scanner to use at the moment is AVG; go to click here and download a trial version.

Here's how to do what Powerless says: click here. Don't forget to turn it back on again!

1) These files are NOT in System Restore but in another program called Second Chance (a sort of early version of System Restore by another company). It's been assumed that you are running ME or XP, but you have not confirmed or denied this. You need to delete all the Second Chance data. Right-click the blue circle icon in the system tray (bottom right of the screen) to get a menu. Somewhere in there you can delete all but the latest backup. The trojans and viruses have been backed up by this software, and although your (out of date) AV software can detect the viruses in the Second Chance folders, it can't delete them as they are protected.
TROJ_ISTBAR.A is a homepage hijacker - not much to worry about. BKDR_SMALL.C will be dealt with by the AV. Personally I would delete the Second Chance data, download AVG 6 (click here for free), uninstall your out of date PC-cillin and install AVG. Then sweep the entire PC. Remember, anything in C:\_RESTORE\ARCHIVE etc. is just a backed up copy of the virus made by this software. As long as the files don't exist in C:\windows etc., you are OK.

Firstly an apology: I should have said that I was running Windows ME. Thanks to all who responded, especially Jester, as the solution suggested seems to have got rid of all the occurrences.
For the last few months, a team and I have been aggressively competing* in the 2nd Social Learning Strategies Tournament. Here's what it's all about: Suppose you find yourself in an unfamiliar environment where you don't know how to get food, avoid predators, or travel from A to B. Would you invest time working out what to do on your own, or observe other individuals and copy them? If you copy, who would you copy? The first individual you see? The most successful individual? The most common behaviour? Do you always copy, or do so selectively? If you could refine behaviours, would you invest time in that or let others do it for you? What if you then migrated - would you rely on your existing knowledge, or copy the locals?

The team consisted of a rocket scientist, a mathematician, a genetic engineer, and me. Fortunately, the other three had enough brainpower to help us put together something interesting to submit. The deadline for submission was Feb 28, 2012. Our team ended up using Bayesian economics to put together a competitor. If you're interested, the abstract overview is below.

Bayes_Bots makes decisions based on the expected payoff of the moves in her arsenal: Observe, Innovate, Exploit, and, in the appropriate extension, Refine. To decide which move to use, Bayes_Bots will look at the distribution of the learned payoffs from Innovate and Observe. Bayes_Bots uses Bayesian inference to learn these distributions: she assumes that the values learned from Innovate and Observe can be modeled by an exponential distribution; given a distribution on the payoffs associated with each arm, the means of the observed distributions will follow a Beta distribution, while the payoffs from Observe follow an exponential distribution. Bayes_Bots will discount older information as less reliable, using Pc as the probability that a given strategy's payoff changes. Bayes_Bots will Innovate rarely.
However, she will always Innovate on her first turn; this will help provide new raw information to the collective population of agents. Observe_who. In the observe_who strategy, Bayes_Bots will not change her strategy. The assumption is that information is equally valuable from all other agents in the field, regardless of their age, number of times they’ve been observed, etc. Refine. Bayes_Bots will Refine one of her high-payoff moves at least once, in order to understand what benefit that might have to her overall expected payoffs. Otherwise, Bayes_Bots will not change her strategy; if other agents refine their strategies, Bayes_Bots will learn the refined payoff. Localization/Demes. When Bayes_Bots changes to a new deme, she will discard information about the distribution of payoffs from observed strategies. She will retain information regarding the distribution of payoffs from innovated strategies, as well as the distribution of the means of the observed strategies, as these pieces of information are assumed to be useful across all demes. If you want to read the full entry, let me know – I’m happy to share out the doc. It also has our very complex math and equally complex Python code.
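The abstract above can be made concrete with a toy sketch. To be clear, this is not our tournament entry (which used proper exponential/Beta posteriors); it only illustrates two of the ingredients described: discounting older observations by the change probability Pc, and always innovating on the first turn. All names and the discounting scheme are invented for illustration.

```python
class BayesBot:
    """Toy sketch: pick the move with the highest discounted mean payoff.

    `p_change` plays the role of Pc: an observation made `age` rounds ago
    is weighted by (1 - Pc)**age, the chance the payoff hasn't changed.
    """

    def __init__(self, p_change=0.05):
        self.p_change = p_change
        self.history = {"INNOVATE": [], "OBSERVE": []}  # lists of (age, payoff)

    def record(self, move, payoff):
        # age that move's existing observations, then store the new one at age 0
        self.history[move] = [(a + 1, p) for a, p in self.history[move]]
        self.history[move].append((0, payoff))

    def expected(self, move, prior=1.0):
        obs = self.history[move]
        if not obs:
            return prior
        weights = [(1.0 - self.p_change) ** age for age, _ in obs]
        return sum(w * p for w, (_, p) in zip(weights, obs)) / sum(weights)

    def choose(self):
        if not self.history["INNOVATE"] and not self.history["OBSERVE"]:
            return "INNOVATE"  # always innovate on the first turn
        return max(("INNOVATE", "OBSERVE"), key=self.expected)
```

A real entry would compare these expected payoffs against Exploit's known repertoire as well; the point here is just the discount-and-compare loop.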
wikiHow is a "wiki," similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, volunteer authors worked to edit and improve it over time. This article has been viewed 13,576 times.

This article presents one solution for arranging folders that is easy to comprehend and manage for IT staff using the Information Technology Infrastructure Library (ITIL).

1. Create a common folder. The first folder has to be common for all IT employees and can hold information such as common calendars, an introduction for new workers, the role of the IT department, a database of common vocabulary, and related external stakeholders and their contacts (company name, position, addresses and phones). This folder can be named "General" or "Common".
2. Add a management folder. This second folder is devoted to the managers of the department; therefore, it has to include job descriptions, plans, reports and handovers. This folder can be named "Management", "Governance", "Leadership" and so on.
3. Make a folder for formal documents. In some companies, there is a special place for approved formal documents (models, procedures, rules, instructions and training materials). If there is no such place, the third folder can serve this purpose, named "Framework", "Approved", or "Official". Note, this folder has to have "read only" access rights.
4. Include a folder for projects. It can contain sub-folders for all projects performed inside the department and can be named "Projects".
5. Use subsequent folders for all the necessary processes. These can be devoted to every ITIL process that exists in the department; for example, Change Management, Problem Management, etc. In some cases, they can be grouped by areas: Operation, Transition and so on. Process folders can include any folders needed for the process, like Minutes of Meeting, Reports, Requests, Issues, etc.
6. Don't forget about an archive.
The last folder has to be dedicated to archived or obsolete information.

- In order to arrange the folders, each folder name can have numbering: 01, 02, 03....
- Make the documents categorised/tagged and searchable. Nobody is going to dig layers deep to find documentation that, in the main, they are not that interested in. It needs to be fast to find and easy to identify.
- Enforce permissions. There is no point in having authorised, definitive documents if everyone and anyone can copy/edit/move them without authorisation.
- Communicate the structure. Sure, it is organised nicely, but if people are confused about which folder things go in, they will soon get lost.
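As a sketch, the structure from the steps above can be created with a short script. The folder names and the `lock_framework` option are illustrative choices, not part of ITIL itself.

```python
from pathlib import Path

# Numbered per the tip above so folders sort in the intended order
FOLDERS = [
    "01 General",
    "02 Management",
    "03 Framework",        # approved documents: should be read-only
    "04 Projects",
    "05 Change Management",
    "06 Problem Management",
    "07 Archive",
]

def create_structure(root, lock_framework=False):
    """Create the ITIL folder layout under `root`.

    If lock_framework is True, strip write permission from the
    approved-documents folder ("read only" access rights).
    """
    root = Path(root)
    created = []
    for name in FOLDERS:
        d = root / name
        d.mkdir(parents=True, exist_ok=True)
        created.append(d)
    if lock_framework:
        (root / "03 Framework").chmod(0o555)
    return created
```

In practice the read-only rule would be enforced through the file server's ACLs rather than a chmod, but the script shows the layout at a glance.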
Artificial Intelligence is built from several technologies that allow machines to sense, comprehend, act, and learn like humans. Because AI isn't simply one thing, everyone's concept of AI may be different. Natural language processing and machine learning both fall under the umbrella of Artificial Intelligence. Each is evolving in its own way and, when used in conjunction with data, analytics, and automation, can assist organizations in achieving their goals, whether that is improving customer service or optimizing the supply chain. In the AI Program, you'll learn fundamental and advanced artificial intelligence techniques, such as logic and knowledge representation, probabilistic models, machine learning, and probabilistic reasoning.

Definition of Artificial Intelligence

A machine using Artificial Intelligence (AI) can display human-like qualities such as reasoning, learning, planning, and creativity. AI allows machines to sense their surroundings, deal with what they perceive, solve issues, and act to accomplish a specified goal. Sensors such as cameras and microphones receive data, which the system processes before acting on it. AI systems can also self-adapt to a certain extent by analyzing the results of prior actions.

Artificial vs. Human Intelligence

So, what distinguishes Artificial Intelligence (AI) from the human intellect? Humans invented Artificial Intelligence and the algorithms that enable it to function. While computers can learn from their surroundings and adapt or grow, humans construct them at the end of the day. Multitasking, memory, social interaction, and self-awareness are significantly more advanced in human intelligence. Artificial Intelligence lacks a general intellect of its own, which makes it distinct from human intelligence: it cannot automate multitasking or form autonomous relationships. Machine learning and cognitive learning will never be the same thing.
Artificial Intelligence (AI) can match human intelligence only in speed and accuracy. A machine cannot teach itself human thought, no matter how smart it is or how many formulas you apply.

A brief explanation of how AI works

Artificial Intelligence (AI) is a discipline of computer science used to replicate human intellect in computer systems. Several hypotheses and definitions have emerged from the general goal of artificial intelligence: to make machines as intelligent as humans. Many people consider artificial intelligence to be technology that can make machines intelligent. In a nutshell, "AI receives inputs from the environment, and acts appropriately." AI is commonly framed in terms of four approaches:
- Thinking humanly
- Acting humanly
- Thinking rationally
- Acting rationally
The concepts of "thinking" and "acting" are closely linked to the mental processes of reasoning and processing, respectively.

Is there a way to explain how AI works? To build an AI system, one must carefully reverse-engineer human characteristics and talents into a machine, and then use that machine's computing prowess to go beyond what humans can do. To comprehend how artificial intelligence might be applied to diverse businesses, one must first grasp its multiple sub-domains:
- Machine Learning (ML): Machine learning teaches a computer to draw conclusions and judgments based on previous experience. It looks for patterns, analyzes past data, and draws conclusions about current data without relying on human knowledge or experience. Businesses can save time and money by relying on automated data analysis to make better decisions.
- Deep Learning: Deep learning is a technique within ML. It teaches a computer to classify, infer, and forecast outcomes from inputs.
- Neural networks: Neural networks are based on principles similar to human neural cells.
Essentially, they are a collection of algorithms that mimic how a human brain processes information.
- Natural Language Processing (NLP): In NLP, a machine can read, comprehend, and interpret human language. Only when the system understands what the user intends to express can it reply appropriately.
- Computer vision: Computer vision algorithms attempt to decipher an image by dissecting and analyzing the components of the objects in the picture. This aids machine learning, as the system can better classify and learn from a collection of images.
- Cognitive computing: Cognitive computing algorithms analyze text, speech, images or objects in a human-like manner and then attempt to produce the appropriate output.

AI in machines and computers is already widespread, and it is safe to believe that things will only get better. Future AI concepts frequently feature autonomous systems combining machine learning, cognitive skills, neural networks, and language processing. If General AI arrives, it will be everywhere, and its enhanced computational capacity will make the world run faster. For example, a chatbot that understands sarcasm could tailor its service and answers to meet or exceed customer expectations, which benefits the company. Many people believe AI will replace human occupations with robots, but it will also create new jobs dealing with and teaching machines. Jobs like data scientist, app developer, and social media director will become more common as AI advances. Apps that track student progress and identify learning issues can assist teachers in knowing whom to help and how to help without missing anyone.

Actual business value comes from AI: Innovators have long looked to artificial intelligence (AI). Now that the enablers are in place, enterprises can understand how AI can add value.
Automation reduces costs and increases the consistency, speed, and scalability of business operations; some Accenture clients save up to 70% in time. But the ability of AI to promote growth is more intriguing: companies that scale their AI efforts get a 3X return compared to those stuck in the pilot stage. So it's no surprise that 84% of C-suite executives believe AI will help them grow. Aside from these advantages, AI also carries risks for justice, security, privacy, and safety. AI is a powerful force that can change the world, but it also has the potential to do harm. As previously stated, Artificial Intelligence is a set of technologies designed to reduce human labor. Despite recent progress, the AI industry still has a long way to go before creating fully functional human-like machines that can accomplish anything in any circumstances.
Arla Rosenzweig & Lin Wang | Performance TPM, Performance Engineer

As engineers, we intuitively know that a faster app is a better experience for Pinners, but we also have proof that when web pages load faster, user growth improves. Last year, we doubled down on performance across our platforms so that wherever our 200 million monthly active users were around the world, and regardless of their device, Pinterest would help them discover and do what they love — hopefully without waiting too long. Globally, connection speeds vary wildly across devices, but two things were consistent:
- The majority of smartphones worldwide are Android devices (just under 90 percent).
- Pinterest was consistently slower on Android.
With more than 75 percent of Pinterest signups coming from outside the U.S., we needed to speed up our Android app and maintain that speed even under constant development and new feature additions that might slow things down. This post covers the four key lessons we learned about improving performance at Pinterest.

1. Defining a user-centric metric ensured we actually improved the experience.

We thought carefully about how we could make the biggest difference based on what we know matters the most to Pinners. For every main action on Pinterest — like loading the home feed, tapping to see a Pin closeup or searching — we defined a metric called Pinner Wait Time (PWT). Each metric measured the time from when a Pinner initiated an action (e.g. tapping a Pin) until the action was considered complete from the Pinner's perspective (i.e. the Pin closeup view loaded).

2. Preventing regressions is the #1 way to keep an app fast.

Once we had a baseline measurement for each interaction, we wanted to make sure we weren't slowing things down. Seeking testing precision and alert capabilities, we chose NimbleDroid, a cloud-based continuous performance testing tool that easily integrated with our process and produced actionable results.
With their framework, we created regression tests that run on Android builds generated from code changes and alert us when a PWT metric has increased beyond designated thresholds. When there were no alerts, we knew our app was good to go, boosting confidence in merging pull requests and releasing to Pinners. When we did receive alerts, we resolved the regressions in ~21 hours, far less than the multiple days it previously took to identify and fix a regression. Since these tests ran on builds with all A/B experiments disabled, we also needed a way to detect the impact of new experiences. We added performance data and more regression alerts to our in-house experiment dashboard, which allowed us to identify which experiments were slowing down the app and see how the latency increase affected Pinners' engagement across various surfaces. Between NimbleDroid and experiment alerts, we detected ~30 slowdown regressions over the course of six months. Since our alert threshold was 100 milliseconds, these regressions, if released to our production app, would have accounted for at least three seconds of additional wait time. Aggregated over the hundreds of millions of actions taken each day, that had the potential to be a significant amount of time wasted. By catching regressions early in the cycle, they were never released to Pinners.

3. Develop performance optimization best practices.

Beyond regression tests, NimbleDroid also provided the opportunity to profile each interaction. For example, we found that while loading the Pin closeup view, we weren't using the data already available from the initial feed response to display all the necessary information. This oversight was causing an extra round-trip and slowing down the overall load time. Submitting a fix and running it through NimbleDroid tests confirmed the improvement made Pin closeup load times 60 percent faster.
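The threshold-based alerting described in lesson 2 boils down to a simple comparison per interaction. The sketch below is a conceptual illustration only, not NimbleDroid's or Pinterest's actual tooling; the function name and data shapes are invented.

```python
THRESHOLD_MS = 100  # the alert threshold mentioned in the post

def find_regressions(baseline, build, threshold_ms=THRESHOLD_MS):
    """Compare per-interaction Pinner Wait Times (ms) between a baseline
    build and a candidate build; return interactions whose PWT grew by
    more than the threshold, mapped to the size of the regression."""
    alerts = {}
    for interaction, base_ms in baseline.items():
        new_ms = build.get(interaction)
        if new_ms is not None and new_ms - base_ms > threshold_ms:
            alerts[interaction] = new_ms - base_ms
    return alerts
```

Run on every pull-request build, an empty result means the change is safe to merge; any entry becomes an alert to investigate before release.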
In line with this, we configured NimbleDroid to send alerts when things became faster, which helped validate optimizations and bug fixes like the above scenario. These alerts also helped catch a functional regression in our search feature that had passed QA testing. We also wanted to improve the experience of the "cold start" — the time it takes for the app to open and the home feed to load for the first time. Through tests, we found that reflective type adapters in Gson are expensive for the cold start of an application, so we generated type adapters at compile time and made the JSON parsing more efficient, saving ~30 milliseconds of a Pinner's cold start time. Visibility into speed is crucial, especially for key interactions that make Pinterest so useful. We rely on NimbleDroid for insight into how we're doing against our goals to not only maintain but also improve PWT across our platforms. Exploring these optimizations reinforced the importance of parsing data efficiently, and that the fewer (and smaller) network requests we make, the better. Whether loading the initial home feed or a Pin closeup, the more we can use locally stored data, the faster a Pinner will be able to see the content they're looking for. This effect is compounded in international markets, where each network round-trip adds extra time to the overall PWT.
Encouraged by these results, we're beginning to focus our performance efforts on other interactions to continue exploring the relationship between performance and engagement. We'll continue to monitor every critical user flow for every pull request to detect any performance regressions. The peace of mind from preventing regressions gives us the space and time to focus on what matters most to us — delivering an amazing experience to Pinners around the world.

Special thanks to Effie Barak, Ryan Cooke, Mallika Potter and Cal Rueb
Our community gets into harness, we add chained behaviors, and LiveCode Community is set free

It's been a couple of months since our last edition of revUp. My apologies! We're back on track now, and you can expect your regular twice monthly dose of revUp to resume. We've been extremely busy in the interim, not least on producing LiveCode 6.1. There are a number of exciting things about 6.1. Perhaps the most interesting and positive aspect of this release is that you, the community, have got involved. 7 features and 2 fixes incorporated in 6.1 were provided by dedicated community members working on the open source engine. Not perhaps earthshaking features in themselves, but handy if you need them, and a great omen for times to come. Just to pick out a couple, Monte Goulding among other items contributed Locking Group Updates, a feature which speeds up custom controls composed of many objects, as well as "effective visible", allowing you to determine which objects are genuinely visible on-screen at any given time. Jan Schenkel gave us "Getting the Page Ranges of a Field" as well as some additional statistical functions, and Mark Wieder fixed a bug with "is a color". We're very grateful to all of you! If you want to get involved, there is a comprehensive guide here, and don't forget to drop by the forums, here. Perhaps the biggest new feature in 6.1 is chained behaviors. I'm going to let Ben Beaumont explain to you why you should care about this in your apps.

We introduced behaviors to allow objects to share code. Take for example a platform game where you collect coins. Each of these coins requires the same script in order to function. You can define a 'behavior script' and all instances of the coin can use it. Making a change to the behavior script affects every coin. Behaviors help developers to put more structure into their code and avoid repetition.
Repetition is bad because if you need to change some code that is repeated throughout your project, you have to apply that change in many places. Imagine having 100 coins, all with the same script. Changing the script means going through and updating the script for all 100 coins!

Chained behaviors take this code structuring one step further. A behavior script can now have a behavior of its own, which in turn can have a behavior of its own, and so on. Let's return to our game example. This time imagine your game involves collecting coins, stars and fruit while trying not to collect poison. Each of these objects has lots in common:
- When a user interacts with the object it should be removed from the screen
- When a user collects the object their score should change
- When a user collects the object a sound should play
However, these objects also contain differences:
- Each object may move / animate in a different way
- Each object may have a different score associated with it, adding or subtracting
- Each object may have a different sound associated with it
Using chained behaviors, you could structure your app as follows, which would ensure that no code is repeated in more than one place.

LiveCode Community Activation Removed

In LiveCode 6.1 Community Edition, you no longer need to activate the program after downloading and installing it. This makes it more accessible, faster to install, and removes a barrier to users adopting it. You are still offered the opportunity to create an account if you wish to, but it is not required. Students can easily install it in a school environment, making getting LiveCode into more and more educational establishments even easier. You can get the full release notes for 6.1 here, with all the new features and fixes that have been added.

Heather Laine is Customer Services Manager for RunRev Ltd
Ben Beaumont is Product Manager for RunRev Ltd.
Citation is an essential part of scientific publishing and, more generally, of scholarship. It is used to gauge the trust placed in published information and, for better or for worse, is an important factor in judging academic reputation. Now that so much scientific publishing involves data as well as software, the question arises as to how they should be cited, and in particular, how citation can be automated so that the citation is served up along with the extracted data or software. Data citation addresses the question of how data that is stored in a repository with complex internal structure, and that is subject to change, should be cited. The goal of this research is to develop a framework for data citation which takes into account the increasingly large number of possible citations; the need for citations to be both human- and machine-readable; and the need for citations to conform to various specifications and standards. A basic assumption is that citations must be generated, on the fly, from the database. The framework is validated by a prototype system in which citations conforming to pre-specified standards are automatically generated from the data, and tested on operational databases of pharmacological (IUPHAR) and Earth science (ES3) data. The broader impact of this research is on scientists who publish their findings in organized data collections or databases; data centers that publish and preserve data; businesses and government agencies that provide on-line reference works; and the various organizations that formulate data citation principles. The research also tackles the issue of how to enrich linked data so that it can be properly cited. In addition to IUPHAR and ES3, we are working with the following data sources:
- Eagle-i, a resource discovery dataset for translational science research.
Eagle-i has clearly specified data citation requirements, and automatically serves up persistent identifiers (Eagle-i IDs) for resources, but does not automatically generate the citation. We have downloaded the RDF dataset and have created an interface which, given the Eagle-i ID, will render the citation in human-readable format, with optional XML/BibTeX/RIS exports. We have hosted this on AWS and are testing with Eagle-i developers.
- Reactome, a curated and peer-reviewed pathway database whose goal is to support basic research, genome analysis, modeling, systems biology and education. Reactome also has clearly specified data citation requirements, but does not automatically generate the citation. We have downloaded XML versions of the dataset and have developed citation rules reflecting these requirements.

Software is another important new form of research product which should be cited. For citation to be effective, we need tools to automatically generate citations. Our model for software citation with version control is based on the notion of a citation function, together with an implementation (a browser extension and a local executable tool) that integrates with Git and GitHub.
- The browser extension allows citations to be generated for any file/directory in any version of a software repository, and added/modified/deleted in the current version by project collaborators.
- The local executable tool enables citations to be added/modified/deleted and managed through Git functions such as fork/merge/copy.
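To make the idea of generating citations on the fly concrete, here is a minimal sketch of a citation function that renders a database record in more than one output format. The record fields and styles are invented for illustration and do not reflect the actual IUPHAR, ES3, or Eagle-i citation rules.

```python
def make_citation(record, style="text"):
    """Generate a citation for a database record on the fly.

    `record` is assumed to carry whatever fields the citation rule
    needs; a real system would derive them from the database view
    being cited rather than take a flat dict.
    """
    if style == "text":
        return "{authors} ({year}). {title}. {database}, id:{id}.".format(**record)
    if style == "bibtex":
        return (
            "@misc{{{id},\n"
            "  author = {{{authors}}},\n"
            "  title  = {{{title}}},\n"
            "  year   = {{{year}}},\n"
            "  note   = {{{database}}}\n"
            "}}"
        ).format(**record)
    raise ValueError(f"unknown style: {style}")
```

The point of the design is that nothing is stored: the citation is a function of the current database state, so it stays correct as the underlying data changes.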
Be more resilient to additions to ipc interface

At runtime i3ipc-rs shouldn't depend on i3 version == X. It should depend on i3 version >= X. In theory this should be possible since i3 aims to continually add things to its ipc interface while staying backwards compatible. In order to support this we need to change the way we work with the parsed JSON. It's OK to ask for the value of a field, but it's not OK to ask for the value to lie in a fixed set of possibilities, which we do when constructing many enums in events.rs. Rework this library so it doesn't break after routine i3 upgrades.

Three options for dealing with unknown values in an enum type T:
1. Ignore them
2. Report them as None in an Option<T>
3. Report them as a T.Unknown

I don't like the first since it would imply ignoring the entire data structure containing it. The second is alright but syntactically noisy to use, and it's not obvious what is going on based on the type signature. I prefer the explicitly named Unknown variant.

Is option 3 the path going forward, or is that still up for discussion? Would it make sense to add a string to Unknown for the actual event type? i3 added a "workspace:reload" event that I would like to avoid crashing on, since it brings down my listening daemon any time I reload i3 to update its configuration. I'm happy to help move this along, just let me know what I can do :)

Thanks @soumya92! Would you be interested in tackling https://github.com/tmerr/i3ipc-rs/issues/10 ?

As far as this ticket goes:

Is option 3 the path going forward, or is that still up for discussion?

So far I'm leaning toward option 3, but it's totally up for discussion if you have any thoughts on it.

Would it make sense to add a string to Unknown for the actual event type?

I think I like the idea of defining Unknowns without a string, and whenever we hit one, dumping the corresponding JSON into a log so that it's easily debuggable. Would that make sense?
I added a comment on #10, someone else already fixed it in their fork. If possible, you could just merge those changes. I was just wondering if you had any reservations or thoughts on a different solution before I dove into adding Unknowns everywhere. I'm fine with defining Unknown without a string. The reason I suggested adding a String to Unknown was to mimic the existing pattern in https://github.com/tmerr/i3ipc-rs/blob/3bcad86a375eb0dc4f041b0dea8fc0ece2632b2b/src/common.rs#L111 I would be slightly concerned about writing to a file, because then we would have to provide some way of configuring where the file goes. It doesn't feel right to have a library create a log file... No I wouldn't put much weight on what I did there. If we decide on Unknown we should put that in place of that Undocumented(String). Also, good point about logging, it would be better to use http://doc.rust-lang.org/log which leaves it up to the library user. If no logging implementation is selected, the facade falls back to a "noop" implementation that ignores all log messages. The overhead in this case is very small - just an integer load, comparison and jump. From the sounds of it Unknown + logging would be fine, I would be happy to merge a fix with that 👍 As long as we tell developers how to hook up logging in the README we can still get the debugging info needed to detect missing functionality in i3ipc-rs or i3's docs Sounds good! I'll give it a shot and send you a PR. Using the log crate seems fine, especially since we can add a separate target for i3ipc.
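For illustration, the Unknown-variant-plus-logging pattern discussed in this thread can be sketched in a few lines. This is shown in Python rather than Rust for brevity, and the event names are examples rather than i3's full set:

```python
import enum
import logging

log = logging.getLogger("i3ipc")

class WorkspaceChange(enum.Enum):
    # Example values only, not i3's complete list of workspace changes.
    FOCUS = "focus"
    INIT = "init"
    EMPTY = "empty"
    RELOAD = "reload"    # the event that previously crashed the daemon
    UNKNOWN = "unknown"  # catch-all for values added by future i3 versions

def parse_change(raw):
    """Map a raw change string to an enum member without ever raising."""
    try:
        return WorkspaceChange(raw)
    except ValueError:
        # Log the unrecognized payload for debugging instead of crashing,
        # mirroring the Unknown-plus-logging approach agreed on above.
        log.warning("unknown workspace change: %r", raw)
        return WorkspaceChange.UNKNOWN
```

The key property is that a routine i3 upgrade adding a new change string degrades to a logged `UNKNOWN` rather than a crash.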
Python is a high-level programming language that is widely used in various fields of computer science and beyond. It is an interpreted language, meaning that it does not require a compiler to run and can be executed directly from the source code. Python is known for its simplicity and ease of use, making it a great language for beginners and experienced programmers alike. Python has a wide range of applications, from web development to scientific computing, data analysis, artificial intelligence, and machine learning. Due to its versatility, many companies and organizations have adopted Python as their language of choice for developing software and applications. In this article, we will explore some of the most popular uses of Python and why it is such a useful language to learn.

Why Python is a Must-Have Skill in Real Life: Exploring its Practical Applications

Python is one of the most popular programming languages in the world, and its popularity is only increasing. It's a versatile language that can be used for a variety of purposes, from web development to data analysis. In this article, we'll explore some of the practical applications of Python and why it's a must-have skill in real life.

Web Development: Python can be used to build web applications, websites, and web services. It has robust frameworks like Django and Flask that make it easy to develop web applications quickly and efficiently. Python's syntax is also easy to read and learn, making it a great choice for beginners.

Data Science: Python is widely used in data science because of its powerful data analysis libraries like Pandas, NumPy, and Scikit-learn. These libraries make it easy to perform complex data analysis tasks like data cleaning, data visualization, and machine learning.

Artificial Intelligence: Python is also widely used in artificial intelligence and machine learning. Libraries like TensorFlow, Keras, and PyTorch make it easy to build and train complex machine learning models.
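The kind of model training these libraries perform can be illustrated with a framework-free toy example: fitting y = 2x by plain gradient descent. This is an illustration of the idea only, not of how TensorFlow or PyTorch work internally.

```python
def train_linear(xs, ys, lr=0.05, steps=500):
    """Fit y = w*x with plain gradient descent on mean squared error."""
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / n
        w -= lr * grad
    return w

# Data generated by y = 2x, so the fitted weight should converge to 2.
w = train_linear([1, 2, 3, 4], [2, 4, 6, 8])
```

Real frameworks automate exactly this loop (gradients, updates) over millions of parameters instead of one.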
Scripting: Python's scripting capabilities make it a popular choice for automation tasks. It can be used for tasks like web scraping, file manipulation, and system administration.

Game Development: Python is also used in game development. Pygame is a popular library that can be used to create 2D games, and Python is often used for tooling and build scripts alongside larger game engines.

Finance: Python is widely used in finance for tasks like algorithmic trading, risk management, and data analysis. Libraries like Pandas make it easy to perform complex financial analysis tasks.

In conclusion, Python is a must-have skill in real life. Its versatility, ease of use, and powerful libraries make it a great choice for a variety of tasks, from web development to data analysis to game development. Learning Python will give you a highly valuable skill that can be used in a wide range of industries.

Discover the Top 3 Benefits of Python Programming Language

Python is a high-level, interpreted programming language that is widely used in the software industry. It is renowned for its simplicity and versatility, making it an excellent choice for developers of all levels of experience. Here are the top 3 benefits of using Python:

1. Easy to learn and use: Python has a simple syntax that is easy to learn and read. Its clean code structure makes it easier to maintain, and it is highly portable, meaning it can run on any platform. Additionally, Python has a vast and supportive community that provides excellent resources for learning and developing in the language.

2. Versatile and flexible: Python is a highly versatile language that can be used for a wide range of applications, including web development, data analysis, artificial intelligence, and scientific computing. It has a vast selection of libraries and frameworks that make it an excellent tool for developers working on complex projects.

3.
Efficient in practice: while Python itself is not an especially fast language, its core data-processing libraries are backed by optimized native code, making it a practical choice for developers who need to process large amounts of data quickly. Its automatic memory management and garbage collection also make it easier to work with large datasets without manual bookkeeping.

Overall, Python is a powerful programming language that offers a range of benefits to developers. Its simplicity, versatility, and efficiency make it an excellent tool for a wide range of applications. So, if you're looking to learn a new programming language, Python should definitely be on your list!

Mastering Python: A Guide to Learning Time

Are you looking to learn Python but struggling to find the time to do so? In this guide, we'll share tips and tricks for mastering Python in your spare time. Python is a popular programming language that is used for a variety of applications, including web development, data analysis, and artificial intelligence. It's popular because of its simplicity and flexibility, making it easy for beginners to learn and powerful enough for experts to use.

The first step to mastering Python is to set clear goals for yourself. What do you want to achieve with Python? Do you want to build a web application, automate tasks, or analyze data? Once you have a clear idea of your goals, you can create a plan to achieve them.

Create a Learning Plan

Creating a learning plan can help you stay organized and focused on your goals. Start by breaking down your goals into smaller, manageable tasks. For example, if you want to build a web application, you might start by learning basic Python syntax and then move on to web frameworks like Django or Flask.

Make Time for Learning

One of the biggest challenges of learning Python is finding the time to do so. Try to set aside a consistent block of time each day or week for learning. This could be as little as 30 minutes a day or a few hours on the weekend.
Consistency is key when it comes to learning a new skill.

Use Online Resources

There is a wealth of online resources available for learning Python, including tutorials, courses, and forums. Some popular resources include Codecademy, Udemy, and Stack Overflow. Make use of these resources to supplement your learning and get answers to your questions.

Join a Community

Joining a Python community can provide you with support and encouragement as you learn. There are many online communities, such as Reddit's r/learnpython, where you can ask questions, share your progress, and connect with other learners.

Practice, Practice, Practice

Finally, the key to mastering Python is practice. As you learn new concepts, be sure to practice them by building projects and experimenting with code. Not only will this help reinforce what you've learned, but it will also give you a sense of accomplishment and motivation to keep going.

Learning Python takes time and effort, but by setting clear goals, creating a learning plan, making time for learning, using online resources, joining a community, and practicing regularly, you can become a Python master in no time.

Java vs Python: Which is the Better Programming Language?

Java and Python are two of the most popular programming languages in the world today. Both have their own unique features, advantages, and disadvantages. So, which one is better? Let's take a look at the key differences between Java and Python to help you decide which programming language is best for your needs.

When it comes to performance, Java is generally faster than Python. Both languages compile to bytecode, but Java's virtual machine JIT-compiles that bytecode to native code at runtime, which typically makes it faster than Python's interpreted execution. However, Python's ease of use and simplicity make it a more suitable choice for small projects and rapid prototyping.

Java is a statically typed language, which means that the data types of variables are defined before compilation.
Python, on the other hand, is dynamically typed, which means that the data type of a variable is determined at runtime. This makes Python code shorter and more readable, but Java code is more reliable and easier to maintain.

Ease of Use: Python is often considered one of the easiest programming languages to learn and use. Its simple syntax and readability make it a popular choice for beginners. Java, on the other hand, has a steeper learning curve, but once you learn it, it can be a powerful tool for developing complex applications.

Java is commonly used for developing enterprise-level applications, such as banking systems and large-scale e-commerce websites. Python, on the other hand, is often used for scientific computing, data analysis, and web development. Both languages have strong communities and a wide range of libraries and frameworks to support their respective applications.

Both Java and Python have their pros and cons, and the choice ultimately depends on your specific needs. Java is faster and more reliable but has a steeper learning curve, while Python is easier to learn and use but may not be as fast or reliable. Consider your project requirements and personal preferences before choosing a programming language.

Python is a versatile programming language that can be used for a wide variety of applications. From web development to data analysis, machine learning, and artificial intelligence, Python is a valuable tool for developers and businesses alike. Its ease of use, readability, and vast community support make it an ideal choice for both beginners and experienced programmers. As technology continues to evolve, the demand for Python expertise is only expected to increase. Therefore, learning Python can be a valuable investment for anyone interested in pursuing a career in programming or technology.
I see that there is quite an important discussion about whether we should analyse a complete dataset, whether we over-score a partial dataset that is completely open, etc. This is a BIG POINT and we should discuss it here, because our new survey especially addresses this. Let me explain: with the new index we want to encourage governments to publish all data in one dataset - i.e. in one file containing all fields/characteristics we want to see in there. This is our reference point - this is why we have dataset definitions and we clearly only want to evaluate the datasets that meet all our requirements - hence using Q5 (which we consider integrating into Q3). However, there are a lot of cases where these data are not provided in one dataset.

To answer @carlos_iglesias_moro's comment (how Q1 and Q5 relate to each other): we decided that it would be a radical step to only measure a dataset that contains all our requirements (e.g. a spreadsheet containing water pollutants of all water sources in one file, etc.). To be rigorous we would have to ask “Are all data included in one file?” and, if that's not the case, stop the survey - because actually we only want to analyse the openness of datasets that meet all our requirements. We decided against this step and also accept the evaluation of partial datasets. And this opens two issues discussed by @RouxRC: 1) do we “over-score” datasets if they are only partial (openness vs. “completeness”), and 2) shall we analyse multiple datasets or focus on one partial dataset?

To point 1 - we definitely only want to evaluate our reference dataset (meeting all of our criteria). If there is no such dataset, we still want to see if there are other datasets we could evaluate - to understand how open these datasets are, to acknowledge first steps taken by government in the right direction, and to sensitize our submitters to the fact that they are only looking at a partial dataset while still giving them the chance to evaluate it.
But we also want to encourage governments to publish a complete dataset - and therefore we want to explicitly flag in the overall score something like this: “THE SCORE ONLY APPLIES TO A PART OF THE DATA - THE DATASET CANNOT BE REGARDED AS FULLY OPEN”. Alternatively we can have a lower score in total - e.g. subtracting 50% of the score for partial datasets, or something similar. The point is that we exactly DO NOT want to communicate that the dataset is fully open if it does not even meet our criteria - but we do not want to cut off datasets that are partial either. The critical point here is how we can incentivize governments to publish complete datasets. Key is to have a clever way of flagging this - and a disclaimer might not be enough if we display a 100% score - so considering negative scores might be an option here, which we will consider for our weighting.

To point 2 - in past editions we allowed several datasets to be analysed, but I think it is methodologically a problem to evaluate multiple datasets because we compare apples and oranges:
- In Romania we found several datasets for national statistics - one was free but not machine-readable, another was available in bulk but had to be paid for - in the end the dataset got a 100% score, because we added up partial scores into one overall score. http://index.okfn.org/place/romania/statistics/
- National maps of the UK are not complete, but still we evaluated them as bulk - they got a 100% score even without containing Northern Ireland. http://index.okfn.org/place/united-kingdom/map/
- In Cameroon we found company registers for several types of enterprises - the dataset got a score of 0% because every question was answered with “Unsure”. http://index.okfn.org/place/cameroon/companies/

So we had several cases where partial datasets were treated differently - all leading to different scores.
But the case of Romania shows that it does not make sense to add up scores for different datasets, because it makes our evaluation arbitrary again - what if one dataset contains some characteristics and is free, while the complete dataset has to be paid for? We cannot simply add up their scores, because in the end the message is “A specific dataset is open to a certain extent”. I agree, @RouxRC, that it makes sense to document alternative datasets. This is also why we use Q2.2: we want to see where datasets can be found. We could repurpose question 2.2 and use it to list alternative datasets - a comment section could be used so submitters can describe alternative datasets (re: @cmfg) and tell us their rationale for why they only looked at one specific dataset (which should most likely be the fact that this dataset was the one most compliant with Q3).
This page is part of the web mail archives of SRFI 101 from before July 7th, 2015.

I agree with Taylor that the name list->list is confusing. The proposed specification of that function also strikes me as very poor, for the reason that you have to know what you are putting in to know what you are going to get out. I would much rather see a function named list->random-access-list (and another named list->standard-list or list->linear-list) that accepts any kind of list and produces one of the desired type. That name already suggests that the function may act as the identity operation on some inputs; in a system where all lists were random-access, list->random-access-list might very well always be the identity function.

While we're at it, it would be nice if:
(a) All standard functions that expected lists already performed list->standard-list on their inputs (or could at least be automatically wrapped to do that); and
(b) There were some mechanism to conveniently additively extend list->standard-list to handle other types of lists (after all, why shouldn't it be able to perform vector->list on vectors, and stream->list on streams, and the appropriate other thing on any other kind of sequential data structure that might come along in the future?).

While I am for considering list->random-access-list and list->standard-list for inclusion in the next revision, I am not seriously suggesting addressing points (a) and (b) in this document. Rather, the conundrum we are currently having about the interoperation of random-access lists with regular lists is an example of the fact that Scheme has no standard mechanism for generic operations, and that the core functions in the language are not generic---a defect that has bothered me about Scheme for years. Is the time ripe for SRFI 102: Generic Operations?
Best, ~Alexey On Fri, Sep 18, 2009 at 5:27 PM, David Van Horn <dvanhorn@xxxxxxxxxxxxxxx> wrote: > Taylor R Campbell wrote: >> >> Date: Fri, 18 Sep 2009 17:09:48 -0400 >> From: David Van Horn <dvanhorn@xxxxxxxxxxxxxxx> >> >> Here is a proposal for converting between representation of pairs and >> lists. >> >> I think the names PAIR->PAIR and LIST->LIST are very confusing. Also, >> both directions are necessary. How about using more descriptive, if >> more verbose names, such as SEQUENTIAL-LIST->RANDOM-ACCESS-LIST or >> LINEAR-LIST->RANDOM-ACCESS-LIST? Also, are the pair-only operations >> useful? > > As proposed, they do go both directions. I suspect the pair only operation > is not useful, but I'm not sure. I will punt on that one for now. As for > names, I like list->list; it's suggestive of it potentially being an > identity operation, but I'd like to hear what others think. > > David > >
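The additively extensible conversion Alexey describes in point (b) exists in other languages; for illustration, Python's functools.singledispatch supports exactly this kind of after-the-fact registration. This is a cross-language sketch of the idea, not a proposal for SRFI 101 itself:

```python
from functools import singledispatch

@singledispatch
def to_list(seq):
    # Default: no conversion known for this type.
    raise TypeError(f"cannot convert {type(seq).__name__} to a list")

@to_list.register
def _(seq: list):
    return seq            # identity on "standard" lists

@to_list.register
def _(seq: tuple):        # an extension registered additively, after the fact
    return list(seq)
```

Any library can later register its own sequence type (streams, vectors, ...) without touching the original definition, which is the property point (b) asks for.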
Do Iranian girls talk dirty? Posted 27 June 2006 - 10:46 PM How about yourself? (Considering you're a girl) do you use rough language? Or do your female friends have the habit of conducting such behaviour? Posted 27 June 2006 - 10:49 PM Posted 27 June 2006 - 10:57 PM depends who im with.lets put it that way. if its with my family i wouldnt dare. if its with my friends definetly. and with guys.ohh yess.lmao. Posted 27 June 2006 - 11:05 PM Posted 27 June 2006 - 11:08 PM Posted 27 June 2006 - 11:10 PM We have some papas who can teach her manners! Persian soldier shall punish them! Posted 27 June 2006 - 11:13 PM i will never forget her.lol.not in a good way though i ws soo pissed off. i think persian girls are the type you want your parents to meet but then when its only the girl and you they are a completely different person. there is like barely any persians in ocoee where i live.actually there is none in my town but all the guys always ask me about persian girls and theyare like are they as crazy as you and im like yeahh persian girls are the hottest. Posted 28 June 2006 - 12:20 AM i voted for the "they act like hazrate maryam but only public" one i dont really like to use dirty words.. khosham nemiad fosh bedam... sometimes u through in a couple but if i do it constantly haalam az khodam beham mikhore Posted 28 June 2006 - 12:55 AM Hmm, it's been my experience that no, they do not talk dirty, but they are all around bitchy to compensate. haha Posted 28 June 2006 - 01:25 AM “Everything interests me, but nothing holds me.” — Fernando Pessoa Posted 28 June 2006 - 01:32 AM but like i dont get it what do you mean by dirty? like freeky? hahah, this noobie has a lot to learn about us here Posted 28 June 2006 - 01:49 AM dasteh harchi pesare az posht bastan bekhoda! man zadam,, persian girls should learn some class act from persian guys,, and that's the honest truth .... you guys start playing the nice innocent girl ..... 
vali voy behaleh vaghti ke ooooooooon zaboooonetoooon mesleh mar shoroooo be nish zadan mikoneh Posted 28 June 2006 - 05:54 AM I don't think iranian girls talk more dirty than girls from other cultures or countries, however in Iran, I think girls fosh less than the guys there, but abroad, or here, they fosh more than the iranian guys, generally, or from what ive seen.. I wouldnt say i fosh that much, however, when I fosh, I FOSH. Posted 28 June 2006 - 07:07 AM Well, girls probably curse as much as guys when they are together Posted 28 June 2006 - 07:33 AM بسی رنج بردم در این سال سی|عجم زنده کردم به دین پارسی"-فردوسی" Posted 28 June 2006 - 08:39 AM I think there were a couple of times that I did....but those were special circumstances Posted 28 June 2006 - 10:19 AM beineh dokhtarayeh digeh, faghat dokhtar hayeh latin va german ro didam keh pa beh payeh 'bandeh'' talk dirty harf mizadan. dokhtar irunia khosh bakhtaneh lat nistan ! lat mikhai, bia france ya hamin U.S.A Posted 28 June 2006 - 02:15 PM 1 user(s) are reading this topic 0 members, 1 guests, 0 anonymous users
I am currently using SonarQube Enterprise 9.4. I am trying to set up a scrape_config job to have my Prometheus server monitor SonarQube. We’ve decided to go with using a system passcode for authenticating the API calls to the endpoint, I’ve done some research on this and it seems that the config would possibly be something like this (note: provided two diff. options for the authorization section below): - job_name: 'sonarqube' type: APIKEY OR X-Sonar-Passcode From what I understand, Prometheus only provides the ability to use basic auth or bearer tokens for authentication. (Though it seems like there may be talk of providing the ability to pass in a custom API key in the future.) However, when testing out this particular format using X-Sonar-Passcode as the authorization type for the authorization section in the scrape_config file, it returns a 403 error: - job_name: 'sonarqube' So my question is: For authenticating calls to the /api/monitoring/metrics endpoint, is there are particular authorization type that needs to be used in Prometheus scrape_config files? (Or is there possibly a way to use the SonarQube system passcode with basic auth instead?) Thanks for the quick reply! I did see that page, however it doesn’t seem to address how to use a SonarQube system passcode for authentication specifically, just tokens and basic auth. There’re also these two sections covering Prometheus monitoring in the documentation: It says you can access the endpoint in three ways, including using X-Sonar-Passcode: X-Sonar-Passcode: xxxxx header: You can use X-Sonar-passcode during database upgrade and when SonarQube is fully operational. Define X-Sonar-passcode in the sonar.properties file using the When I curl the SonarQube metrics endpoint using X-Sonar-Passcode as a header, it works. However, when I include X-Sonar-Passcode as an Authorization Type in a Prometheus scrape configuration, it returns a 403 error. 
I know that SonarQube provides the ability via its helm chart to have this scrape_config file autogenerated using a podmonitor; however, my company only uses manually added scrape config jobs for adding new services to be monitored by Prometheus. So I guess I’m trying to figure out if SonarQube has a preference for the Authorization Type used in a Prometheus scrape_config file? (It seems that my including the wrong Authorization Type here would be the reason for the returned 403 error.) Hi @Alexandra , if i recall correctly you can use the monitoring passcode as a bearer token for prometheus as well to access the monitoring endpoint. If you want to use user credentials, bear in mind that sonarqube can not access user credentials until it is fully loaded, meaning that there will not be any metrics until the pod is marked healthy. I would use the bearer token EDIT: found the old ticket with more information and a confirmation about the bearer token: [SONAR-15688] - Jira hope that helps This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.
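Following the bearer-token answer above, a scrape_config along these lines should work in recent Prometheus versions (the `authorization` block with `credentials` requires Prometheus 2.26+; host, port, and the passcode value are placeholders):

```yaml
scrape_configs:
  - job_name: 'sonarqube'
    metrics_path: '/api/monitoring/metrics'
    authorization:
      type: Bearer
      credentials: '<your-system-passcode>'
    static_configs:
      - targets: ['sonarqube.example.com:9000']
```

For production use, `credentials_file` is preferable to an inline `credentials` value so the passcode stays out of the config file.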
Noisy error on malformed CSS

Much like #179, I've been getting this from something malformed:

2015-09-30 14:43:49 DOMXPath::query(): Invalid predicate
#0 System_ErrorHandler->ERROR_HANDLER
#1 DOMXPath->query line 287
#2 Pelago\Emogrifier->process line 211
#3 Pelago\Emogrifier->emogrify line 44

That line is:

$nodesMatchingCssSelectors = $xpath->query($this->translateCssToXpath($selector['selector']));

So it may well be fixed by a fix for #179 by eliminating the bogus selector, but it shouldn't throw a noisy error like this anyway. Better to just ignore the content. When I've tracked down the content responsible for this error, I'll add it in here so we can build a test for it.

What CSS is causing this problem?

I got the same problem, in /var/www/vendor/pelago/emogrifier/Classes/Emogrifier.php – DOMXPath::query('//a[href^="tel"]') at line 723

Okay, so it looks like Emogrifier currently does not support negative attribute selectors (i.e., this is a missing feature). Anyone willing to implement this?

That just sounds like XPath and CSS selectors do not share the same syntax for that kind of lookup.

I suspect in my case it was simply malformed CSS rather than a missing feature - e.g. if someone puts HTML tags in the CSS, which I've seen. From the source of my CSS, I strongly suspect it's just random junk rather than anything as sophisticated as this missing selector, so the rename may not be quite appropriate.

Okay, I've reverted the changes to this ticket. Thanks for the input! (We already have #227 for the missing feature anyway.) @Renkas: So it looks like you're experiencing #227.

I've managed to track down some CSS that's causing this error - it's an empty media query. In the Emogrifier class, the extractMediaQueriesFromCss method finds the media queries, but having an empty one makes it go out of sync and it starts treating a selector as properties. I've set up the content and the regex it uses here, and you can see it going wrong.
Following on from that, when it gets to existsMatchForCssSelector, the first CSS selector it passes is:

} table.body

which is invalid, but still gets translated into the invalid XPath:

//}//table[contains(concat(" ",@class," "),concat(" ","body"," "))]

After which I get the error:

2016-04-27 14:43:19 DOMXPath::query(): Invalid expression
#0 My_ErrorHandler->ERROR_HANDLER
#1 DOMXPath->query line 956
#2 Pelago\Emogrifier->existsMatchForCssSelector line 913
#3 Pelago\Emogrifier->copyCssWithMediaToStyleNode line 391
#4 Pelago\Emogrifier->process line 276
#5 Pelago\Emogrifier->emogrify line 44

So it would be good if Emogrifier could cope with a missing media query body. Meanwhile, I'll try to track down why my media queries are empty - I suspect an old version of CSSTidy may be at fault.
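Until the parser copes with this upstream, a pragmatic workaround is to strip empty media queries from the CSS before handing it to emogrify. Here is a rough sketch of the idea in Python (the regex is deliberately simple, and the same approach would translate to PHP's preg_replace):

```python
import re

def strip_empty_media_queries(css):
    """Remove `@media ... { }` blocks that contain no rules at all."""
    return re.sub(r'@media[^{]*\{\s*\}', '', css)

# An empty media query followed by a normal rule, as in the bug report.
css = "@media only screen { } table.body { color: red; }"
cleaned = strip_empty_media_queries(css)
```

Note this only handles completely empty blocks; nested or rule-containing media queries pass through untouched.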
How a website is loaded in our browser

When I type www.google.com in my browser's address bar, what exactly happens technically, and how does everything get loaded? Assuming a plain HTTP page is being loaded, what is the role of the DNS server, IP address, MAC address, subnet mask, proxy settings, and default gateway in this case? Does it make any difference if I am in a different class of network?

So, you want someone to explain TCP/IP, DNS, HTTP, Ethernet, browsers and your OS's networking system all at once?

You are asking about many things at once; it's a big topic. Still, in short: when you type www.google.com (or any other site name), the request goes to the DNS server, which translates the hostname into an IP address. Read more here: http://en.wikipedia.org/wiki/Domain_Name_System Then the request goes to the server where the website is hosted; the server providing the hosting service contains the content that has to be shown to the world. Read about the Apache server: http://en.wikipedia.org/wiki/Apache_HTTP_Server Subnet: http://en.wikipedia.org/wiki/Subnetwork

Does it make any difference if I am in a different class of network? No, it doesn't make any difference. (Study routers: http://en.wikipedia.org/wiki/Routers)

Points to help you out:
- Every computer that belongs to a network - including yours - has an IP address.
- Every network has hosts under it. The network may be divided into subnets.
- IP addressing is hierarchical. This helps in routing.
- IP addresses may be assigned manually or by a DHCP server (DHCP - Dynamic Host Configuration Protocol).
- All packets sent to your IP address come through your ISP's network - this includes switches and routers, which forward packets from other networks toward your IP address.
- Once packets reach the nearest switch, the switch uses your MAC address to deliver them to your computer.
- The MAC address is obtained by ARP.
- The gateway address is the path through which packets are sent out of your network or your ISP's.
- Proxy servers are servers that allow connections through them.

To understand more about how this works, download Wireshark. Start the sniffer and then load google.com in your browser. You will notice the following:
1. The browser first sends a DNS request with the hostname to the DNS server of your ISP (or your network, if any) - DNS finds the IP address for the hostname.
2. The DNS server replies with the IP address of the web server.
3. The browser then sends the HTTP request, in a form like: GET /index.html HTTP/1.1
4. The server responds with the requested resource, and the data is sent to the user.

Usually, if a webpage is requested, it is in HTML format (with JavaScript, CSS, etc.). This is then parsed and processed by the browser to produce the webpage we see.

To test this:

ON LINUX, type telnet stackoverflow.com 80 in the terminal. As soon as it gets connected, type the following (quickly, before it gets disconnected):
GET /index.html HTTP/1.1 (enter)
Host: stackoverflow.com (enter)(enter)
to see the response.

ON WINDOWS, download the PuTTY client, fill in the host as stackoverflow.com, port as 80, and choose Connection type Telnet. As soon as it gets connected, repeat the same steps as above to see the response.

The examples shown above illustrate how things work from Layer 7, but not Layer 3+ from your device's perspective. I would look at using tcpdump/wireshark to dump all of the network packets if you're interested in those kinds of low-level details. An example is provided below (run on FreeBSD). Notes:
- Be sure to start wireshark/tcpdump before your web browser/client so the packets get captured.
- Specify the right port when starting wireshark/tcpdump; filtering the connection via a DNS address might not work in all cases if the remote webserver has a load balancing/failover setup.
Window with tcpdump: # tcpdump -A tcp port 80 tcpdump: verbose output suppressed, use -v or -vv for full protocol decode listening on em0, link-type EN10MB (Ethernet), capture size 65535 bytes capability mode sandbox enabled 01:48:17.917640 IP <IP_ADDRESS>.50636 > nuq05s01-in-f20.1e100.net.http: Flags [P.], seq<PHONE_NUMBER>:1631738227, ack 30720002, win 65535, length 26 E..BP<EMAIL_ADDRESS>...J}.t...PaBYY....P...F5..GET /index.html HTTP/1.1 01:48:17.918119 IP nuq05s01-in-f20.1e100.net.http > <IP_ADDRESS>.50636: Flags [.], ack 26, win 65535, length 0 <EMAIL_ADDRESS>....P......aBYsP...'+........ 01:48:18.072501 IP <IP_ADDRESS>.50636 > nuq05s01-in-f20.1e100.net.http: Flags [P.], seq 26:28, ack 1, win 65535, length 2 E..*P<EMAIL_ADDRESS>...J}.t...PaBYs....P...F... 01:48:18.072662 IP nuq05s01-in-f20.1e100.net.http > <IP_ADDRESS>.50636: Flags [.], ack 28, win 65535, length 0 <EMAIL_ADDRESS>....P......aBYuP...')........ 01:48:18.074353 IP nuq05s01-in-f20.1e100.net.http > <IP_ADDRESS>.50636: Flags [P.], seq 1:687, ack 28, win 65535, length 686 <EMAIL_ADDRESS>....P......aBYuP...Z...HTTP/1.0 400 Bad request: request header 'Host' missing Content-type: text/html; charset="iso-8859-1" &lt;html&gt; &lt;body&gt; &lt;h3&gt; Request denied by WatchGuard HTTP proxy. &lt;/h3&gt; &lt;b&gt; Reason: &lt;/b&gt; request header 'Host' missing &lt;br&gt; &lt;hr size="1" noshade&gt; &lt;b&gt; Method: &lt;/b&gt; GET &lt;br&gt; &lt;b&gt; Host: &lt;/b&gt; <IP_ADDRESS> &lt;br&gt; &lt;b&gt; Path: &lt;/b&gt; /index.html &lt;br&gt; &lt;hr size="1" noshade&gt; &lt;/body&gt; &lt;!-- PAD --&gt;&lt;/html&gt; 01:48:18.074512 IP nuq05s01-in-f20.1e100.net.http > <IP_ADDRESS>.50636: Flags [F.], seq 687, ack 28, win 65535, length 0 <EMAIL_ADDRESS>....P......aBYuP...$z........ 01:48:18.074683 IP <IP_ADDRESS>.50636 > nuq05s01-in-f20.1e100.net.http: Flags [.], ack 688, win 65014, length 0 E..(P<EMAIL_ADDRESS>...J}.t...PaBYu....P...F... 
01:48:18.077023 IP <IP_ADDRESS>.50636 > nuq05s01-in-f20.1e100.net.http: Flags [F.], seq 28, ack 688, win 65535, length 0 E..(P<EMAIL_ADDRESS>...J}.t...PaBYu....P...F... 01:48:18.077070 IP nuq05s01-in-f20.1e100.net.http > <IP_ADDRESS>.50636: Flags [.], ack 29, win 65535, length 0 <EMAIL_ADDRESS>....P......aBYvP...$y........ Window with telnet: # telnet www.google.com 80 Trying <IP_ADDRESS>... Connected to www.google.com. Escape character is '^]'. GET /index.html HTTP/1.1 HTTP/1.0 400 Bad request: request header 'Host' missing Content-type: text/html; charset="iso-8859-1" &lt;html&gt; &lt;body&gt; &lt;h3&gt; Request denied by WatchGuard HTTP proxy. &lt;/h3&gt; &lt;b&gt; Reason: &lt;/b&gt; request header 'Host' missing &lt;br&gt; &lt;hr size="1" noshade&gt; &lt;b&gt; Method: &lt;/b&gt; GET &lt;br&gt; &lt;b&gt; Host: &lt;/b&gt; <IP_ADDRESS> &lt;br&gt; &lt;b&gt; Path: &lt;/b&gt; /index.html &lt;br&gt; &lt;hr size="1" noshade&gt; &lt;/body&gt; &lt;!-- PAD --&gt;&lt;/html&gt; Connection closed by foreign host.
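The same manual exchange can also be scripted. Below is a minimal sketch using only Python's standard library (the host and path are placeholders you would substitute): it opens a raw TCP connection to port 80 and writes the request line and Host header by hand, just like the telnet session above.

```python
import socket

# Minimal sketch of the telnet exercise above: connect to a web server's
# port 80 and write an HTTP/1.1 request by hand. HTTP/1.1 requires the Host
# header - omitting it is exactly what produced the "400 Bad request:
# request header 'Host' missing" responses shown in the captures above.

def raw_http_get(host, path='/', port=80):
    with socket.create_connection((host, port), timeout=10) as s:
        req = f'GET {path} HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n'
        s.sendall(req.encode('ascii'))
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:          # server closed the connection
                break
            chunks.append(data)
    return b''.join(chunks).decode('latin-1')
```

The status line and headers come back as plain text, so the whole response can be inspected the same way as the tcpdump payloads above.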
STACK_EXCHANGE
Receiving Keyboard Input (Windows Embedded CE 6.0) The keyboard is an important means of user input on many Windows Embedded CE–based devices. Windows Embedded CE maintains a hardware-independent keyboard model that enables it to support a variety of keyboards. The OEM usually determines the keyboard layout for a specified Windows Embedded CE–based device. At the lowest level, each key on the keyboard generates a scan code when the key is pressed and released. The scan code is a hardware-dependent number that identifies the key. Unlike Windows-based desktop operating systems, Windows Embedded CE has no standard set of keyboard scan codes. Your application should rely only on scan codes that are supported on the target device. The keyboard driver translates or maps each scan code to a virtual-key code. The virtual-key code is a hardware-independent number that identifies the key. Because keyboard layouts vary from language to language, Windows Embedded CE offers only the core set of virtual-key codes that are found on all keyboards. This core set includes English characters, numbers, and a few critical keys, such as the function and arrow keys. Keys that are not included in the core set also have virtual-key code assignments, but their values vary from one keyboard layout to the next. You should depend only on the virtual-key codes that are in the core set. In addition to mapping, the keyboard driver determines which characters the virtual key generates. A single virtual key generates different characters depending on the state of other keys, such as the SHIFT and CAPS LOCK keys. Do not confuse virtual-key codes with characters. Although many of the virtual-key codes have the same numeric value as one of the characters that the key generates, the virtual-key code and the character are two different elements. For example, the same virtual key generates the uppercase "A" character and the lowercase "a" character. 
After translating the scan code into a virtual-key code, the device driver posts a keyboard message that contains the virtual-key code to the message queue for the application. The main user input thread for the application then calls back to the driver for each key event to obtain the characters that correspond to the key. The driver posts these characters with the key event to the foreground message queue for the application. When the application retrieves this keyboard message from the message queue, the message is stored. When the application later calls TranslateMessage, the driver places the characters that were posted with the key on the queue for retrieval. Each thread maintains its own active window and focus window. The active window is a top-level window. The focus window is either the active window or one of its descendant windows. The active window of this thread is considered the foreground window. The device driver places keyboard messages in the message queue of the foreground thread. The thread message loop pulls the message from the queue and sends it to the window procedure of the thread focus window. If the focus window is NULL, the window procedure of the active window receives the message. The following illustration shows the keyboard-input model.
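The scan-code to virtual-key to character pipeline described above can be sketched as follows. The scan codes and layout table here are invented for illustration (Windows Embedded CE standardizes neither), but the sketch shows the key point: the same virtual key yields different characters depending on the SHIFT and CAPS LOCK state.

```python
# Illustrative sketch of the keyboard mapping pipeline: scan code -> virtual
# key -> character. The scan codes and layout table below are hypothetical;
# real values are hardware- and layout-dependent.

SCAN_TO_VK = {0x1E: 'VK_A', 0x02: 'VK_1'}   # hypothetical scan codes
LAYOUT = {                                   # virtual key -> (plain, shifted)
    'VK_A': ('a', 'A'),
    'VK_1': ('1', '!'),
}

def translate(scan_code, shift=False, caps_lock=False):
    """Map a scan code to (virtual_key, character)."""
    vk = SCAN_TO_VK[scan_code]
    plain, shifted = LAYOUT[vk]
    # CAPS LOCK only affects letters; SHIFT affects everything, and
    # SHIFT + CAPS LOCK cancel out for letters.
    upper = shift != (caps_lock and plain.isalpha())
    return vk, shifted if upper else plain
```

Note that `VK_A` is one virtual-key code but produces either "a" or "A" - the virtual-key code and the character are two different elements, as described above.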
OPCFW_CODE
Creating and maintaining tools involves three main factors: the author choosing to prepare the tool, how rapidly they update their tool repository after the underlying analysis package source code repository has been updated, and when each Galaxy server administrator chooses to update their installed tools. In brief, many project-community-supported tools are carefully selected, well supported and quickly updated. The longer answer below describes how the open source community contributes to the creation and support of tools. Galaxy allows any open source command-line analysis package to be incorporated into a tool by preparing a specialised "wrapper" document and some automated tests. Developers can learn to create these using Galaxy training resources, and can upload them to the ToolShed, a public, open, tool sharing repository. Thousands of independent tool wrapper authors have contributed to the 8000+ tools currently available in the ToolShed. These can be automatically installed to any Galaxy server, where the third-party packages and dependencies can be downloaded and installed, or managed as a secure container if preferred. The IUC (Intergalactic Utilities Commission) is an open, community-controlled committee responsible for publishing coding standards, training material, and guidelines for tool wrapper authors, and for encouraging "best practice" tool wrappers, including automated tool tests and security recommendations. The IUC also prepares and maintains its own tool wrappers in response to community need for new and established open-source analysis packages. Fully automated "bot" software supports continuous integration by regularly checking every IUC tool analysis package repository, creating an "update" pull request in the wrapper repository, and notifying the community whenever a new version has been released. Independent review is mandatory for all changes to IUC tool wrappers before they can be published to the ToolShed.
Some important toolkits for specialised data rich domains such as proteomics and chemoinformatics, are maintained by community contributors using IUC best practice infrastructure and methods, to reliably keep their tools up to date. Whenever the underlying third-party analysis package repository releases a new version, a tool wrapper must be modified and tested, then uploaded to the ToolShed as a new version that installs and uses the updated package. Tool wrappers can be very complicated, but in terms of lines of source code and bugs, they usually represent a relatively small fraction of any Galaxy tool. The underlying open source analysis code and dependencies are the greatest source of complexity, and of software errors. No systematic review of that very large volume of third party open source code is undertaken by the Galaxy community. The final step in terms of propagating updated toolshed wrappers takes place at each Galaxy server where the tool has been installed. The server administrator can choose to have all installed tools automatically updated, or to manage updates manually, through the server administrative interface. When a new version of any tool is installed, it becomes the default version for all users. Older versions are retained and can be selected instead of the default when a job is run. Retaining historical tool versions allows users to computationally replicate previous jobs, with exactly the same analysis package and dependency versions. Versioning allows accurate replication, even after years may have passed, during which multiple updates of the analysis package and dependencies may have been installed. Replication includes all that older version’s software errors present when the original analysis was completed. Alternatively, any previous job can be re-run using the same tool settings and input data, but using an updated version of the analysis package and dependencies in place of the original one. 
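The version-retention behaviour described above can be sketched as follows. This is a hypothetical data model for illustration only, not Galaxy's actual implementation; the tool id and version strings in the example are likewise just placeholders.

```python
# Sketch of per-tool version retention: the newest installed version becomes
# the default for new jobs, but older versions remain selectable so that
# earlier jobs can be replicated exactly. Hypothetical model, not Galaxy code.

class ToolRegistry:
    def __init__(self):
        self.versions = {}  # tool id -> installed versions, oldest first

    def install(self, tool_id, version):
        self.versions.setdefault(tool_id, []).append(version)

    def default_version(self, tool_id):
        # The most recently installed version is the default.
        return self.versions[tool_id][-1]

    def resolve(self, tool_id, pinned=None):
        # A re-run may pin the exact version used in the original job.
        installed = self.versions[tool_id]
        if pinned is not None:
            if pinned not in installed:
                raise KeyError(f'{tool_id} {pinned} is not installed')
            return pinned
        return self.default_version(tool_id)
```

Retaining the full version list is what makes computational replication possible: a job record only needs to store the version it ran with, and `resolve` can return either that pinned version or the current default.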
In summary, like most things in Galaxy, parts of the community take responsibility for tool wrapper creation and upgrades in response to independent underlying open source analysis packages being updated. Many tools are supported by active community groups, including the IUC. Transparent version control is integrated into the propagation of ToolShed tools to Galaxy servers, supporting traceability and computational replication for open science analyses.
OPCFW_CODE
Robust and reliable authentication is the essential first line of security for any application or system. Make authentication too difficult and users won’t use your solution, make it too easy and bad guys will. There are various flavors of authentication, from simple username/passwords solutions through multi-factor and risk-based authentication systems that provide very high levels of security. Here are a couple of noteworthy solutions – both of which have been available for quite some time – that should be on your short list if you’re trying to protect an application, a network or your data: - TextPower offers an elegant solution called TextKey that provides an interesting twist on two-factor authentication. Many banks, cloud providers and others offer two-factor authentication that sends a code to your mobile phone and asks you to enter it after you’ve entered a username and password. While this scheme does provide an added layer of security, it’s still subject to man-in-the-middle or man-in-the-browser attacks and other hacking exploits. However, what TextKey does is reverse the process for using a mobile phone for authentication purposes: instead of receiving a code via mobile to enter into a browser, the secure application displays a code and asks the user to text it to the application. Because every mobile phone has a Unique Device Identifier (UDID), the mobile carrier will not send the SMS message if someone is trying to spoof the system because the sending mobile number (already stored in the application’s database) and the UDID must match. In short, authentication cannot take place simply because a bogus user cannot get their SMS through. TextKey also uses a number of other authentication criteria to provide very solid protection against hackers and others. - Confident Technologies has developed an authentication solution that studies have proven to be quite secure despite its simplicity. 
Instead of a user entering a password, he or she will identify images within categories that have previously been memorized. For example, when setting up access to an application, a user will select three categories of images, such as planes, rockets and dogs. When he or she attempts to access a system, there will be a presentation of a grid of images from which the user will select the images that correspond to predetermined categories. The images will change each time access is attempted, but will always be consistent with their predetermined choices. The company also offers an image-based CAPTCHA system, far better than the text-based solutions that are widely deployed. Studies have shown that image-based authentication is easier to use than password-based systems and is more resistant to brute force attacks and dictionary attacks. In one study, users were asked to set up text-based passwords and image passwords. After 16 weeks, only 40% of users could remember the former, but 100% could remember the latter. When asked to change their passwords and images, 75% could remember their text-based passwords, but all of the subjects could remember the changed images. Add to this the fact that image-based systems are also more resistant to keystroke loggers, a serious problem for many. Authentication is a necessary evil, but there are solutions that can offer greater security while not making life more difficult for users.
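The image-category scheme described above can be sketched as follows. The images, categories, and grid layout are invented for illustration; this is not Confident Technologies' actual implementation.

```python
import random

# Sketch of image-category authentication: at enrollment the user memorizes
# categories (e.g. planes, rockets, dogs); at login a freshly shuffled grid
# is shown and the user must pick exactly the images in those categories.
# All data below is invented for illustration.

IMAGES = {  # image id -> category
    'img1': 'planes', 'img2': 'rockets', 'img3': 'dogs',
    'img4': 'cats',   'img5': 'boats',   'img6': 'planes',
}

def build_grid(rng=random):
    # Every login attempt presents the images in a fresh order.
    grid = list(IMAGES)
    rng.shuffle(grid)
    return grid

def authenticate(user_categories, grid, selected):
    # Succeeds iff the selection is exactly the set of grid images
    # belonging to the user's memorized categories.
    expected = {img for img in grid if IMAGES[img] in user_categories}
    return set(selected) == expected
```

Because the concrete images change on every attempt while the categories stay fixed, a keystroke logger or shoulder-surfer capturing one login learns little about the next one.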
OPCFW_CODE
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace DND_Monster
{
    public static class GibberingMouther
    {
        public static void Add()
        {
            // new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "", attack = null, isDamage = false, isSpell = false, saveDC = 0, Description = "" },
            OGLContent.OGL_Abilities.AddRange(new List<OGL_Ability>()
            {
                new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "Aberrant Ground", attack = null, isDamage = false, isSpell = false, saveDC = 0, Description = "The ground in a 10-foot radius around the {CREATURENAME} is doughlike difficult terrain. Each creature that starts its turn in that area must succeed on a DC 10 Strength saving throw or have its speed reduced to 0 until the start of its next turn." },
                new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "Gibbering", attack = null, isDamage = false, isSpell = false, saveDC = 0, Description = "The {CREATURENAME} babbles incoherently while it can see any creature that isn't incapacitated. Each creature that starts its turn within 20 feet of the {CREATURENAME} and can hear the gibbering must succeed on a DC 10 Wisdom saving throw. On a failure, the creature can't take reactions until the start of its next turn and rolls a d8 to determine what it does on its turn. On a 1 to 4, the creature does nothing. On a 5 or 6, the creature takes no action or bonus action and uses its movement to move in a randomly determined direction. On a 7 or 8, the creature makes a melee attack against a randomly determined creature within its reach or does nothing if it can't make such an attack." },
            });

            // template
            #region
            // new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "", isDamage = false, isSpell = false, saveDC = 0, Description = ""},
            // new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "", isDamage = true, isSpell = false, saveDC = 0, Description = "", attack = new Attack()
            //{
            //    _Attack = "Melee Weapon Attack",
            //    Bonus = "1",
            //    Reach = 5,
            //    RangeClose = 0,
            //    RangeFar = 0,
            //    Target = "one target",
            //    HitDiceNumber = 2,
            //    HitDiceSize = 6,
            //    HitDamageBonus = 3,
            //    HitAverageDamage = 10,
            //    HitText = "",
            //    HitDamageType = "Acid"
            //}
            //},
            #endregion

            OGLContent.OGL_Actions.AddRange(new List<OGL_Ability>()
            {
                new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "Multiattack", isDamage = false, isSpell = false, saveDC = 0, Description = "The {CREATURENAME} makes one bite attack and, if it can, uses its Blinding Spittle."},
                new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "Bites", isDamage = true, isSpell = false, saveDC = 0, Description = "", attack = new Attack()
                {
                    _Attack = "Melee Weapon Attack",
                    Bonus = "2",
                    Reach = 5,
                    RangeClose = 0,
                    RangeFar = 0,
                    Target = "one target",
                    HitDiceNumber = 5,
                    HitDiceSize = 6,
                    HitDamageBonus = 0,
                    HitAverageDamage = 17,
                    HitText = "If the target is Medium or smaller, it must succeed on a DC 10 Strength saving throw or be knocked prone. If the target is killed by this damage, it is absorbed into the {CREATURENAME}.",
                    HitDamageType = "piercing"
                } },
                new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "Blinding Spittle (Recharge 5-6)", isDamage = false, isSpell = false, saveDC = 0, Description = "The {CREATURENAME} spits a chemical glob at a point it can see within 15 feet of it. The glob explodes in a blinding flash of light on impact. Each creature within 5 feet of the flash must succeed on a DC 13 Dexterity saving throw or be blinded until the end of the {CREATURENAME}'s next turn."},
            });

            // new OGL_Ability() { OGL_Creature = "Gibbering Mouther", Title = "", attack = null, isDamage = false, isSpell = false, saveDC = 0, Description = "" }
            OGLContent.OGL_Reactions.AddRange(new List<OGL_Ability>() { });

            // Template
            #region
            //new OGL_Legendary()
            //{
            //    OGL_Creature = "Gibbering Mouther",
            //    Title = "",
            //    Traits = new List<LegendaryTrait>()
            //    {
            //        new LegendaryTrait("", "")
            //    }
            //},
            #endregion
            OGLContent.OGL_Legendary.AddRange(new List<OGL_Legendary>() { });

            OGLContent.OGL_Creatures.Add("Gibbering Mouther");
        }
    }
}
STACK_EDU
I happened to chmod 755 my whole /usr/ dir

I happened to change every file under my /usr/ dir to -rwxr-xr-x with the command "sudo chmod 755 -R /usr/". I have since figured out that some special binaries like "su", "chkpaswd" and "sudo" need the setuid bit, and I have used chmod to restore it on those, which fixed some of the permission problems. But I still have some mysterious trouble: I can't use Dolphin to mount devices automatically, or use KWrite to overwrite system files by entering my password. It just says: Not authorized to perform operation. And now I can't open AppImages at all, because of:

Cannot mount AppImage, please check your FUSE setup. You might still be able to extract the contents of this AppImage if you run it with the --appimage-extract option. See https://github.com/AppImage/AppImageKit/wiki/FUSE for more information open dir error: No such file or directory

I use a Debian 10 based OS, Ubuntu 20.04, and the attached screenshot shows what I have in /usr/. I did this dangerous operation because I wanted to run CAE as an ordinary user. So, can someone tell me whether there are other special files like "sudo" under /usr/ that I don't know about? Thanks!

How are we to know what applications you've installed on your system (and thus what directories you have in /usr/)? We currently don't even know your OS and release, nor whether you're asking about a desktop, a server or some other system. The easiest fix is restoration of data from backups. Basically, you don't need to change the permissions of the files (and dirs) under /usr at all. Does this reference help your understanding of file permissions?

This is what we refer to as a "fatal error". It would be very tedious and lengthy to find out the required permissions of every system file and then correct them. In practice, the only solution here is a reinstall. Note that you can reinstall without reformatting the system file system. Doing so will work as a "repair", updating all of your system files but leaving installed programs and system configuration intact. User configuration is also preserved, because user home folders are not deleted.

Very thankful. So if I reinstall my system without formatting the disk, I won't lose my data, right? But the install program tells me I have to use my free space to install the system. If I repartition my disk, won't it auto-format the disk?

No, but you should have a fully updated backup of your user data before you start this operation anyway. There is always a possibility something goes wrong.

I spent my spare time setting up a virtual machine running Ubuntu, and tried to figure out the differences between the original system and my broken one. After I restored the special permission bits on many key files, it really solved almost every problem. I say "almost" because maybe there are still some things I haven't found yet. Below are some files with the setuid bit set that may help people like me fix serious permission problems (see the attached screenshot):
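A quick way to build such a list yourself is to scan a healthy system (for example, the virtual machine mentioned above) for setuid files and compare it against the broken one. A small sketch:

```python
import os
import stat

# Sketch: walk a tree and report regular files with the setuid bit set, so
# the permissions of a healthy system can be compared against a broken one.
# A blanket `chmod -R 755` destroys exactly these bits.

def is_setuid(mode):
    return bool(mode & stat.S_ISUID)

def find_setuid(root):
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable entry, skip it
            if stat.S_ISREG(mode) and is_setuid(mode):
                found.append(path)
    return found
```

Running `find_setuid('/usr')` on the reference machine gives the list of paths whose setuid bit must be restored (with `chmod u+s`) on the broken one.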
STACK_EXCHANGE
Analysis Services is not available for any Express variant. If the current 'Start Mode' is Disabled, as you can see in the Configuration Manager, you need to change the 'Start Mode' either to 'Manual' or 'Automatic' before starting it. I have yet to come across a well-documented list of those requirements, all in one place. I have found some installations have some of these, but not all of these features. This version is best for small application developers who need to include reporting capabilities with their applications. To be honest I would like the full-text search, the Management Studio Express, and the BI Dev Studio, but they are lacking. We encourage you to keep this option checked, as we review the product feedback on an ongoing basis. I have no idea why it was not letting me add anything additional. If not, you will be prompted to install. I will answer as many questions as possible. How many rows can I expect to cope with? Anyway, it installed completely now. Make any changes that are necessary for your environment. Will there be a way then to update the Express edition to the Developer one that I have? It can certainly be used for significant production applications. The server was not found or was not accessible. To start the installation immediately, click Run. Free to download, free to redistribute, free to embed, and easy for new developers to use immediately. I don't know what I am doing wrong. Any help would be very much appreciated. The Installation Center will then launch. I know that depends on the size of the rows, queries, etc., but I need to make a decision before I spec and build, so an order-of-magnitude guess would be invaluable. Like you've recommended, I've found the place to update and did it; it went fast (too fast?). For convenience, you can use Microsoft Update to automatically receive the latest patches and updates, enabling a high level of security and the latest features. It is ideal for small server applications and local data stores.
I now want to publish something to my local computer and it tells me I have insufficient privileges to do anything with Reporting Services. Any work done on automated installs failing a lot? I'd want the list to include privileges required by any of the services, so that no matter what installation options I choose, I'd know exactly which privileges to put in place from that list. Once I did that, everything works. In one application, we used this at over 1500 healthcare clinics for many years; it is still in use today. The error message I get is: Reporting Services error The permissions granted to user. Launch the package; you will then see the contents extracted to a temporary location. I have found many Google searches but none of them resolve the problem. I have a Dell Precision laptop with Intel core 2 dual 1. The only real difference is Express vs Enterprise. This can be beneficial to other community members reading the thread. During installation, it does not ask anything about instance creation. Maybe someone has some thoughts on that or some facts. Thanks in advance and hope to hear from you guys soon. If there are more than 10 employees on the client site, the client cannot use Express; rather, he has to purchase a license to use the tool as a database engine. Step 2: Download and install. I'm sure someone will find that useful if they do the same thing that I did. In subsequent attempts, the installations were completed with errors in the Database Engine installation. For some reason I took the default install on the entire package. I have been able to use the database server with no problem. Does anyone know where I can find that information? The installer is called SqlLocal. It is not intended to support production or multi-user workloads.
Sometimes publishers take a little while to make this information available, so please check back in a few days to see if it has been updated. It can run on both Windows® desktop operating systems like Windows 7, 8, 8. For more information, please see here. Thank you for the excellent presentation. Does anyone have any ideas why I can't publish a report to my local reporting services? I am really tired of beating my head over what should not be a problem. Can I remove one of the versions? According to what I've found online this is normal and disabled in the express edition, this is why I'd like to go for the developer one. Thank you Dan for your efforts. Connecting to Visual Studio 2005 requires downloading and installing.
OPCFW_CODE
Fix #452 by use of Local Storage

Checklist

- [x] Appropriate branch selected (all PRs must first be merged to dev before they can be merged to master)
- [x] Any modified or new methods or classes have helpful JSDoc and code is thoroughly commented
- [x] The description lists all applicable issues this PR seeks to resolve
- [x] The description lists any configuration setting(s) that differ from the default settings
- [ ] All tests and CI builds passing

Description

Store, update and retrieve user profile data using Local Storage. Seeks to resolve #452.

- The Auth0 /userinfo endpoint is called only during the login phase; profile data is stored in Local Storage.
- Any update to the user profile is sent to Auth0 AND to Local Storage.
- Subsequent calls to userinfo are bypassed (the data would be stale) and the profile is returned from Local Storage.
- During the logout phase the Local Storage entry is wiped.

This is intended to patch the unexpected Auth0 behaviour until the new work that will use MongoDB to store user information.

Codecov Report

Merging #460 into dev will decrease coverage by 1.37%. The diff coverage is 0%.
@@            Coverage Diff            @@
##              dev     #460     +/-  ##
==========================================
- Coverage   16.47%   15.09%   -1.38%
==========================================
  Files         325      325
  Lines       12337    15832    +3495
  Branches     2340     4803    +2463
==========================================
+ Hits         2032     2390     +358
- Misses       8357    11497    +3140
+ Partials     1948     1945       -3

Impacted Files | Coverage Δ
lib/common/user/Auth0Manager.js | 10.73% <0%> (-1.97%) :arrow_down:
lib/manager/actions/user.js | 14.39% <0%> (-2.28%) :arrow_down:
lib/gtfs/util/graphql.js | 35.29% <0%> (-39.71%) :arrow_down:
lib/manager/components/ProjectSettings.js | 47.82% <0%> (-16.88%) :arrow_down:
lib/common/components/Login.js | 50% <0%> (-16.67%) :arrow_down:
lib/common/components/JobMonitor.js | 55% <0%> (-16.43%) :arrow_down:
lib/manager/components/DeploymentConfirmModal.js | 26.15% <0%> (-15.31%) :arrow_down:
lib/manager/reducers/user.js | 38.46% <0%> (-14.48%) :arrow_down:
lib/editor/reducers/settings.js | 31.03% <0%> (-13.97%) :arrow_down:
lib/common/components/SidebarNavItem.js | 77.55% <0%> (-13.76%) :arrow_down:
... and 263 more

Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 0df58d7...2e8065b.

After seeing this proposed code and the problem description in #452, I think this raises a security issue. It seems that it is possible for an admin to revoke permissions for a particular user, but since all of the app_metadata is stale while appearing valid to the backend, it is possible for the user with revoked permissions to perform unauthorized actions as long as they're logged in on the website - both currently and with these changes. As @landonreed mentioned, we do hope to refactor datatools to store all user data outside of Auth0 in a backend DB (ie Mongo) and always be fetching the latest data from that backend DB to check the user's authorizations before completing certain actions.
This PR does "fix" the issue with stale data for individual user accounts, but a larger refactor is needed to ensure that user data is updated after external events cause changes in user data (ie when an admin updates another user's data).

As @evansiroky mentioned, we're looking to refactor how user data is stored soon. This change will affect how users subscribe to watch feeds and projects, so for now we're going to close this. Thanks for reporting the issue and providing the fix though. We'll keep you updated on the user data storage update progress.

Thank you, looking forward to the MongoDB solution!

On Tue, 29 Oct 2019 at 19:21, Landon Reed wrote: Closed #460 https://github.com/ibi-group/datatools-ui/pull/460.
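The caching scheme described in the PR - fetch the profile once at login, write updates through to both Auth0 and the cache, wipe the cache at logout - can be sketched as follows. The sketch is in Python purely to illustrate the cache logic (the actual PR is JavaScript using the browser's Local Storage), and the provider interface and field names are invented.

```python
# Sketch of the profile-caching flow from the PR description: the identity
# provider is queried once at login, updates are written through to both the
# provider and the cache, and logout wipes the cached entry. All names here
# are illustrative stand-ins for the Auth0 client and Local Storage.

class ProfileCache:
    def __init__(self, provider):
        self.provider = provider   # object exposing userinfo() and update()
        self.store = {}            # stand-in for the browser's Local Storage
        self.calls = 0             # provider round-trips, for illustration

    def get_profile(self, user):
        if user not in self.store:              # first access = login phase
            self.store[user] = self.provider.userinfo(user)
            self.calls += 1
        return self.store[user]                 # later reads skip the provider

    def update_profile(self, user, **fields):
        self.provider.update(user, fields)      # write through to the provider
        self.store.setdefault(user, {}).update(fields)

    def logout(self, user):
        self.store.pop(user, None)              # wipe the entry
```

The sketch also makes the staleness concern visible: nothing here refreshes the cache when the provider's data changes externally, which is exactly why the reviewers preferred a backend store that is consulted on every authorization check.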
GITHUB_ARCHIVE
# chemistry module

import sys

import numpy


class Element():
    def __init__(self, Z, radius, max_bonds, r, g, b):
        # cast here: the values typically arrive as strings read from PSE.txt
        self.Z = int(Z)
        self.radius = float(radius)
        self.max_bonds = int(max_bonds)
        self.color = numpy.array([r, g, b], dtype='float32')

    def __str__(self):
        format_string = ' '.join(['{:4d}', '{:9f}', '{:4d}', '{:9f}' * 3])
        return format_string.format(self.Z, self.radius, self.max_bonds, *self.color)


class Molecule():
    def __init__(self, size=0):
        self.natoms = size
        # set up a PSE dictionary with element data
        self.PSE = {}
        for line in open('PSE.txt'):
            if line.startswith('#'):
                continue
            data = line.split()
            self.PSE[data[0]] = Element(*data[1:])

    def readxyz(self, filename, extension='xyz', center=True):
        if extension != 'xyz':
            print('I can only handle xyz format')
            sys.exit(1)
        f = open(filename)
        # read number of atoms and title
        self.natoms = int(next(f))
        self.hdr = next(f)
        # check number of atoms is valid
        if self.natoms < 0:
            print('Invalid number of atoms (negative)')
            sys.exit(1)
        elif self.natoms > 100000:
            print('Molecule too large (> 100 000 atoms)')
            sys.exit(1)
        # read and store atom coordinates
        data = [line.split() for line in f]
        f.close()
        self.atom_types = [d[0] for d in data]
        self.pos = numpy.array([d[1:] for d in data], dtype='float32')
        self.rad = numpy.array([self.PSE[el].radius for el in self.atom_types], dtype='float32')
        self.col = numpy.array([self.PSE[el].color for el in self.atom_types], dtype='float32')
        # get origin and box size
        self.pos = self.pos - self.pos.min(axis=0)
        self.ori = numpy.array([0, 0, 0], dtype='float32')
        self.box = self.pos.max(axis=0)
        # center if requested
        if center:
            R = sum(self.pos) / self.natoms
            self.pos = self.pos - R

    def genbonds(self):
        # create a distance matrix, rather inefficient since each atom is checked
        # against all the other atoms (memory + time scaling as natoms^2)
        # TODO: block the distance computation into a loop over 3D blocks of ~(3 Å)^3
        # and only consider distances to atoms in adjacent blocks, bringing the
        # scaling down to n instead of n^2.
        delta = [None] * 3
        for i in range(3):
            coord = self.pos[:, i]
            delta[i] = coord - coord.reshape((len(coord), 1))
        # squared distances; the identity term keeps an atom from "bonding" to itself
        dist = delta[0]**2 + delta[1]**2 + delta[2]**2 + 10. * numpy.identity(self.natoms)
        rcov = (self.rad + self.rad.reshape((len(self.rad), 1))) * 2
        # get array of index pairs where the distance is below the covalent cutoff
        self.con = numpy.array(numpy.where(dist - rcov < 0.)).transpose()
        self.dist = numpy.sqrt(dist[self.con[:, 0], self.con[:, 1]])
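The broadcasting trick used in genbonds - building the full n-by-n squared-distance matrix from per-axis coordinate differences - can be illustrated in isolation with three hand-picked points:

```python
import numpy

# Standalone illustration of the n^2 pairwise-distance construction used in
# genbonds: per-axis differences via broadcasting, summed into squared
# distances. The coordinates below are arbitrary example points.

pos = numpy.array([[0.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 2.0, 0.0]], dtype='float32')

d2 = numpy.zeros((len(pos), len(pos)), dtype='float32')
for i in range(3):
    coord = pos[:, i]
    delta = coord - coord.reshape((len(coord), 1))  # (n, n) per-axis differences
    d2 += delta**2

dist = numpy.sqrt(d2)  # symmetric matrix of pairwise distances
```

Thresholding `dist` (or `d2`) against a per-pair cutoff, as genbonds does with the covalent radii, then yields the bond list in one vectorized step.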
STACK_EDU
Was Keturah Abraham's wife or concubine? Keturah was Abraham's wife? Genesis 25:1-2 Abraham had taken another wife, whose name was Keturah. She bore him Zimran, Jokshan, Medan, Midian, Ishbak and Shuah. Keturah was Abraham's concubine? 1 Chronicles 1:32 The sons born to Keturah, Abraham’s concubine: Zimran, Jokshan, Medan, Midian, Ishbak and Shuah. Hi and welcome to the site. Please take time to take the tour and browse our help centre for more info on how this site operates. Also useful is the post how we are different to other sites. Please explain what the contradiction is. It does not necessarily mean that Abraham married Keturah while Sarah was alive. A logical explanation would be that while Sarah was alive Keturah was a concubine and then Abraham married her after Sarah's death - the two facts are not mutually exclusive. This is more of a BH question, I think. The difference might come down to the purpose of each book. Genesis is a literal account and since Abraham was without wife when he bound Keturah to himself--she became his wife, performing the duties of a wife. Chronicles on the other hand is a historical record and perhaps a legal document for things such as inheritance through genealogies and also the royal lineage. So while being a technical difference between words that can mean essentially the same thing, when it comes to inheritance the difference becomes quite relevant. This assertion is my own, but to cite a source that backs up what I say about Chronicles having genealogies gotQuestions summary of Chronicles is suitable. Another site to give perspective is from Bible.org When producing the Septuagint, the translators divided Chronicles into two sections. At that time it was given the title, “Of Things Omitted,” Also, The books of Chronicles seem like a repeat of Samuel and Kings, but they were written for the returned exiles to remind them that they came from the royal line of David and that they were God’s chosen people. 
The genealogies point out that the Davidic promises had their source in those pledged to Abraham that He would make him the father of a great nation, one through which He would bless the nations. As well as, This book also taught that the past was pregnant with lessons for their present. Apostasy, idolatry, intermarriage with Gentiles, and lack of unity were the reasons for their recent ruin. OUTLINE: First Chronicles naturally divides into four sections: (1) The Genealogies or the Royal Line of David (1:1-9:44); (2) the Rise of David or His Anointing (10:1-12:40), (3) The Reign of David (13:1-29:21), and (4) The Assession of Solomon and the Death of David (29:22-30). The word for wife in Genesis 25 could be translated as woman, according to Strong's Lexicon: http://www.blueletterbible.org/lang/lexicon/lexicon.cfm?Strongs=H802&t=KJV
Penned by the IFP Rules Committee Chairman David Flusfeder, the book's purpose is to establish a set of international guidelines for poker players around the world. It hasn't been around for long, but certainly has accomplished a lot.

Poker Login

This is a PHP example that displays a login form (player name and password fields with a submit button) and posts the credentials to the game server. The Poker_API function uses the free PHP libcurl extension to make a post call to your game server, passing it the desired parameters, and then returns either a regular or an associative array with the results. Before saving the code to a file, change the url variable to your actual game server address (with file port and /api path); the password and url are set in an include file.

Login Stats

This is a PHP example that parses through one or more event logs and calculates login statistics. Specifically, for each period of server up time, it displays the total number of logins, unique logins, and peak logins. On the initial page load it retrieves the event log dates and displays a droplist of the files found; the submit button reloads the page with the selected file's info.

Avatars

This example works in version 4 where the avatars are all in one large image; the avatar url and the number of avatars available (64) are set at the top of the script.
Recursion provides the plan that we need, based on the following idea: the routine keeps calling itself on smaller subtasks, so at some point it encounters a subtask that it can perform without calling itself. The solution using recursion is also very short. But the examples here are just to show how recursion is really just a nifty use of a stack data structure; run it too deep and, in Python, you get a stack overflow error. Are there situations when recursion is the only option available to address a problem?

The Sierpinski triangle is a fractal shape made as follows. First you draw one Sierpinski triangle, and then you draw three more smaller Sierpinski triangles, one in each of the three sub-triangles: draw downward-facing triangles in the middle of each of them, and repeat. It looks like the Triforce from the Legend of Zelda games. Then replace the straight line drawing parts with recursive calls to get the full function. For a related figure, draw a square and recursively call the function with smaller n and d values. The reduction case is to divide the interval into two halves. Compose a recursive program ruler along these lines, or a recursive program that computes the value of ln n!.

The Towers of Hanoi puzzle has one rule: never place a disc on a smaller one. A solution of the three-rod four-disk problem follows directly from recursion.

Our flood fill function has four recursive calls each time it is called: it calls floodfill on the pixel to its right, left, down, and up direction. So not only does this simple lazy-zombie principle cause a chain reaction of zombies, it also causes this chain reaction to stop when a cat is encountered. A program that implements our room counting function works the same way.

The Fibonacci pattern we see in the rabbit problem is that each cohort or generation remains as part of the next, and in addition, each grown-up pair contributes a baby pair (assume that all months are of equal length). The male ancestors in each generation form a Fibonacci sequence, as do the female ancestors, as does the total. In nature, the breeding interval varies randomly but within a certain range according to external conditions, like temperature, availability of nutrients and so on. This pattern turned out to have an interest and importance far beyond what its creator imagined, and the theory has grown over the years into a vital 20th century tool for science and social science.

One notation for a sequence is to write down a general formula for computing the nth term as a function of n, enclose it in parentheses, and include a subscript indicating the range of values that n can take; other ways to denote a sequence are discussed after the examples. Doubling at every step gives a doubling sequence. In Pascal's triangle, notice that in each row, the second number counts the row; a combination is a subset of the n elements, independent of order. For generating-function solutions, the usual trick is to find a closed form expression for B(x) and tweak it: the recurrence works because it transforms the n-1 terms into xB(x), the n-2 terms into x^2 B(x), etc. For example, we might define a class Pair containing two long integers.
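The flood-fill behaviour described above (four recursive calls per pixel, with the chain reaction stopping at boundaries) can be sketched on a toy character grid. This is an illustrative reconstruction, not the downloadable room-counting program:

```python
def flood_fill(grid, row, col, fill, target):
    """Recursively fill a connected region of `target` cells with `fill`,
    spreading right, left, down, and up: four recursive calls per cell."""
    if row < 0 or row >= len(grid) or col < 0 or col >= len(grid[0]):
        return  # off the grid: a base case, no further recursion
    if grid[row][col] != target:
        return  # hit a wall or an already-filled cell: the other base case
    grid[row][col] = fill
    flood_fill(grid, row, col + 1, fill, target)  # right
    flood_fill(grid, row, col - 1, fill, target)  # left
    flood_fill(grid, row + 1, col, fill, target)  # down
    flood_fill(grid, row - 1, col, fill, target)  # up

# A 3x3 room with a vertical wall of '#' down the middle.
room = [list(".#."), list(".#."), list(".#.")]
flood_fill(room, 0, 0, "*", ".")
print(room)  # only the left column is filled; the wall stops the spread
```

Starting from the top-left cell, the fill floods the left column and halts at the wall, exactly the "zombies stop at a cat" behaviour described above.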
Fibonacci Sequence - Recursive, Iterative, and Dynamic programming

Since the recursive algorithm is doing the same calculation repeatedly, it becomes slow as those recalculations pile up; this is something to take into account when we write recursive code. A novice programmer might implement this recursive function to compute numbers in the Fibonacci sequence:

def fib(n):
    if n == 0:
        return 0
    if n == 1:
        return 1
    return fib(n-1) + fib(n-2)

The Fibonacci sequence can be defined using a recursive rule along with two initial elements. The rule is that each element is the sum of the previous two elements, and the first two elements are 0 and 1. As can be seen from the Fibonacci sequence, each Fibonacci number is obtained by adding the two previous Fibonacci numbers together. The recursive definition for generating Fibonacci numbers is: f(n) = f(n-1) + f(n-2).
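The recalculation cost described above can be made visible by counting calls. A minimal sketch comparing the naive recursion with a memoized variant (the memoized version is our addition for comparison, not from the original article):

```python
calls = 0

def fib(n):
    """Naive recursion: recomputes the same subproblems many times."""
    global calls
    calls += 1
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

def fib_memo(n, cache={0: 0, 1: 1}):
    """Memoized (dynamic programming): each value is computed once, then reused."""
    if n not in cache:
        cache[n] = fib_memo(n - 1) + fib_memo(n - 2)
    return cache[n]

print(fib(10), calls)  # 55, reached after 177 calls
print(fib_memo(10))    # 55, with each subproblem solved only once
```

For fib(10) the naive version makes 177 calls; memoization cuts that to one computation per distinct argument, which is why the dynamic-programming version stays fast as n grows.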
While broadband access has become increasingly critical in the twenty-first century, the COVID-19 pandemic's stay-at-home mandates accelerated the necessity of such access. In light of this development, our current work explores the question: "Does a region's differential access to and quality of broadband influence the short-term impacts on employment during the COVID-19 pandemic?" We use econometric difference-in-difference methods in a county-level spatial analysis, paired with robust broadband measures that reach beyond the standard advertised speeds as provided by the Federal Communications Commission, including measures provided by Microsoft, the ACS and the FCC. Our initial findings show that when all else is equal, after COVID, counties with more than 50% population access to 25 Mbps download and 3 Mbps upload experience an increase of 1.23% in their unemployment rate over similar counties that have less than 50% access to broadband. We are now in the midst of exploring the potential mechanisms which could explain this. We think there is a strong opportunity for an undergraduate student to take ownership of several pieces of the current exploration, creating space for their own unique analysis within this work. These pieces include:

- Researching and understanding the Ookla and MLab datasets, which are relatively new and evolving datasets in this space, and provide a measured assessment of broadband access throughout the United States.
- Learning how to use the Ookla and MLab APIs in order to collect the most up-to-date data from these data sources. This would include pulling access at the county and census-tract level for 25 Mbps/3 Mbps and 100 Mbps/10 Mbps where possible.
- Learning how to use the census API in order to ingest census-tract data for demographic purposes.
- Learning the basic assumptions of difference-in-difference methods so as to be able to run the existing models using the newly ingested Ookla, MLab and census tract demographic data. We would provide the remaining additional data required to run this work. This approach would allow the student to take ownership over a specific, tangible portion of the project. While expected to be self-directing, the undergraduate student will work directly with a Ph.D. student, Nikki Ritsch, on a weekly basis, meeting as often as is needed for the work required. The student will also meet bi-weekly with the full research group in addition to attending individual meetings with the PI as needed. This work contributes to a new research area of rising importance on the impact that infrastructure has on social systems amidst extreme events, such as pandemics, and the broader implications therein. As such, it can help ascertain where to begin broadband expansion goals and may shed light on where the Emergency Broadband Benefit could be most effectively implemented in order to reduce inequity in broadband access. In this manner, this study treats broadband infrastructure as a “social sensor” for understanding where a lack of access amidst COVID-19 is most acute and, therefore, where to first target policy resources.
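In its simplest two-period, two-group form, the difference-in-difference estimator used in this study is just a difference of differences of group means. A sketch with invented county averages (the numbers below are illustrative only, not the study's data):

```python
# Hypothetical mean unemployment rates (%), pre- and post-COVID, for
# counties with and without majority 25/3 Mbps broadband access.
broadband_pre, broadband_post = 4.0, 9.2        # "treated" group
no_broadband_pre, no_broadband_post = 4.5, 8.5  # comparison group

# DiD estimate: the change in the treated group minus the change in the
# comparison group, netting out the common shock that hit both groups.
did = (broadband_post - broadband_pre) - (no_broadband_post - no_broadband_pre)
print(round(did, 2))  # a positive gap attributed to broadband access
```

The regression version adds county and time controls, but the identifying comparison is exactly this subtraction, which rests on the parallel-trends assumption mentioned in the bullet list above.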
URLs and the Semantic Web Written by: Allie Tatarian JSON-LD, RDF, and other technologies mentioned in the MetaSat primer are all part of the semantic web. The semantic web is a set of standards defined by the World Wide Web Consortium with the goal to make data on the web easily shareable and machine-readable. Right now, humans can connect with other humans through the world wide web, and humans can interact with data and software programs. However, it can be difficult for machines and programs to communicate with each other without a human directing the interaction. Semantic web technologies propose to make this task easier, which will help improve the distribution, proliferation, and interpretation of data. That being said, in order to share data effectively, it must be paired with excellent metadata. Metadata is data about data, such as where it is from, what it is measuring, who created it, and more. In order for metadata to be interoperable, metadata fields should be represented by unique, permanent identifiers. Unique identifiers are necessary and helpful to define ideas. Identifiers can vary a lot, and some things have multiple different kinds of identifiers. For example, you might be identified by your name, your address, or your national identification number (Social Security number in the US). Your name and address, however, while they might be useful in most cases, are not unique, permanent identifiers. There may be other people who have your same name, and other people who live at your address. Your name and address may change over time. But your Social Security number stays the same your entire life. Your name, address, and SSN are all identifiers, but only your SSN acts as a unique, permanent identifier. On the semantic web, metadata is structured according to a standard called the Resource Description Framework (RDF). The RDF model stores metadata in subject-predicate-object triples. 
For example, to describe the title of this book, the RDF triple might look like "This book (subject) has title (predicate) The Hunger Games (object)." This works a little differently than natural languages, so that sentence may sound a little weird, but it makes sense to machines that know RDF. However, computer programs do not know the definitions of words, so we have to tell them what they are. To do this, we can use URLs to represent both the thing we are describing (the book) and the metadata we are trying to record (the title). In this case, we could use the Goodreads link to represent the book and a term from schema.org to represent the title. The more machine-friendly RDF triple becomes "https://www.goodreads.com/book/show/2767052-the-hunger-games (subject) https://schema.org/name (predicate) The Hunger Games (object)" This is obviously not how most people usually use URLs to navigate the web. Why does this work, and how do machines use URLs differently than people do? What URLs are URL stands for "Uniform Resource Locator." It is a type of URI, or "Uniform Resource Identifier." A URI tells you what something is, and a URL tells you where to find it on the web. A URI can act as a unique identifier for a topic, since a URI acts as an identifier for a resource. A URL, on the other hand, is a location, telling your computer where to find a resource; it acts like an address that tells your browser or program where it can find something. Every URL is also a URI, which means that it is an identifier, but may or may not be a unique, permanent identifier. That's what you can see in the Venn diagram below: Every URL is a URI, but not every URI is a URL. When you input a URL into your web browser, your machine makes what is called a GET request to another machine called a server. Before it can make the GET request, however, the URL has to be converted into something the server understands: an IP address. Each URL is essentially a stand-in for an IP address. 
This is why multiple URLs can point to the same page: They can all stand-in for the same IP address. For example, both "nytimes.com" and "newyorktimes.com" point to the IP address assigned to the New York Times' website. This is called a redirect—a different URL than the "main" one can direct to the same page. This is how services like TinyURL and Bitly work—the URL they give you redirects to the original URL that you put in. When a server machine is given a GET request and an IP address, it then finds the proper webpage and returns it to the requesting machine. However, there is another piece of the puzzle that can complicate things. Not only can multiple URLs stand in for the same IP address, but one URL/IP address can actually point to more than one resource. This is possible because when your browser sends along an IP address for a GET request, it actually sends along additional information that the server may find helpful. For example, it may send along your browser's default language, so that the server can return a page in a language that you can read. This is called content negotiation. To sum up, with redirects, several URLs can point to the same page; with content negotiation, one URL can point to multiple web pages. RDF, metadata, and content negotiation What does content negotiation have to do with metadata on the web? Well, sometimes one URL can point to different types of pages based on whether a human or a program is viewing the page. For example, one URL can point to an HTML page if a human is viewing in a browser, but point to an RDF/XML page for a program that is looking for metadata. Here is an example of the same information in two different formats (in this case, the Library of Congress Subject Heading (LCSH) for "Library"): The first screenshot is the information in HTML, and the second is in RDF/XML. 
Crucially, these two pages both contain the exact same information and can be reached through the same URL (http://id.loc.gov/authorities/subjects/sh2002006395). If you check the link in a browser, it will probably show you the HTML page, because browsers prefer HTML. However, if a program that prefers RDF/XML were to follow the same link, it will be shown the RDF/XML page. (As a side note, this page also exhibits redirects. At the top of the HTML page, you can see that there are three URIs that point to this page: http://id.loc.gov/authorities/subjects/sh2002006395, info:lc/authorities/sh2002006395 (a URI that is not a URL), and http://id.loc.gov/authorities/sh2002006395#concept) Programs like APIs that use linked data prefer formats such as RDF/XML, N-Triples, and JSON to HTML because they are much more structured and predictable than HTML. An HTML page has a lot of information in it that is not the information of interest to a program. For example, the LCSH "Library" page has a header, footer, search bar, navigation bar, and many links full of information that an API will probably not need. The HTML page also includes information about formatting that is important for the information to be clear to a human reader, but that will be superfluous to a program. Formats like RDF/XML strip out all of the unnecessary information so that a program will only see the data that it needs. Additionally, these non-HTML formats are often standardized, so a program will know exactly where and how to look for the data that it needs. The process a program makes when choosing between different files at the same web address is illustrated in this chart from the W3C: First, the inputted URI is redirected to the address where the main web page lives, and then the program uses content negotiation to choose between an HTML file and an XML file. 
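The server-side choice in that chart can be sketched as a small dispatch on the Accept header. This is a simplified model for illustration, not the actual id.loc.gov implementation; real servers also weigh q-values and support more media types:

```python
def choose_representation(accept_header):
    """Pick which file to serve for one URL, based on the client's
    Accept header (simplified content negotiation)."""
    formats = {
        "text/html": "library.html",
        "application/xhtml+xml": "library.html",
        "application/rdf+xml": "library.rdf.xml",
        "application/ld+json": "library.jsonld",
    }
    for part in accept_header.split(","):
        media = part.split(";")[0].strip()  # drop q-values (simplified)
        if media in formats:
            return formats[media]
    return "library.html"  # sensible default when nothing matches

print(choose_representation("text/html,application/xhtml+xml;q=0.9"))  # browser
print(choose_representation("application/rdf+xml"))                    # linked-data client
```

A browser asking for HTML and a harvester asking for RDF/XML hit the same URL but receive different representations, which is exactly the behaviour described above.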
Linked data vocabulary URLs help make structured data even more useful to programs and machines, because they give a standard definition of a topic. For example, an API does not know what "title" means, and different databases may use different terms than "title" (such as "name" or "245" for MARC records). For clarity, a linked-data database may instead use a term like https://schema.org/name, which has a standard definition that will not vary and can be used in the same way throughout different databases. Ultimately, using linked data vocabularies and excellent metadata standards is necessary for creating 5-Star Open Data, the gold standard for shareable data on the web. The MetaSat team recommends storing linked data using JSON-LD, a flexible form of RDF that is easy for humans to write and machines to read. For more on linked data and JSON-LD, see our RDF and JSON-LD primers.
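For instance, the Hunger Games triple from earlier in this piece, serialized as JSON-LD, could look like the following. This is a sketch: the @context and the name property follow schema.org as described above, and the Goodreads URL stands in as the subject's identifier:

```json
{
  "@context": "https://schema.org",
  "@id": "https://www.goodreads.com/book/show/2767052-the-hunger-games",
  "name": "The Hunger Games"
}
```

Any JSON-LD processor can expand this back into the subject-predicate-object triple, which is what makes the format both human-writable and machine-readable.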
Sum of all the digits from 1 to 10000 I need to add up all the digits in the numbers $1$ to $10000$. How would I do that without using a calculator? I don't get it one bit. We are doing something in our math class about this. 10000*10001/2 = 50005000 can you do 1 through 10? is there a pattern? @Kaynex: all the digits, not all the numbers. Is the question about the sum of all the numbers or the sum of all the digits in the numbers ? Possible duplicate of Proof for formula for sum of sequence $1+2+3+\ldots+n$? As I understand it we are not told to sum up all the numbers from $1$ to $10\,000$ but the decimal digits of these numbers. Taking the numbers from $0000$ to $9999$ instead we have $10\,000$ numbers having four digits with an average value of $4.5$. The sum of all appearing digits therefore is $10\,000\cdot4\cdot 4.5=180\,000$. Add $1$ to this for the single number $10\,000$, and obtain $180\,001$ as final result. Numberphile did a video on this and it looks like they used a similar method to what you outlined. https://youtu.be/Dd81F6-Ar_0 First note that the sum of $0$ through $9$ is $45$. Also note that from $0000$ to $9999$, each digit appears exactly $1000$ times, for each of the $4$ positions. Therefore, the sum of all digits in the numbers $1$ through $10000$ is $$45\cdot 4\cdot 1000 + 1=180001$$ I hope this helps you intuitively understand. How each digit appears 1000 times? e.g. for first position, each digit appear only once. How many times does a 1 appear in the last digit? How many times does a 2 appear in the last digit? Etc. How many times does a 1 appear in the 10's position? How many times does a 2 appear in the 10's position? Etc. How many times does a 1 appear in the 100's position? How many times does a 2 appear in the 100's position? Etc. How many times does a 1 appear in the 1,000's position? How many times does a 2 appear in the 1,000's position? Etc. How many times does a 1 appear in the 10,000's position? (Answer: Exactly once.) 
How many times does a 2 appear in the 10,000's position? (Answer: never.) Answer those questions.

Hint: First add all digits up to $9999$.

Hint 2: Ignore the zeroes.

Hint 3: Each digit $1-9$ appears how many times in each of the four possible positions?
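The counting argument in these answers is easy to verify by brute force:

```python
# Sum every decimal digit of every number from 1 to 10000.
total = sum(int(digit) for n in range(1, 10001) for digit in str(n))
print(total)  # 45 * 4 * 1000 for 0000-9999, plus 1 for the number 10000
```

This agrees with the closed-form answer of $180\,001$ derived above.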
Regex: problem creating matching pattern

Having some problems figuring out the regex to match this:

    function Array() { [native code] }

I'm trying to only match the text that will occur where "Array" is.

I think that your question is very unclear.

That's my string. The text where "Array" is will change. So I'm trying to match that.

So you are trying to match text between "function" and "()"?

I think you should eventually accept Helephant's answer (since you already said that it solved your problem).

Are you trying to find out what type a variable is in javascript? If that's what you want, you can just compare the object's constructor to the constructor that you think created it:

    var array = new Array();
    if(array.constructor == Array)
        alert("Is an array");
    else
        alert("isn't an array");

This isn't really the best way to go about things in javascript. Javascript doesn't have a type system like C# does that guarantees you that a variable will have certain members if it's created by a certain constructor, because javascript is a pretty dynamic language and anything that an object gets from its constructor can be overwritten at runtime. Instead it's really better to use duck typing and ask your objects what they can do rather than what they are: http://en.wikipedia.org/wiki/Duck_typing

    if(typeof(array.push) != "undefined") {
        // do something with length
        alert("can push items onto variable");
    }

This actually helped me solve the problem I was attempting. Thank you for reading through what I was asking and seeing my intention.

In Perl, you'd use:

    m/^\s*function\s+(\w+)\s*\(/;

The variable '$1' would capture the function name. If the function keyword might not be at the start of the line, then you have to work (a little) harder. [Edit: two '\s*' sequences added.]

Question about whether this works... here's my test case.

Test script:

    while (<>) {
        print "$1\n" if (m/^\s*function\s+(\w+)\s*\(/);
    }

Test input lines (yes, deliberately misaligned):

    function Array() {
        ...
    }
        function Array2 () {
        ...
    }
    func Array(22) {
        ...
    }

Test output:

    Array
    Array2

Tested with Perl 5.10.0 on Solaris 10 (SPARC). I don't believe the platform or version is a significant factor - I'd expect it to work the same on any plausible version of Perl.

Is it that hard to add a \s* to the beginning? And might it not be a bad idea (though I don't know JavaScript) to add a \s* between the (\w+) and the ( ?

Since everybody is nitpicking already ;) Why include the ^ and the \s* at all? There is no need for this constraint, a simple \bfunction to indicate a word boundary should do it. The \s* at the end is also unneeded - why care at all what follows the match?

The caret avoids problems with function appearing in strings or comments or... It is not perfect. The \s* at the end allows spaces after the name and before the open parenthesis. If there's no risk of ambiguity ('function Ambiguous works'), then OK.

The problem scope is not completely clear. I've been trying this, and not getting it working. It still matches 'function'.

@Tomalak - I wouldn't want the regex to match "function Bad Code()", where someone accidentally wrote a function name with a space. It should look for the parenthesis to make sure the function at least has decent syntax. This is just my opinion, though.

So subsequently, I've gotten this which almost works:

    [function\s]((\S+)+)(?=\(\))

However, it still matches the space before Array
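For completeness, the same capture works in Python's regex flavour. A sketch against the original string from the question:

```python
import re

s = "function Array() { [native code] }"

# Same pattern as the Perl answer: optional leading whitespace, the
# keyword, the name captured as \w+, optional space, then "(".
match = re.match(r"\s*function\s+(\w+)\s*\(", s)
print(match.group(1))  # Array
```

Because \w+ cannot match a space, this also avoids the "space before Array" problem in the attempted pattern above.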
package org.innovateuk.ifs.assessment.builder; import org.innovateuk.ifs.application.domain.Application; import org.innovateuk.ifs.assessment.domain.Assessment; import org.innovateuk.ifs.assessment.domain.AssessmentApplicationAssessorCount; import org.junit.Test; import java.util.List; import static org.innovateuk.ifs.application.builder.ApplicationBuilder.newApplication; import static org.innovateuk.ifs.assessment.builder.AssessmentApplicationAssessorCountBuilder.newAssessmentApplicationAssessorCount; import static org.innovateuk.ifs.assessment.builder.AssessmentBuilder.newAssessment; import static org.junit.Assert.assertEquals; public class AssessmentApplicationAssessorCountBuilderTest { @Test public void buildOne() throws Exception { Application application = newApplication().build(); Assessment assessment = newAssessment().build(); int assessorCount = 10; AssessmentApplicationAssessorCount count = newAssessmentApplicationAssessorCount() .withApplication(application) .withAssessment(assessment) .withAssessorCount(assessorCount) .build(); assertEquals(application, count.getApplication()); assertEquals(assessment, count.getAssessment()); assertEquals(assessorCount, count.getAssessorCount()); } @Test public void buildMany() throws Exception { Application[] applications = newApplication().buildArray(2, Application.class); Assessment[] assessments = newAssessment().buildArray(2, Assessment.class); Integer[] assessorCounts = {10, 20}; List<AssessmentApplicationAssessorCount> counts = newAssessmentApplicationAssessorCount() .withApplication(applications) .withAssessment(assessments) .withAssessorCount(assessorCounts) .build(2); assertEquals(applications[0], counts.get(0).getApplication()); assertEquals(assessments[0], counts.get(0).getAssessment()); assertEquals(assessorCounts[0].intValue(), counts.get(0).getAssessorCount()); assertEquals(applications[1], counts.get(1).getApplication()); assertEquals(assessments[1], counts.get(1).getAssessment()); 
assertEquals(assessorCounts[1].intValue(), counts.get(1).getAssessorCount()); } }
pentium 4 question

Posted 17 April 2002 - 04:18 AM
could someone please shed some light on the pentium 4 series ?? the 1.6 ghz and above come in several flavours : 423/478 pin , 0.13/0.18 micron and with 256/512 cache... what is the difference ? is it that important to the home user ?? (Games\divx) and how come one motherboard, the intel 850, supports them all ?? (does it really ??)

Posted 17 April 2002 - 07:41 AM
The 423- and 478-pin versions perform the same. They make different flavours of motherboard with the 423 and 478 sockets, the 478 being the newer one.

Posted 17 April 2002 - 08:30 AM
What I am saying is that in addition to the "flavors" of the P4 are the species of motherboards which support the P4 in different ways and with different memory solutions: 850 (rambus), Intel-DDR, SIS, and VIA.

Posted 17 April 2002 - 12:04 PM
Socket 423, 0.18 Micron, 256k cache - 1.3Ghz to 2.0Ghz available (in .1Ghz jumps)
Socket 478, 0.18 Micron, 256k cache - 1.5Ghz to 2.0Ghz available (in .1Ghz jumps)
Socket 478, 0.13 Micron, 512k cache - 1.6Ghz, 1.8Ghz, 2.0Ghz, 2.2Ghz, 2.4Ghz available

Socket 423 is already dead; there will be no CPU faster than the currently available 2.0Ghz in this format. S423 CPU's, despite the name, are actually bigger than the S478 ones. Socket 478 CPU's on 0.18 Micron are coming to the end of their life. No new CPU's are being produced and PC manufacturers will just be ridding themselves of now old stock. Socket 478 CPU's on 0.13 Micron all carry the Northwood "A" designation. If the CPU speed is followed by an "A", for example 1.8a, then it will be on the 0.13 Micron process and have 512k cache. New CPU's due soon will carry the Northwood "B" designation. These too will be S478, 0.13 Micron, 512k cache; however they will use a quad-pumped 133Mhz FSB as opposed to the quad-pumped 100Mhz FSB on ALL current P4's.
With the iPhone X, Apple introduced face recognition to the iOS operating system. Face recognition is a form of biometric authentication that can be used to secure the phone or to safeguard different operations on the phone. The face recognition technology on the iPhone X is a major step forward in bringing this software to consumer products and other mobile designs. Creative digital agencies that have attempted to release this innovation in the past have had trouble with reliability and security, and it has been a long journey to develop this new feature for iOS. With that in mind, we’re taking a look at some of the developments that have helped to make this the new standard for mobile security. Apple’s FaceID works by building a 3D model of the face. It then compares that model to a known template of the user’s face and gives it a score based on the similarity of the two models. Depending on the score, the device will then provide or deny access to the person holding the phone. Since building a 3D model of the face is one of the keys to face recognition, the first hurdle is to provide the phone with the ability to accurately determine depth. One method that has been used in the past is to analyze RGB values, but this technology does not perform well in adverse lighting. In bright sunlight, the sensors get overwhelmed with light and in low-light conditions, there is a loss of pixel information. Earlier versions of the iPhone used stereo cameras to determine depth in an image. The phone would compare the two images from the stereo cameras to create a disparity map, and this would allow the device to determine the depth of objects in a photograph. With the iPhone X, the phone uses structured light cameras to determine depth. With the structured light camera, thousands of infrared dots are projected onto the surface. The phone then uses factors like the time of flight to create a 3D map of the face. 
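The scoring step described earlier, comparing a fresh 3D scan against the enrolled template, can be caricatured with plain vectors. This is a toy sketch of similarity scoring, not Apple's actual FaceID algorithm; the embeddings and the threshold below are invented for illustration:

```python
import math

def similarity(a, b):
    """Cosine similarity between two face-embedding vectors:
    1.0 means identical direction, near 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

template = [0.9, 0.1, 0.4]       # enrolled face (hypothetical embedding)
scan_owner = [0.88, 0.12, 0.41]  # a new scan of the same face
scan_other = [0.1, 0.9, 0.2]     # a different person

THRESHOLD = 0.95                 # invented cut-off
print(similarity(template, scan_owner) > THRESHOLD)  # unlock
print(similarity(template, scan_other) > THRESHOLD)  # stay locked
```

The real system scores a learned 3D representation rather than a three-number vector, but the accept/reject decision against a threshold works on the same principle.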
This is a method that can reliably determine depth, and it can also work well under adverse lighting. The iPhone X needs to be able to reliably compare face templates for the facial recognition to work. It also needs to be able to do this quickly for it to be a service that people will want to use. In the past, this was a major challenge for face recognition systems. Thanks to advances in neural networks, these problems are being solved. AlexNet was one such development that helped to demonstrate the capabilities of neural networks for image classification. With this Convolutional Neural Network, researchers were able to achieve a high level of accuracy for visual recognition. Convolutional neural networks are going to be the key to bringing technologies like face recognition and augmented reality to the next level. Recognizing this fact, manufacturers are now in competition to develop the processors that are going to power the Deep Neural Networks of the future. To power the iPhone X, Apple developed a custom GPU. To run the complex algorithm behind FaceID, the GPU will use a “Neural Engine”. This is a pair of processing cores that are tuned to perform a range of algorithmic functions including the operations behind the FaceID system. These technologies have helped to make face recognition a reality for a handheld device, but they can be applied to much more. With the advanced sensors and the application of Deep Neural Networks, we are going to see developers looking for new and inventive ways to integrate these technologies in with their apps for the ultimate, secure experience. Written by Serena Garner, Guest contributor.
Tensorflow Estimator InvalidArgumentError

I'm trying to find a way to find and fix a bug in my TF code. The snippet of code below succeeds at training the model, but generates the following error when the last line (model.evaluate(input_fn)) is called:

    InvalidArgumentError: Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
    /var/folders/kx/y9syv3f91b1c6tzt3fgzc7jm0000gn/T/tmp_r6c94ni/model.ckpt-667.data-00000-of-00001; Invalid argument
    [[node save/RestoreV2 (defined at ../text_to_topic/train/nn/nn_tf.py:266) = RestoreV2[dtypes=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_INT64], _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_save/Const_0_0, save/RestoreV2/tensor_names, save/RestoreV2/shape_and_slices)]]

    Caused by op 'save/RestoreV2', defined at:
      File "/Users/foo/miniconda3/envs/tt/lib/python3.6/runpy.py", line 193, in _run_module_as_main

The exact same code works when used with the MNIST data set, but doesn't work when used with my own dataset. How can I debug this, or what could be the cause? It seems the graphs don't match after the model is restored from a checkpoint, but I'm not sure how to proceed to fix this. I've tried with TF versions 1.11 and 1.13.

    model = tf.estimator.Estimator(get_nn_model_fn(num_classes))

    # Define the input function for training
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x=X_train, y=y_train,
        batch_size=batch_size, num_epochs=None, shuffle=True)

    # Train the Model
    model.train(input_fn, steps=num_steps)

    # Define the input function for evaluating
    input_fn = tf.estimator.inputs.numpy_input_fn(
        x=X_test, y=y_test,
        batch_size=batch_size, shuffle=False)

    # Use the Estimator 'evaluate' method
    e = model.evaluate(input_fn)

This error often happens when you modify some part of the graph, e.g.
change the size of hidden layers or remove/add some layers, and the estimator tries to load the previous checkpoints. You have two options to fix the issue:

1) Change the model directory (model_dir):

    config = tf.estimator.RunConfig(model_dir='./NEW_PATH/')  # new path
    model_estimator = tf.estimator.Estimator(model_fn=model_fn, config=config)

2) Delete the previously saved checkpoints in the model directory (model_dir).

Are you sure the graph is untouched? Also be sure the new dataset has the same data type as before. If you previously loaded float numbers for the inputs, in the new dataset they should be float numbers too.

Thank you for your reply. I'm pretty sure the graph is untouched - the code you see above is all I have (it fails on the last line). The issue doesn't occur for the MNIST dataset, but only for another dataset. It could indeed be related to data types, though so far I wasn't able to identify exactly where the problem occurs. I will look into this some more.

@foobar Can you mention the "another dataset"? I tested with fashion_mnist and it was just fine.

It's a dataset of tf-idf features generated by sklearn.feature_extraction.text.TfidfVectorizer => (<14953x972981 sparse matrix of type '<class 'numpy.float64'>' with 10352907 stored elements in Compressed Sparse Row format>). I then convert it to an array (as required by tf.estimator.inputs.numpy_input_fn) like so - data.toarray() - so data.shape is (14953, 972981). It's quite a large number of features.

I have found a limit on the number of features - anything below ~16k is fine, anything above generates this error. Of course I had way too many features, but it's still kind of strange that there's a limit, and the type of error it triggers.

@foobar Did you increase the size of queue_capacity in numpy_input_fn? That would probably fix your issue.

I didn't know about that option, thanks! It didn't fix the issue though, unfortunately (tried with a few options up to 100k with 17k features). For now I'm just using fewer features.
Thank you for your help!
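A sketch of the two fixes suggested in the answer above (a fresh model_dir per run, or wiping stale checkpoints), using only the standard library; the directory names are hypothetical, and the resulting path would then be handed to tf.estimator.RunConfig(model_dir=...):

```python
import os
import shutil
import time

def fresh_model_dir(base="tf_models"):
    # Option 1: a new model_dir per run, so the estimator never tries to
    # restore a checkpoint written by a differently shaped graph.
    path = os.path.join(base, time.strftime("run_%Y%m%d_%H%M%S"))
    os.makedirs(path, exist_ok=True)
    return path

def clear_checkpoints(model_dir):
    # Option 2: delete the previously saved checkpoints before retraining
    # a model whose graph has changed.
    if os.path.isdir(model_dir):
        shutil.rmtree(model_dir)
    os.makedirs(model_dir)
```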
Customize Your Email Signature And Profile

Each Infusionsoft user has a profile record. The profile details are divided into two levels of tabbed sections. The profile information is displayed in the top level of tabs. The second level of tabs displays active follow-up sequences, email accounts set up in Infusionsoft, and active plug-ins.

Customize Your User Profile

- Click on Edit My Profile in the user toolbar to customize or update your profile.

  Pro Tip! If you need to update another user's profile, go to Admin > Users in the master nav and click on the user name you wish to customize. Note that each user sets up a basic profile during their first log in. You should review and customize each profile further.

- Edit the profile information in each tab. Click the Save button when you are finished making changes.

Enter or edit your contact information into the fields on this page. The first name, last name, and email address fields are required. The other fields are optional, but it is best to make each user profile as complete as possible.

Preferences: Increase efficiency by setting up preferences that correspond with job responsibilities. Make sure each user has quick access to the most relevant information to perform their daily tasks within Infusionsoft.

- Default Tab on Contact Second Row: This setting controls which tab shows up first when you view the lower portion of a contact. The Tasks tab is the default view. Use this setting to ensure the most relevant history is displayed for each user's role (i.e., a sales rep needs to see opportunities).

- Default Search Type: This setting controls the default search type for the quick search box located at the top right of your Infusionsoft application. Select from Contact, Company, Task/Appt/Note, Order, Subscriptions, or Opportunity.
- (Optional) Default Start Page: This setting controls the first page a user sees when they sign in to Infusionsoft. Navigate to the page you want to use as your home page and copy the URL beginning with /Admin/ from a different page to override the default (e.g., /Admin/myFiles.jsp). The Infusionsoft home page is the default.

- (Optional) Default Search View for Contacts and Opportunities: This setting controls the way you view lists of contacts and opportunity records. Interactive View increases efficiency when working and updating lists. You can change this to grid view if you prefer to view more records per screen and align the data into spreadsheet-style columns.

- (Optional) Signature at Top of Reply: This setting controls the location of your signature in email replies sent by Infusionsoft. Skip this if you are using the Outlook Plug-In or a different email program to check your email. This is set to no by default. Set it to yes if you are using the Infusionsoft email client to check and reply to email messages.

- Default Calendar View: This setting controls the number of days displayed on your calendar. It is set to day by default. Select from day, week, or month.

- Default Start and End Hour: This setting controls the daily time range displayed on your calendar. Adjust these settings if a user works non-standard hours.

- Time Zone: Your time zone will be auto-detected when you first create your Infusionsoft account.

Signatures Tab

Important Note! This step references a legacy feature. The current email builder does not utilize this legacy signature feature.

Customize your plain text and HTML signatures. These signatures can be merged into email templates. The user signature merge field makes it faster to update the signature when contact information changes. Instead of editing each email individually, you can edit your email signature. It will automatically update in all of the email templates where the signature merge field is used.
Enter a text and an HTML signature so you can use the merge fields in any type of email template.

- Text Signature: This signature is merged into plain text emails. Plain text emails do not accept images or text formatting (i.e., font, bold, color).

- HTML Signature: This signature is merged into HTML emails created using the drag & drop builder. You can style the HTML signature by adding images or formatting the text (i.e., font, bold, color).

Notes Tab

(Optional) Add notes about a specific user.

User Groups Tab

Assign each user to one or more job-related user groups.

- Accounting: Assign a user to the accounting group if their job responsibilities include any of the following: managing products, creating or updating the shopping cart and order forms, or monitoring order reports. This group's access is the same as the order manager group.

- Admin: Assign a user to the admin group if their job responsibilities include advanced Infusionsoft administration (e.g., importing, setting up system defaults, managing users). Users in the admin group have access to all areas of Infusionsoft and view all of the available reports. They will be able to use your Infusionsoft system without restriction.

- Marketing Manager: Assign a user to the marketing manager group if their job responsibilities include creating campaigns, tracking lead sources, and monitoring campaign-related marketing reports.

- Order Manager: Assign a user to the order manager group if their job responsibilities include any of the following: managing products, creating or updating the shopping cart and order forms, or monitoring order reports. This group's access is the same as the accounting group.

- Sales Manager: Assign a user to the sales manager group if their job responsibilities include creating and updating sales pipeline stages in the opportunity module, assigning opportunities to sales reps, monitoring sales rep activity, or reviewing sales reports.
- Sales Rep: Assign a user to the sales rep group if their job responsibilities include contacting leads and tracking the sales process through opportunities (e.g., adding notes, moving sales stages). Users in the sales rep group can create and be assigned to an opportunity.
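The group descriptions above amount to a permission lookup; a hypothetical sketch (the permission names are illustrative and not Infusionsoft's actual access model):

```python
# Accounting and Order Manager are documented as having identical access,
# so they share one permission set. Admin is unrestricted.
ORDER_ACCESS = {"manage_products", "edit_shopping_cart", "view_order_reports"}

GROUP_ACCESS = {
    "Accounting": ORDER_ACCESS,
    "Order Manager": ORDER_ACCESS,   # same access as the accounting group
    "Admin": {"all"},                # unrestricted
    "Marketing Manager": {"create_campaigns", "track_lead_sources",
                          "view_marketing_reports"},
    "Sales Manager": {"edit_pipeline_stages", "assign_opportunities",
                      "view_sales_reports"},
    "Sales Rep": {"contact_leads", "update_opportunities"},
}

def can_access(groups, permission):
    # A user in several groups gets the union of their groups' access.
    return any(permission in GROUP_ACCESS.get(g, set())
               or "all" in GROUP_ACCESS.get(g, set())
               for g in groups)
```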
Browsing through the program for the RSA Conference next week, I see a talk by Bruce Schneier entitled "Why security has so little to do with security". The title certainly resonates with me. The word "security", in an information security context, is often used in an all-encompassing sense which includes management (as in access management, identity management), business continuity, defense against denial-of-service, and encryption. This goes beyond what "security" tends to mean in everyday English. But sometimes, in information security, "security" is just used to mean encryption [partly as a result of Bruce Schneier's Applied Cryptography book]. And this narrow cryptography definition is much narrower than how "security" is used in everyday English. I think language is the issue here. I find it interesting that in German, the words for "security" and "certainty" (Sicherheit, literally "sureness") are the same. In French, the words for "safety" and "security" are the same (sûreté, again literally "sureness"). So, in those languages, "security" has a broad definition, incorporating senses of dependability, management, and safety. I can see how the French and German words fit with the broad information security concepts of business continuity, "management" (access management, identity management), and "safety" that users (and their data) will be protected. This had been a pet theory of mine for a while, but then I read something similar in the BBC's "Letter from Europe" column last month: A friend and colleague who is annoyingly fluent in half a dozen languages notices the growth of something he calls "Brussels English". One example he gives is the persistent use of "security" to mean "safety", perhaps because in French and German they are the same word. This habit has evidently spread to England too. He cites an example at Waterloo Station, which requests that people put their hot drinks down while going through the ticket barrier "for their own security".
But surely it is their safety, not security, that is at risk? But that sets me musing on whether this is a reaction to a rather modern use of the word "security" in English. When did it first acquire its current meaning in English? Wartime? When did "security guards" first enter the language? In XML security, Vordel's area, the security we provide goes much beyond cryptography, into the areas of management (access control, reporting on traffic), availability and dependability (monitoring service level agreements), and safety (ensuring data is protected). That is, encompassing the broader French and German meanings of "security" than just the more narrow English language usage. I'll try to get there early to get a seat for Bruce Schneier's talk. Usually I end up sitting on the floor near the door, since his RSA talks are always very popular.
User interface design (UI design) is the practice of designing interfaces that fulfill user requirements. Increasing numbers of websites are developing new types of user interface design, taking advantage of users' increasing levels of Internet sophistication and faster connections. These new interfaces often allow users to view and manipulate large quantities of data. User interfaces should be designed iteratively in almost all cases, because it is virtually impossible to design a user interface that has no usability problems from the start. Typography, symbols, color, and other static and dynamic graphics are used to convey facts, concepts, and emotions. If you do develop a totally innovative user interface design, be sure to carry out usability testing before launching.

Fundamental Principles of User Interface Design

1. Identify your user's needs

Your user's goals are your goals; learn about your user's skills and experience, and what they need. Find out what interfaces they like, and sit down and watch how they use them. By focusing on your user first, you will be able to create an interface that lets them achieve their goals.

2. Pay attention to patterns

By using familiar UI patterns, you will help your users feel at home.

3. Stay consistent

Language, layout, and design are just a few interface elements that need consistency. A consistent interface enables your users to have a better understanding of how things will work, increasing their efficiency.

4. Use visual hierarchy

Design your interface in a way that allows the user to focus on what is most important. The size, color, and placement of each element work together, creating a clear path to understanding your interface.

5. Provide informative feedback

Always inform your users of actions, changes in state, and errors or exceptions that occur.
Visual cues or simple messaging can show the user whether his or her actions have led to the expected result.

6. Be forgiving: permit easy reversal of actions

Your UI should allow for and tolerate user error. Design ways for users to undo actions, and be forgiving with varied inputs. Also, if the user does cause an error, use your messaging as a teachable situation by showing what action was wrong, and ensure that she/he knows how to prevent the error from occurring again.

7. Empower your user

Once a user has become experienced with your interface, reward him/her and take off the training wheels. Providing more abstract ways, like keyboard shortcuts, to accomplish tasks will allow your design to get out of the way.

8. Speak their language

Provide clear and concise labels for actions and keep your messaging simple. Your users will appreciate it.

9. Keep it simple

Whenever you are thinking about adding a new feature or element to your interface, ask the question, "Does the user really need this?" or "Why does the user want this very clever animated gif?" Are you adding things because you like or want them?

10. Keep moving forward

"Keep moving forward" is a key principle in UI design. So go ahead: follow an iterative prototyping process toward an error-free user interface design.
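Principle 6, permitting easy reversal of actions, is often implemented as an undo stack: every action records how to reverse itself. A minimal sketch:

```python
class UndoStack:
    def __init__(self):
        self._undos = []

    def do(self, action, undo):
        # Perform the action and remember how to reverse it.
        action()
        self._undos.append(undo)

    def undo(self):
        # Be forgiving: undoing with nothing to reverse is a no-op,
        # not an error.
        if self._undos:
            self._undos.pop()()

doc = []
ui = UndoStack()
ui.do(lambda: doc.append("hello"), lambda: doc.pop())
ui.do(lambda: doc.append("world"), lambda: doc.pop())
ui.undo()
# doc is now ["hello"]
```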
Preventing advertisers and analytics firms from tracking one's progress around the Web has become something of a hot-button issue. Microsoft's approach to this is two-fold. Tracking Protection Lists (TPL) allow users to opt into lists published by privacy organizations to block such tracking mechanisms. The company has also slipped in a new mechanism just in time for release, too; if any TPL is in use, the browser will also send the Do Not Track header, also being sent by Firefox 4. The approach of sending the header while using a TPL does not seem ideal; different TPLs might block different kinds of tracking (for example, one might block advertisers, another might block analytics), but the Do Not Track header will be sent to everyone, regardless of the intent of any installed TPLs. That's not an issue now, as the header doesn't have any real meaning, but it could become an issue in the future.

Beyond that, the inclusion of ActiveX blocking will be welcome to those who dislike Flash but have to keep it installed for compatibility. It's simple, but effective.

Some new trust features will only come into their own now that the browser has been released. In particular, the browser now attempts to warn about unsafe downloads. Any application that is downloaded has a reputation. If lots of people download the same program, it's probably safe, so it has a good reputation; if you're the only one to ever download it, it has a much higher chance of being something nasty, so its reputation is bad. Or at least, that's the thinking. Attempts to download programs with bad reputations will earn an additional warning. The true value and efficacy of this system will only really become clear once the browser is in wide use.

A new development process

With these goals came a new way of developing the browser.
Instead of producing a beta or two and then perhaps a release candidate, in March 2010 Microsoft said that it was going to release what it called "Platform Previews" every eight weeks or so. These previews would have the underlying improvements to the browser's core, giving Web developers the opportunity to experience both the greater performance and greater standards compliance that each new preview provided, but didn't come with any real browser interface. This site has long criticized Microsoft's browser development approach. The combination of infrequent releases and relative lack of access during beta periods made it difficult for developers to gauge the direction that the company was headed in, and so it was hard to provide timely, relevant feedback. Nor were we confident that the preview releases would do enough to redress this issue. Now that IE9 has shipped, it's fair to say that the previews did the job admirably. Microsoft showed steady progress, introducing substantial new features such as support for the HTML5 video and audio tags, canvas graphics, and WOFF fonts. Each new release made great strides in performance, too. Many thousands of bugs were filed against the browser, and they were all examined and addressed (though not necessarily fixed). Microsoft says that the bugs that were filed were high quality, too, with something like 50 percent of issues raised proving to be legitimate. In previous Internet Explorer development periods, the beta release would be the first opportunity to actually test the browser's rendering engine. With IE9, we already knew that the engine would be high quality thanks to the preview program. With the previews, Microsoft has shown that not only can it develop a high quality browser; it can also do so in a way that effectively engages with the community. The company has also provided a solution of sorts to the desire to test and experiment with more unstable specifications. 
Prototype implementations of features that are still in flux can be installed, giving developers the access they need to perform the experimentation they need to do, without running any risk of real sites actually depending on these features. These prototypes have been updated regularly, and their update schedule is governed by the frequency with which new drafts of their specifications are developed, rather than any fixed eight-week interval.

And now the bad bits

The biggest problem, and the biggest risk, faced by Internet Explorer 9 is that of compatibility. Not with websites—it does a great job there—but with operating systems. Because of its use of Direct2D and DirectWrite, which are only available on Windows Vista and Windows 7, it does not run, at all, on Windows XP. Though Windows XP's market share is declining on the back of strong corporate uptake of Windows 7, it's still the most common version of Windows. And it can't be used with Internet Explorer 9.

This wasn't a bad decision. The performance improvements made by the use of DirectWrite and Direct2D allow a new class of Web application to be developed. They greatly extend the range of what is possible and practical to do on a website. Platform security features that Internet Explorer 9 leverages also make the switch to more modern operating systems desirable. Some of the things that make IE9 a better browser are things that simply do not exist on Windows XP.

Nonetheless, it's plain that this will hamper adoption of the new browser. Firefox 4 includes Direct2D (and, optionally, DirectWrite) on platforms that support it, but it will still run on Windows XP; on that operating system it falls back to software rendering. This makes it slower, certainly, on that operating system. But it will still work. Windows XP is declining, and it's understandable that Microsoft has chosen not to target a system that will be a decade old this October. But it does mean that Microsoft may struggle to win over users.
It's also a little disappointing that the 64-bit version is less polished than the 32-bit version. It can't be made the default browser, and it doesn't include the new, high-performance scripting engine. Microsoft has long argued that 64-bit browsing isn't necessary; most plug-ins are only 32-bit, and so, the argument goes, browsing must be a 32-bit activity. This is unfortunate. One, it leads to a certain chicken-and-egg problem: there's little incentive to develop 64-bit plug-ins since nobody uses a 64-bit browser due to the lack of plug-ins (though Adobe Flash 11 is likely to include first-class 64-bit support, resolving one of the big stumbling blocks). Making the 64-bit version first-class—the same features and performance as the 32-bit version—and ensuring that, at least, Microsoft's own plug-ins (such as Silverlight) were supported would go a long way towards making 64-bit browsing viable. This is, after all, much the same route as the company took with Office. The reason that 64-bit is desirable is particularly because it offers the potential to strengthen certain anti-hacking mechanisms. Address Space Layout Randomization (ASLR) depends on the ability to change the in-memory layout of things like DLLs. In a 32-bit process there are only a limited number of random locations that can be chosen. 32-bit processes are also more vulnerable to anti-ASLR measures such as "heap spraying" (wherein a large proportion of the browser's memory is filled with malicious code to make it easier for an attacker to trick the browser into executing it). 64-bit is by no means a panacea, but it does strengthen these protection systems. For something that is as frequently attacked as a Web browser, this kind of defense in depth is desirable. This is especially true as the 32-bit plug-in issue is not insurmountable. Safari on Mac OS X is a 64-bit process on suitable systems. It gets around the plug-in problem by running plug-ins in separate 32-bit processes. 
This is an approach that Microsoft could, and should, take. I suspect that IE9 will also struggle to win over the geek demographic. It's a very solid, effective browser, but the lack of "power" features (such as the richer tab handling, automatic session restoration, and extensive extension support) means that this community will likely be better-served by something like Firefox. Though such users are themselves a minority, they are nonetheless influential—they spearheaded Firefox's adoption, acting as advocates for that browser, and are doing the same for Chrome (though in the latter case, Google's substantial advertising budget is also a big help). In the Internet Explorer 5 days, these were the same people encouraging the abandonment of Netscape Navigator. As good as Internet Explorer 9 is, I don't think it's going to be enough to win them back. There are also lingering questions surrounding the question of what happens next. The new development process was successful, and has built up a lot of momentum, but the company is still, for the moment, keeping quiet about the next move. If there will be no browser version for another two years, then in spite of all IE9's remarkable progress, the game is lost. There's already a good chance that Firefox 4 will leapfrog it in most regards when it is released in the next week or two; its time at the top will be short-lived. The world of browser development is fast-paced. In an ideal world, the platform preview program will continue, aiming towards the release of, say, IE9.5 or IE10 in six to eight months from now, certainly no longer than a year. This would allow Redmond to keep pace with the Mozilla and Google developers, and one might even say that it would herald the welcome start of the third browser war. Certainly, the company doesn't want to let the momentum flag; it knows it's onto a good thing with the previews. But as of right now, all that exists is rumor and conjecture. 
Internet Explorer Vice President Dean Hachamovitch is giving a keynote presentation at next month's MIX conference in Las Vegas, and while this is expected to focus on IE9 for Windows Phone 7 (due later this year), it's hoped that he will also give a look forward at the future of the desktop browser. Internet Explorer 9 is a triumph. Not perfect, but still a first-rate product. Microsoft really has built a better browser here. It's arguably the most modern browser on the market—for a few weeks, at any rate. If you use Internet Explorer, and you're not stuck on Windows XP, you should switch. Even if you don't use Internet Explorer, you should try it out. Internet Explorer 6 and 7 are embarrassments that you should be ashamed to use. Internet Explorer 8 is acceptable, but no more than that. Internet Explorer 9 is the anti-IE6. It is an excellent browser that can be used with confidence and pride.
At Spruce, we're building out our initial libraries and components to power the future of digital identity. Here's the latest from our development efforts:

Verifiable Credential Library

As part of our efforts, we are developing a library to provide functionality around Verifiable Credentials (VC) in Rust. We chose Rust for its speed, predictable performance, and safety. One other consideration for Rust was embedded and IoT devices, which in the future will harness credentials and use them in the performance of discrete tasks. We are happy to report that our VC library passes all tests required by the W3C Verifiable Claims Working Group test suite to be considered a conforming implementation. Once released, we will propose that our library is included in the W3C CCG implementations list with all public results of our conformance testing, and instructions on how to run them locally. At the moment we are implementing JSON-LD support in Rust to fully express semantic data models past the operations required to pass the test suite. We are also working on improving support for LD-Proofs and the use of ZKPs. We are currently awaiting a preliminary security review. After that check passes and with proper contributor guidelines, we will publicly release the repository.

Tezos DID Method

Our work with the Tezos ecosystem requires a Tezos-based DID method to allow Tezos accounts to use verifiable credentials using a trustless model and within the same execution context. As a refresher, a decentralized identifier (DID) relies on DID documents to establish authentication, and operations on DID documents themselves, including creation, resolution, updating, and deactivation, are described by a DID method. We are therefore currently in the early stages of developing a DID method based on Tezos, incorporating TZIPs such as TZIP-16 and eventually producing TZIPs from our work.
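For illustration, a resolved DID document in the W3C DID Core shape, scoped to authentication as described above, might look like the following (the identifier and key values are placeholders, not output of our method):

```python
import json

# A hypothetical Tezos-style DID; the method name and address are illustrative.
did = "did:tz:tz1ExampleAddressPlaceholder"

did_document = {
    "@context": "https://www.w3.org/ns/did/v1",
    "id": did,
    "verificationMethod": [{
        "id": did + "#key-1",
        "type": "Ed25519VerificationKey2018",
        "controller": did,
        "publicKeyBase58": "...",   # placeholder, not a real key
    }],
    # Authentication only, per the scoped-down approach described above.
    "authentication": [did + "#key-1"],
}

print(json.dumps(did_document, indent=2))
```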
Public ledger-based DIDs are rife with privacy concerns, and we are taking the following approaches with our DID method specification:

- Encouraging off-chain interactions where possible by (1) not requiring a public transaction prior to DID resolution and usage, and (2) considering the interplay with privacy-preserving DID methods such as did:peer, which should actually serve the brunt of interactions to prevent unnecessary information exposure.

- Limiting the scope towards providing only authentication via keypairs, thereby ameliorating many concerns by the community around service endpoints. We are considering the incorporation of the KERI protocol to keep things straightforward.

- Working closely with engineers from the Tezos ecosystem to provide implementations in Lorentz and/or LIGO for the DID document management smart contracts. There is a chance we'll have a full spec if we can keep the contracts tight enough, and also minimize gas costs in the process.

With respect to our reusable product components, we are currently in the early design stages for our credential wallet, issuer tool, and ecosystem steward platform. We are completing user journeys and technical requirements based on customer feedback.
stristr and speed

I've got two files, file a around 5MB and file b around 66MB. I need to find out if there are any occurrences of the lines in file a inside file b, and if so write them to file c. This is the way I'm currently handling it:

    ini_set("memory_limit","1000M");
    set_time_limit(0);

    $small_list = file("a.csv");
    $big_list = file_get_contents("b.csv");
    $new_list = "c.csv";
    $fh = fopen($new_list, 'a');

    foreach($small_list as $one_line) {
        if(stristr($big_list, $one_line) != FALSE) {
            fwrite($fh, $one_line);
            echo "record found: " . $one_line . "<br>";
        }
    }

The issue is it's been running (successfully) for over an hour and it's maybe 3,000 lines into the 160,000 in the smaller file. Any ideas?

Build arrays with hashes as indices:

Read in file a.csv line by line and store in a_hash[md5($line)] = array($offset, $length)

Read in file b.csv line by line and store in b_hash[md5($line)] = true

By using the hashes as indices you will automagically not wind up having duplicate entries. Then for every hash that has an index in both a_hash and b_hash, read in the contents of the file (using the offset and length you stored in a_hash) to pull out the actual line text. If you're paranoid about hash collisions then store offset/length for b_hash as well and verify with stristr. This will run a lot faster and use up far, far, FAR less memory.

If you want to reduce the memory requirement further and don't mind checking duplicates then:

Read in file a.csv line by line and store in a_hash[md5($line)] = false

Read in file b.csv line by line, hash the line, and check if it exists in a_hash.
If a_hash[md5($line)] == false, write the line to c.csv and set a_hash[md5($line)] = true.

Some example code for the second suggestion:

    $a_file = fopen('a.csv','r');
    $b_file = fopen('b.csv','r');
    $c_file = fopen('c.csv','w+');

    if(!$a_file || !$b_file || !$c_file) {
        echo "Broken!<br>";
        exit;
    }

    $a_hash = array();

    while(!feof($a_file)) {
        $a_hash[md5(fgets($a_file))] = false;
    }
    fclose($a_file);

    while(!feof($b_file)) {
        $line = fgets($b_file);
        $hash = md5($line);
        if(isset($a_hash[$hash]) && !$a_hash[$hash]) {
            echo 'record found: ' . $line . '<br>';
            fwrite($c_file, $line);
            $a_hash[$hash] = true;
        }
    }
    fclose($b_file);
    fclose($c_file);

This went a bit above my head, do you know a good resource where I could learn how to do this properly?

Added an example for you. Seems to work fine, but I haven't exactly done extensive debugging. Should be enough to let you see what's happening, and it will run in teeny amounts of space compared to your original.

Wow, that managed to do the whole 65MB file in about 45 seconds... Thanks so much, you just saved me a really really late night. Also, automagically is my new favorite word.

Try sorting the files first (especially the large one). Then you only need to check the first few characters of each line in b, and stop (go to the next line in a) when you're past that prefix. Then you can even make an index of where in the file each starting character first appears (a starts on line 0, b starts on line 1337, c on line 13986, and so on).

Try using ob_flush() and flush() in the loop:

    foreach($small_list as $one_line) {
        if(stristr($big_list, $one_line) != FALSE) {
            fwrite($fh, $one_line);
            echo "record found: " . $one_line . "<br>";
        }
        @ob_flush();
        @flush();
        @ob_end_flush();
    }

How will that speed up the search?
Enhanced error handling in key functions

Enhanced error handling in key functions: added specific exception handling to the 'summarize' and 'get_next_action_from_openai' functions for improved reliability and debugging. Introduced granular error handling for FileNotFound, Permission, Image, I/O, Base64 encoding, API request, and other potential exceptions.

@Prureddy This looks a lot better and comprehensively handles a lot of potential exceptions in get_next_action_from_openai with descriptive error messages. I'd like to see what @joshbickett thinks.

@legendkartik45 Yes, I can add that — thanks for your suggestion.

It appears there may be an issue with the order of exceptions. @legendkartik45's suggestion may also help as well:

Bad except clauses order (Exception is an ancestor class of Error) — Pylint [E0701: bad-except-order](https://pylint.readthedocs.io/en/latest/user_guide/messages/error/bad-except-order.html)

Here's a ChatGPT thread about it: https://chat.openai.com/share/53fe5500-7fda-4935-ade9-5dd1fc94b2d6

Re-opening

@michaelhhogue @joshbickett are you an official contributor to this project? Could you comment on where the name Agent-1 came from? This appears to have blatantly ripped off the work of researchers working hard over a year. Any open source contributors should consider that this firm raised millions and is scamming open source devs into stealing work for them. They never responded to our claims they stole this work, even down to the name Agent-1, which is incredibly shameful if true, and our attorneys would love to hear from an official contributor. We will be publishing our solution open source as well, and here is Atlas-1, which we've been training for over a year and published last month: https://youtu.be/IQuBA7MvUas

What is Agent-1 and where did the name come from? It appears these guys blatantly ripped off our work and are now scamming open source devs into copying it for them.

@Prureddy, let us know when you have an update on the questions.
Would be great to merge this PR after a few corrections!

@Prureddy going to close this PR for the meantime. If you're able to implement some of the required updates for the error handling that'd be great. When your updates are ready feel free to reopen the PR. Thanks!

@joshbickett @alm0ra @legendkartik45 Thank you for pointing out the concerns. I've optimized the code by consolidating exception handling into a single approach using a dictionary for better efficiency and readability. Redundant code segments have been refactored to avoid repetition, ensuring a streamlined structure. Regarding the order of exceptions, I've organized them by specificity, which aligns with best practices for error handling in Python. As for printing and returning, the code previously printed the error and returned an error message. I've adjusted the error handling function to return the error message directly, simplifying the structure and making it more cohesive. This change avoids unnecessary duplication between printing and returning. The updates have significantly improved the code's clarity, maintainability, and adherence to best practices. Thank you for your feedback, which greatly contributed to these enhancements!
import os
import re
import time
import datetime
import subprocess

currentTime = time.time()

# CONVERT UNIX DATE INTO NORMAL HUMAN READABLE DATE
humanDate = datetime.datetime.fromtimestamp(currentTime).strftime('%Y_%m_%d_%H_%M_%S')

# DIRECTORY WE ARE GOING TO STORE THE DATABASE BACKUPS IN
directory = 'C:\\Users\\trasb\\Desktop\\DB_BACKUP_FOLDER\\'

# NAME OF THE SOURCE FILE TO USE FOR LOOPING THROUGH
myfile = 'db_list.txt'

# PATH TO THE MYSQL CLIENT BINARIES
mysql_bin = 'C:\\wamp\\bin\\mysql\\mysql5.7.10\\bin\\'

# SQL QUERY TO GET THE INITIAL LIST TO MAKE THE BACKUP FROM
getDB = os.popen(mysql_bin + 'mysql -u root "information_schema" -e "SELECT SCHEMA_NAME AS \'\' FROM SCHEMATA WHERE SCHEMA_NAME NOT REGEXP \'information|sys|backup|mysql|test|performance_schema\'"').read()

# REMOVE BLANK LINES FROM LIST
getDB = '\n'.join(l for l in getDB.strip().splitlines() if l.strip())

# CHECK IF THE FOLDER EXISTS; IF IT DOES NOT THEN CREATE THE FOLDER
if not os.path.exists(directory):
    os.makedirs(directory)

# WRITE OUR LIST TO THE DIRECTORY CREATED; THE with BLOCK CLOSES THE FILE,
# OTHERWISE THE DATA MAY NEVER BE FLUSHED AND YOU WON'T GET ANY DATA BACK
with open(directory + myfile, 'w') as result:
    result.write(getDB)

# THIS PART BEGINS THE BACKUP PROCESS OF ALL THE DATABASES
with open(directory + myfile, 'r') as db_list:
    for line in db_list:  # FILE OBJECTS ARE ITERABLE; xreadlines() IS PYTHON 2 ONLY
        db_name = line.strip()
        if not db_name:
            continue
        output = directory + humanDate + '_' + db_name + '.sql'
        dump_cmd = mysql_bin + 'mysqldump -u root ' + db_name + ' > ' + output
        zip_cmd = 'gzip ' + output
        # subprocess.call WAITS FOR EACH COMMAND TO FINISH (os.popen DOES NOT)
        subprocess.call(dump_cmd, shell=True)
        subprocess.call(zip_cmd, shell=True)

# GET LIST OF ALL FILES IN THE DIRECTORY AND REMOVE POSSIBLE HIDDEN FILES
list_files = [x for x in os.listdir(directory) if x[0] != '.']

# NOW LOOP THROUGH THE FILES AND REMOVE EMPTY OR LEFTOVER UNCOMPRESSED ONES
for each_file in list_files:
    file_path = os.path.join(directory, each_file)
    # CHECK SIZE AND DELETE IF 0
    if os.path.getsize(file_path) == 0:
        os.remove(file_path)
    else:
        name, ext = os.path.splitext(file_path)
        # gzip REPLACES name.sql WITH name.sql.gz, SO ANY .sql LEFT OVER FAILED TO COMPRESS
        if ext == '.sql':
            os.remove(file_path)
IBM Personal Communications provides an emulator interface to communicate with IBM Mainframe/AS400/VT sessions. This is used for reading and updating host data and interfacing the host with other applications. Performing manual tasks on the emulator screen becomes repetitive as transaction volume increases, and over the long term this causes issues:

- Redundant and hence error prone
- Time consuming and thus expensive

Consider a business scenario where bank employees, on a regular basis, build reports on their customers' financial information by querying their credit limit, balance due, etc. This report building operation involves the following set of activities:

a. Execute macros on the emulator screen to perform certain tasks.
b. Query the database to read customers' credit and balance amounts.
c. Generate the final report by exporting query results to a text or CSV file.

Performing manual operations to generate such reports is inefficient and prone to errors. This is where automation, in the form of APIs, can help.

Exploring IBM PCOMM Automation API

In this post we will look at a sample application that uses PCOMM Automation APIs in a Microsoft Excel sheet. This application can start multiple PCOMM sessions or connect to existing ones. On a selected session it can execute PCOMM macros (.mac) and extract information from the emulator screen to generate a user report. This application is divided into two sections:

A. Session Manager:
a. Use 'Connect Session' dropdown list to start a new emulator session or connect to an existing one.
b. Use 'Stop Session' to gracefully terminate any active session which was started using this application.

B. Generate Report:
a. Use 'Select Session' to connect to a session started using the above steps.
b. Select a macro (.mac) from the 'Table Definition' list which will log into the host session and move to the table definition page.
c. Select a macro (.mac) from the 'Query Window' list to move to the table query page.
d.
Click on 'Generate report' to collect customer information from the emulator page.

A. Starting PCOMM Session:

The PCOMM session can be launched by a couple of methods:
a. Using PCSAPI, 'pcsStartSession'
b. Using HACL API, 'autECLConnMgr.StartConnection'

Once the session is started, use the PCOMM HACL API to connect to it.

B. Stopping PCOMM Session:

The session started above can be stopped using either of the methods below:
a. Using PCSAPI, 'pcsStopSession'
b. Using HACL API, 'autECLConnMgr.StopConnection'

C. PCOMM Session Management:

In this application, the user can select any session to generate the report. To manage these sessions, we should associate each session with its PCOMM connection objects. To do so, use Dictionary objects to store the session name as a key and the session objects as its value. In an array, store the newly created autECLPS and autECLOIA objects. As shown below, add the session name (A-Z, a-z) as a key to the dictionary and the array as its value.

Once the user selects a session to generate a report, get the PCOMM session objects from the dictionary 'SessionObjsDict' by providing 'sessionName' as a key (described in the section below).

A. In this section, the user generates a report by running macros on the host and performs a query on the host table using an Excel User Form. Once the user selects a session under 'Select Session', the application retrieves the session objects from the 'SessionObjsDict' dictionary.

B. To execute any macro (.mac) file in the host presentation space, use the autECLPS API 'StartMacro'.

C. The User Form 'Generate Report' displays the column list of a table which has been read from a table inside the emulator screen. In this User Form, the user can select the columns they want to view in the final report and can add filters on the columns 'Balance Due' (BALDUE) or 'Credit Due' (CDTDUE). By using PCOMM HACL APIs, the data is read from the emulator presentation space and is stored in the Excel sheet.
For more information on the PCOMM Automation API, please refer to the PCOMM documentation. Below is a link to a video that shows a demo of this application. Please email zServices@hcl.com or HCL-HI-LabServices@hcl.com to get more information on Services offerings.

Technical Architect, Lab Services, IBM HACP & HATS
I have read the beginning. I am, however, pretty familiar with several such trading systems, and the anticipated results. There are definitely some spots I see in the chart where this would likely have picked a losing entry point, but the underlying system is still valid. All other things being equal, it will make money, at least for a time. The issue is more the fact that the bot is A) predictable, and B) prevalent in a small market.

In a big market, simple EMA bots aren't so easy to game; there's much more going on. In a small market like bitcoin, it's not terribly hard to place a trade that causes these bots to enter, and exploit the known result (not necessarily to the bots' disadvantage, but often so).

Goomboo addresses B in one of the early posts:

Since the Bitcoin market has so little liquidity, I suggest that new traders try a variant of my system. If a large group of people were to trade the 10/21 crossover, the entries / exits would become crowded and price would move violently around those points, hurting us all! I suggest that people try things such as a 9/14 crossover or a 5/8 crossover.

A bunch of bots all predictably doing the same thing with not much liquidity is begging someone to exploit them. So often we're trading in a very tight range, so it's fairly easy to place a trade that will cause a crossover (if you have enough coins to bet on it!)

Goomboo also points out that this is a trend trading system, and as such may have up to 70% losing trades. It only takes a couple of bad losses to wipe out many gains in these systems (as he also points out, the potential drawdown in such a system when used with leverage can be risky).

The basic problem with this type of simple trend system, IMO, is that you wind up trading counter-trend, and in doing so you set yourself up to be on the wrong side of a trade quite often. This is especially true in low liquidity, low volume markets.
I use simple technical analysis - S/R zones and Fibonacci sequences, but not as Goomboo suggests many traders do - "predictive analysis" - As he said, prediction of a market is not truly possible. Identification of potential entries, and entering on confirmation through price action when it actually occurs is more successful in my experience. as above, I'm having a look at the bot's code to see how hard it might be to add a few indicators, especially volume, and maybe order depth to determine the real spread. Mostly I'm looking for a couple indicators for when it shouldn't take a trade. And, looking through the Mt.Gox API, the simplest thing to do might be to take 2weiX's suggestion to make the EMA completely configurable, and to use the VWAP rather than the Open price. I don't know how one might go about backtesting it to see what its profitability and W/L ratio might be.
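For anyone who wants to play with the crossover logic itself before diving into the bot's code, it fits in a few lines. This is only an illustrative Python sketch: the function names are mine, the EMA is seeded with the first price (a common convention), and a real backtest would also need fees, slippage, and the volume/order-depth filters discussed above.

```python
def ema(prices, period):
    """Standard exponential moving average with smoothing factor 2/(period+1)."""
    k = 2.0 / (period + 1)
    out = [prices[0]]                      # seed with the first price
    for p in prices[1:]:
        out.append(p * k + out[-1] * (1 - k))
    return out

def crossover_signals(prices, fast=10, slow=21):
    """Return (index, 'buy'|'sell') whenever the fast EMA crosses the slow one."""
    f, s = ema(prices, fast), ema(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if f[i - 1] <= s[i - 1] and f[i] > s[i]:
            signals.append((i, 'buy'))     # fast EMA crossed above slow EMA
        elif f[i - 1] >= s[i - 1] and f[i] < s[i]:
            signals.append((i, 'sell'))    # fast EMA crossed below slow EMA
    return signals
```

Swapping the close price for VWAP, or making `fast`/`slow` configurable per 2weiX's suggestion, is then just a matter of what you feed into `prices` and the keyword arguments.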
This article is about the relation between digital image size in pixels and photographic print size. The first calculator in this article recommends a photographic print size for given digital image dimensions in pixels. Let's define the problem: we have a digital image with a known size in pixels, for example 3264 x 2448 pixels, and the set of standard photo print sizes used by photographic printing services. The photo print size name defines the linear dimensions of the photo print. For example, the 4''x6'' or 4R photo print size means that the photo print has dimensions of 102x152 millimeters. We need to choose the maximum photo print size which allows us to print the digital image without quality loss. I've created the handbook Standard photographic print sizes for defining standard photo print sizes, which can be edited to provide missing sizes. The only special knowledge we need to solve the problem concerns quality, and it can easily be found on the web. Photographic quality (at an arm's length viewing distance) requires that print resolution be not less than 300 DPI (dots per inch) or, equivalently, 300 PPI (pixels per inch). Still-acceptable quality requires that print resolution be not less than 150 DPI. Everything else is simple math. Look at the picture below.

Each photo print size is converted to pixels, assuming that 1 inch holds 300 (150) pixels. The obtained size in pixels (taking into account aspect ratio, more on this below) is compared to the digital image size. If the print size in pixels is greater than the digital image size (see picture, print size on the right), then it does not fit, since we would have to enlarge the image and get resolution worse than 300 DPI. If the print size in pixels is less than the digital image size (see picture, print size on the left), then it does fit, since we have to shrink the image and get resolution better than 300 DPI. The calculator chooses the print size with the maximum linear dimension which does still fit.
(Smaller size won't be a problem since we can print with resolution up to 1200 DPI.)

The second calculator in this article finds the resulting pixels-per-inch value for the printed image and how many pixels were cropped during scaling. Let's define the problem: we have a photo print with known dimensions in centimeters, printed from a digital image with known dimensions in pixels. Usually, the aspect ratio of the printed image is not the same as the aspect ratio of the digital image. The image is scaled during printing, but its aspect ratio remains constant. This leads to unwanted effects. Look at the picture below.

We have two ways to scale:
first - scale with cropping out part of the image
second - scale without cropping, but with empty spaces on the photo print.

Since the second looks ugly, I used the first. Thus, we have to find the resulting image resolution in DPI, and how many pixels were cropped due to the difference in aspect ratio. The first is easy - the dimension in pixels (width or height) which is not cropped is divided by the corresponding photo print dimension in inches. The second is the difference between the pixels used along the cropped dimension and the original digital image pixels along that dimension.
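Both calculations are plain arithmetic, so they translate directly into code. Here is a hedged Python sketch: the print-size table is a made-up three-entry subset of the handbook mentioned above, the function names are mine, and print dimensions are assumed to be passed in inches in the same orientation-agnostic way the calculators use them.

```python
# A made-up subset of the standard print-size handbook referenced above:
# (name, width_inches, height_inches)
PRINT_SIZES = [
    ("4''x6'' (4R)", 4, 6),
    ("5''x7'' (5R)", 5, 7),
    ("8''x10''", 8, 10),
]

def best_print_size(img_w_px, img_h_px, dpi=300):
    """First calculator: largest listed print that still fits at `dpi`."""
    img_long, img_short = max(img_w_px, img_h_px), min(img_w_px, img_h_px)
    best = None
    for name, w_in, h_in in PRINT_SIZES:
        # Pixels the print needs on its long and short sides at this DPI
        need_long, need_short = max(w_in, h_in) * dpi, min(w_in, h_in) * dpi
        if need_long <= img_long and need_short <= img_short:
            if best is None or need_long > best[1]:
                best = (name, need_long)   # keep the largest linear dimension
    return best[0] if best else None

def print_stats(img_w_px, img_h_px, print_w_in, print_h_in):
    """Second calculator: (effective DPI, pixels cropped) for scale-and-crop
    printing.  The axis that fits exactly sets the DPI; the other is cropped."""
    dpi = min(img_w_px / print_w_in, img_h_px / print_h_in)
    cropped = (img_w_px - print_w_in * dpi) + (img_h_px - print_h_in * dpi)
    return dpi, round(cropped)
```

For the article's 3264 x 2448 example, `best_print_size` picks the 8''x10'' entry at 300 DPI, and printing the same image on a 6''x4'' print gives 3264/6 = 544 DPI with 2448 − 4·544 = 272 pixel rows cropped.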
I accidentally removed the "type" flag on a partition while using gparted and the partition is no longer visible to gparted or to the Dolphin file manager. The data and the partition are still there and I can manually mount it, but it shows up as empty space otherwise. While I am not that worried about it as I can manually mount it, I am hoping to upgrade to the newest openSUSE soon and so it would be nice if partition editors could see the partition so it doesn't get written over. I don't know much about managing disk flags through the command line, so any help is appreciated. I am using openSUSE 12.1 if it makes a difference.

Please show an fdisk -l of the disk.

This is the disk in question:

Disk /dev/sda: 750.2 GB, 750156374016 bytes
255 heads, 63 sectors/track, 91201 cylinders, total 1465149168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000a509f

Device Boot Start End Blocks Id System
/dev/sda1 4096 409604095 204800000 83 Linux
/dev/sda2 409604096 819204095 204800000 0 Empty

As you see, the type of sda2 is set to 0 (it is not a flag and it is not removed; it is an 8-bit field). You did not tell what the type was before you set it to 0, but when the partition contains a Linux file system (like ext4), then it should be 83 like on sda1. I guess that you can set it the same way you set it to 0 by using gparted, and I do not quite understand why you did not do that. I never used gparted, thus I can only tell you how to do this using fdisk. To become root and start fdisk:

su -c 'fdisk /dev/sda2'

fdisk is interactive.
You can see what single-letter commands exist with the command m, but in this case we need the command t. You are now asked for the partition number and then for the type (83). Then check if it is correct now with p. When OK, write the changed partition table to the disk with w.

BTW, next time when you post here copied/pasted computer text like you did above with the fdisk -l listing, do so between CODE tags, else it is badly readable. You get the CODE tags by clicking the # button in the toolbar of the post editor.

Sorry it took so long to get back to you on this. The reason that I didn't fix it in gparted was because the partition was not detected in gparted; it only showed sda1 and the rest of the disk was free unpartitioned space. In any case what you said worked like a charm (though I had to put /dev/sda and not /dev/sda2). While I knew that the type number I needed was 83, I had no idea fdisk was interactive and I wasn't able to find anything that described what you said to do when I searched before posting. Thanks a lot.

Documentation for fdisk? Start with man fdisk, as always when looking for Unix/Linux documentation.
Discussion in 'General Discussion' started by derkoi, Jan 16, 2014. Well, there's nothing to discuss: it's just great So, can I develop a game and put it on my vita for fun? or do I need a registered dev with Sony first? To be specific, do I have to pay just to put a game on my personal vita, not to distribute? Which is more important: lefty control-stick, or righty-control stick? You have to be a registered PS Vita developer. I'm a PS Mobile developer and apparently that doesn't count so I'm applying to be a PS Vita dev. Just to be clear, I've seen some questions floating around so I'll address them here before they start rolling in. You need to be a registered Vita developer and have a dev kit. For those who are registered, Vita deployment build is available on Devnet. A while ago we announced both Vita and PSM deployment will be available, these are two separate things and what we have just released is Vita only. 1.) This is only for licensed Playstation Devs, no PSM-Stuff, I'm afraid. 2) Even if you ARE a licensed dev, you need a PS Vita Devkit, if I'm not mistaken. 3) In order to become a licensed dev you have to fill this out: https://www.companyregistration.pla...CFModules/NewLicensee/snl_master_template.cfm You'll notice that you have to be a registered company in order to become a licensed dev. they should almost just keep this stuff quiet and only let the licensed devs know. it just confuses the kids that want to release their brand new awesome mmorpg on every major platform That's not fair! Super Tentacles Online is going to be the best game EVAR!!!one!!eleventy!1!! Hahaha. Man, sometimes the naivety these noobs possess is inspiring. (Mine was called Region Warriors). Yes, for licensed devs only. PSM for Vita is coming. Yes, that is correct.
PSM for Vita, coming a little later, will allow self-publishing and deployment to retail Vitas in an identical way to how people make apps for iOS and Android. I'm a little confused on the point of this release vs the PSM release. Is it like Unity free vs Unity Pro? Are there any features missing? PSM isn't related to Unity at all; you could manage to port it over if you're up for the challenge. This allows you to work with Unity natively on the Vita just by making a Vita build and using your dev kit. Yes, I know that. What I meant was there's this version of Unity that exports to Vita which can be downloaded if you're a registered Vita developer; there's another version coming soon for PSM devs that allows you to develop for Vita without a dev kit. I'm wondering if there are any other differences, as I'm in the process of applying as a Vita dev; however, I'm already registered as a PSM dev, so I wondered if it were worth the effort. I am a registered PSM Dev and if you are, you SHOULD know what the difference between PSM and native development ONLY for Vita is. PSM contains all Playstation certified devices: the PS Vita and all of these - http://playstation.com/psm/certified.html Your game/app has to run on ALL of these, which means the maximum you can get out of the Vita is the lowest specs of any listed devices. On top of that, working with PSM you don't have access to stuff like both cameras, the rear touchpad and so on. That being said: I think it is worth the effort, since you don't have to bother with so many devices and can concentrate on making your game just for one platform. I registered as a PSM dev as I heard about Unity for PSM coming, and also Sony was waiving the fee, so I joined. I've never used any of the SDK provided as I've been waiting for Unity, for which I applied for Beta. I wasn't aware my games would have to run on all the certified devices, but now that you mention it, I guess it makes sense.
I will continue my application as I plan to apply for PS4 development too. Thanks for the info. I think of it this way: a) Is Vita a games console? Do you want to make retail games that sell in the shops, or are downloadable? Do you want to make use of all the features of the Vita? Do you want to be an approved Sony Vita developer? If yes to all of those, then you want Unity for Vita. b) Is Vita a hand-held device like a tablet or iPod Touch? Do you want to self-publish games on Vita like you do on iOS and Android and BB10 and WP8 etc? Do you want to make games that use a restricted subset of the Vita hardware? If yes, then you want PSM for Vita. Damn - the registration page for all of Asia is in...Japanese. Is this free if you are a registered PS Vita dev? If it is allowed, it would be a great opportunity for someone with a dev license and kit to partner up with other people who don't have one and let them make games for the PS Vita and get them published through their partnership with that person, even just as a test before they decide to invest in a PS Vita license and dev kit themselves. If the dev license allows it, please check all terms and conditions of all licenses before trying lol. But I think it could be allowed, you know, because most publishers who publish to games consoles have independent developers work for them who are not part of their company, yet they are able to use their assets and even entire games without those individual devs owning their own licenses and equipment; could be something along those lines. It's an interesting idea anyhow, even if it might be unworkable. We are already working with. Forgotten Memories is heading to the VITA on Q1 2014!
However, as a publisher you are the main responsible for whatever happens with such games. Good timing. Recently initiated communications to obtain SCE licensing, with Vita being one of them. I didn't realize Unity wasn't supported (assuming my cursory review of it is correct). The Unity Vita deployment is ready for use. You need to obtain a license from Sony if you want to use it. Both licenses Vita Developer/publisher and Unity Vita are managed by Sony. I would expect some sort of quality control over who they would publish and have a contract that would protect themselves, its not just letting random people use them to get on vita more like show us your work and what game you want to put through, we test it and if its good enough and works out they publish it using there dev license. Thanks for the info @tatoforever Do i need to register as a publisher too if I want to self publish?
OPCFW_CODE
Convert time PST/PDT to timestamp using Snowflake

Background: I have the below table of data where I'm trying to concat the order_date and transaction_time columns to create a final timestamp column.

Problem: There is a PST/PDT string in the transaction_time column. I am trying to convert my final timestamp column (VARCHAR) into a UTC timestamp.

My attempted solution that didn't work:

select transaction_date
     , to_date(transaction_date, 'mon dd, yyyy') as order_date
     , transaction_time
     , concat(transaction_date, ' ', transaction_time) as timestamp
--   , to_timestamp_tz(concat(transaction_date, ' ', transaction_time), 'mon dd, yyyy hh:mm:ss am pdt') as final_timestamp
from raw_db.schema_name.table_name

Please help?? Thank you!!

So PST and PDT are not valid IANA timezones, which is what is expected by the Timestamp Formats, so you cannot use the inbuilt functions to handle that, but you can work around it.

SELECT time
    ,try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PDT') as pdt_time
    ,try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PST') as pst_time
    ,dateadd('hour', 7, pdt_time) as pdt_as_utc_time
    ,dateadd('hour', 8, pst_time) as pst_as_utc_time
    ,coalesce(pdt_as_utc_time, pst_as_utc_time) as utc_time1
    ,iff(substr(time, -3) = 'PDT', pdt_as_utc_time, pst_as_utc_time) as utc_time2
FROM VALUES
    ('2020-10-28 7:25:44 AM PDT'),
    -- insert more rows here...
    ('2020-11-06 6:35:18 PM PST')
    v(time);

shows two ways to get a unified UTC time from the two.
which could be shortened to:

SELECT time
    ,coalesce(dateadd('hour', 7, try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PDT')),
              dateadd('hour', 8, try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PST'))) as utc_time1
    ,iff(substr(time, -3) = 'PDT',
         dateadd('hour', 7, try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PDT')),
         dateadd('hour', 8, try_to_timestamp(time, 'YYYY-MM-DD HH12:MI:SS AM PST'))) as utc_time2
FROM VALUES
    ('2020-10-28 7:25:44 AM PDT'),
    ('2020-11-06 6:35:18 PM PST')
    v(time);

which gives:

TIME                        UTC_TIME1            UTC_TIME2
2020-10-28 7:25:44 AM PDT   2020-10-28 14:25:44  2020-10-28 14:25:44
2020-11-06 6:35:18 PM PST   2020-11-07 02:35:18  2020-11-07 02:35:18

As per my comment, if you have more timezones you need to support, let's say New Zealand's two timezones ;-) then a CASE would be more suitable:

SELECT time
    ,substr(time, -4) as tz_str -- longer, since NZxT is longer
    ,CASE
        WHEN tz_str = ' PDT' THEN dateadd('hour', 7, try_to_timestamp_ntz(time, 'YYYY-MM-DD HH12:MI:SS AM PDT'))
        WHEN tz_str = ' PST' THEN dateadd('hour', 8, try_to_timestamp_ntz(time, 'YYYY-MM-DD HH12:MI:SS AM PST'))
        WHEN tz_str = 'NZDT' THEN dateadd('hour', -13, try_to_timestamp_ntz(time, 'YYYY-MM-DD HH12:MI:SS AM NZDT'))
        WHEN tz_str = 'NZST' THEN dateadd('hour', -12, try_to_timestamp_ntz(time, 'YYYY-MM-DD HH12:MI:SS AM NZST'))
     END as utc_time
FROM VALUES
    ('2020-10-28 7:25:44 AM PDT'),
    ('2020-11-06 6:35:18 PM PST'),
    ('2021-04-23 2:45:44 PM NZST'),
    ('2021-01-23 2:45:44 PM NZDT')
    v(time);

OR you could use a regex to match up to the AM/PM part of the date time, like in this SO Question/Answer, and have just one try_to_timestamp_ntz, and just use the CASE to correct based on the suffix.

Wow, this is a god-level solution. Will need to take time to understand how this all works. Thank you for your help!!

I understand it... wow, so brilliant. Cheers!

The IFF version sort of expands out better if you had 4-5 timezones you needed to support, and then use a CASE statement.
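Outside Snowflake, the same fixed-offset trick is easy to reproduce. Here is a small Python sketch (the function name is mine; the offsets are hard-coded exactly as in the dateadd() calls above, which is safe because the PST/PDT suffix already tells you which offset applied):

```python
from datetime import datetime, timedelta

# Hours behind UTC, mirroring the dateadd('hour', ...) calls in the SQL above.
OFFSETS = {"PDT": 7, "PST": 8}

def to_utc(ts: str) -> datetime:
    """Parse e.g. '2020-10-28 7:25:44 AM PDT' and return the UTC datetime."""
    body, tz = ts.rsplit(" ", 1)          # split off the trailing PST/PDT suffix
    naive = datetime.strptime(body, "%Y-%m-%d %I:%M:%S %p")
    return naive + timedelta(hours=OFFSETS[tz])
```

Feeding it the two sample rows reproduces the UTC_TIME1/UTC_TIME2 values in the result table above.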
Table 1 - No CLA Manager Designee Assigned - Step 3 in PRD

Summary: Any user associated with a company will be taken to this page after they switch to the member console, select a project from the left navigation and EasyCLA from the top navigation. Here is the link to the wireframe for the corporate console for EasyCLA, Step 3 in Table 1 of the PRD: https://confluence.linuxfoundation.org/display/PROD/Corporate+Console+for+EasyCLA+V2 All the backend operations will be encapsulated in an EasyCLA API call. See the PRD for more details.

Acceptance Criteria
[ ] Design Changes Required ?--> Not Applicable (N/A)
[ ] Unit Testing Complete
[ ] Unit Testing complete for backend API
[ ] Unit Testing complete for UI
[ ] Functional Testing Complete
[ ] Functional Testing complete for backend API
[ ] Functional Testing complete for UI
[ ] Integration Testing Complete
[ ] Integration Testing complete for backend API and UI
[ ] Integration Testing complete end to end dev deployment
[ ] Documentation Updated
[ ] Any Open Sev. 1, Sev 2 bug(s)

@wanyaland Please provide the updates on this ticket. Also please provide the ETA if the ticket is still in progress.

Hello @vinod-kadam. The ETA for this is tomorrow; it shall be done.

@wanyaland any update? The ETA for this API was yesterday. cc @dealako

@wanyaland please work with @dealako to get those URLs. As discussed in the standup, please complete the API work by using the below Portal URLs:

Dev Env: https://lfx.dev.platform.linuxfoundation.org
Staging Env: https://lfx.staging.platform.linuxfoundation.org
PROD Env: https://lfx.platform.linuxfoundation.org

& provide the API to the LFx team for integration. Once you get Project EasyCLA specific URLs from <EMAIL_ADDRESS> <EMAIL_ADDRESS> then you can update the email section. Please contact Ahmed for Project EasyCLA specific URLs.

@vinod-kadam Updated path parameters and shall merge once approved. Happy paths tested and can be integrated with LFx. Testing in progress.
Will keep updating issues.

Wrong email is sent when a CLA Manager request is sent.

@pranab-bajpai Could you please clarify the queries below:

1. Will users with the below roles have access to the screen in Table 1, Step 1, when the user has access to the organization and the CLA is not signed for the project?
- cla-manager
- cla-signatory
- cla-manager-designee
- company-admin
- company-owner
- company-alternate-owner
- contributor
Please let me know if I missed any roles.

2. In Table 1, Step 3, if any of the roles above identifies a CLA manager, users with what roles can be identified as cla-manager?

3. Say user1 is cla-manager for Intel & OpenColorIO, and user2 is company-admin and identified user1 as cla-manager for Intel & OpenCue. Since user1 already has the cla-manager role, should only the scope for the project be added without assigning the cla-manager-designee role?

4. Now say as a contributor I come to Step 3 and identify some other contributor as cla-manager; then the other contributor is given the cla-manager-designee role. I am not sure if this happens in real time, but should this be allowed?

@wanyaland

Issue1: Observed that this is working when the logged-in user has access only to the project. Actually any user who is associated with the organization can view all the projects and also try to be CLA Manager or identify a CLA Manager. Below is the scenario I tried:

Role - user, scope - Community
Role - contributor, scope - Brigade|Bakbone (Project|Organization)
Failed with 403

Then added scope for the project for which I was trying to identify the CLA Manager:
scope - CloudEvents|Bakbone (Project|Organization)
Passed with 200

I think the user who is trying to identify the CLA Manager need not have access to the project.

Issue2: Observed that the wrong email is still sent to the identified CLA Manager.

Issue3: When there is no company admin for the organization, the API response is still 204 and there is no validation to check if a company admin exists.

Query: As of now 3 company admins can be added to the organization from the LFX portal.
So when we contact the admin, is the email sent to all 3 admins?

@nirupamav

- Issue 1: For scope, a user needs to have organization scope, and that works as long as the organization scope is set for the given ID.
- Issue 2, Issue 4: @pranab-bajpai can confirm the email content. I used the templates from the PRD for this ticket.
- Updated Issue 3 and shall raise a PR.

Moved to next sprint.

Tested and observed the issues below:

1. Able to contact the admin / assign a cla-manager-designee from the API for an already signed project.
2. The link to create an LFID still points to https://identity.linuxfoundation.org/ instead of the My Profile link. The issue is tracked in Bug #1304.
3. The email of the CLA Manager Designee is not displayed in the email sent to create an LFID.
4. We are contacting the admin to sign the CLA or to assign a cla-manager-designee, but the email content says that a contributor would like to contribute, which is not what we are doing in this flow. The email content needs to be changed.
5. The fullname field accepts more than 60 characters. Per the comments in https://jira.linuxfoundation.org/browse/LFX-1914, fields should accept 60 characters.

Verified that the issues below are fixed:

5. The fullname field does not accept more than 60 characters, and 422 is returned.
7. Not able to send a request when only spaces are provided for the fullname field; 422 is returned, which is as expected.
8. 422 with an appropriate message is returned when an invalid email is provided.

Observed the issues below; issues 1, 2, 3, 4, 6, 9, 10, and 11 could not be verified due to Issue 13 below:

12. A space is not allowed between names in the fullname field. The user should be able to enter a space between first name and last name.
13. Observed that "project already signed" is returned even when there is no signature. Not sure if this is related to delete CLA group, but just letting you know the scenario I tried: for the projects in question, I had deleted the CLA group and was again trying to sign the CLA with a new CLA group created for the project.

Verified that the issue is fixed.

https://api-gw.dev.platform.linuxfoundation.org/cla-service/v4/company/00117000018BhtKAAS/project/a091700000AGJ5EAAX/cla-manager-designee

Observed a few issues; logged the tickets below to track them and also listed a few issues in this ticket itself: #1664, #1665, #1666

1. Able to send a request when the full name is not provided.
2. Able to send a request when only spaces are provided for the fullname field.
3. 404 is returned instead of 400 with an appropriate message when an invalid email is provided.
4. Able to send a request when the fullname field is not provided in the payload.

Blocked due to #1761, as a 504 response is always returned.

Issues are still reproducible:

1. Able to send a request when the full name is not provided.
2. Able to send a request when only spaces are provided for the fullname field.
3. 202 is returned instead of 400 with an appropriate message when an invalid email is provided.
4. Able to send a request when the fullname field is not provided in the payload.

Verified that all the issues are fixed.
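The fullname/email checks exercised in the test comments above (422/400 for missing, spaces-only, over-60-character, or malformed values) can be sketched as follows. The field names (`fullName`, `email`), the regex, and the function itself are illustrative assumptions for this thread, not the actual EasyCLA validation code:

```python
import re

MAX_FULLNAME_LEN = 60  # per the comments in LFX-1914
# Deliberately simple email shape check, just for illustration
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_designee_request(payload):
    """Return a list of validation errors for a cla-manager-designee payload.

    An empty list means the request would be accepted; otherwise the API
    should respond with 400/422 carrying these messages.
    """
    errors = []
    fullname = payload.get("fullName", "")
    email = payload.get("email", "")

    # Missing or spaces-only full name must be rejected (issues 1 and 2)
    if not fullname.strip():
        errors.append("fullName is required and cannot be only spaces")
    # Over-60-character names must be rejected; note "Jane Doe" with an
    # internal space is valid, matching issue 12
    elif len(fullname) > MAX_FULLNAME_LEN:
        errors.append(f"fullName must be at most {MAX_FULLNAME_LEN} characters")

    # Malformed email must be rejected with 400/422, not 404/202
    if not EMAIL_RE.match(email):
        errors.append("email is not a valid email address")

    return errors

# Cases from the test notes above:
validate_designee_request({"fullName": "   ", "email": "a@b.co"})    # spaces-only -> error
validate_designee_request({"fullName": "Jane Doe", "email": "bad"})  # invalid email -> error
validate_designee_request({"fullName": "Jane Doe", "email": "jane@example.org"})  # -> []
```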
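The scope behavior observed earlier in the thread (403 without a matching Project|Organization scope, 200 once it is added) can be sketched as a simple membership check. The function and the flat string scope format are assumptions mirroring the examples in the comments, not the real access-control implementation:

```python
def has_project_org_scope(user_scopes, project, organization):
    """Return True if any scope grants access to (project, organization).

    Scopes are assumed to be strings like "CloudEvents|Bakbone"
    (Project|Organization), as written in the scenario above.
    """
    target = f"{project}|{organization}"
    return target in user_scopes

# Scenario from the comment above:
scopes = ["Brigade|Bakbone"]          # contributor scope only on Brigade
has_project_org_scope(scopes, "CloudEvents", "Bakbone")  # -> False (API returned 403)

scopes.append("CloudEvents|Bakbone")  # scope added for the target project
has_project_org_scope(scopes, "CloudEvents", "Bakbone")  # -> True (API returned 200)
```

The open question in the thread is whether this project-level check should apply at all, since the tester argues any user associated with the organization should be able to identify a CLA Manager.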