Compilation errors when targeting C++23

What version of gRPC and what language are you using?
gRPC version 1.62.1; C++, targeting C++23.

What operating system (Linux, Windows, ...) and version?
Fedora 40

What runtime / compiler are you using (e.g. python version or version of gcc)?
The clang that comes standard on Fedora 40 (18.1.6):

    >>> clang --version
    clang version 18.1.6 (Fedora 18.1.6-3.fc40)
    Target: x86_64-redhat-linux-gnu
    Thread model: posix
    InstalledDir: /usr/bin
    Configuration file: /etc/clang/x86_64-redhat-linux-gnu-clang.cfg

What did you do?
I imported gRPC at version 1.62.1 into a Bazel project; this was the newest version available on the Bazel Registry at the time of submitting this ticket. Compilation worked just fine until I targeted C++23 instead of C++20 in my toolchain (e.g., -std=c++23).

What did you expect to see?
A working compilation.

What did you see instead?
A compilation error:

    ERROR: <redacted>/external/grpc~/BUILD:1287:16: Compiling src/core/lib/surface/server.cc failed: (Exit 1): clang failed: error executing CppCompile command (from target @@grpc~//:grpc_base)
    <snip/>
    In file included from external/grpc~/src/core/lib/surface/server.cc:19:
    In file included from external/grpc~/src/core/lib/surface/server.h:29:
    In file included from /usr/bin/../lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/memory:78:
    /usr/bin/../lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/bits/unique_ptr.h:91:16: error: invalid application of 'sizeof' to an incomplete type 'grpc_core::Server::RequestMatcherInterface'
       91 |       static_assert(sizeof(_Tp)>0,
          |                     ^~~~~~~~~~~
    /usr/bin/../lib/gcc/x86_64-redhat-linux/14/../../../../include/c++/14/bits/unique_ptr.h:398:4: note: in instantiation of member function 'std::default_delete<grpc_core::Server::RequestMatcherInterface>::operator()' requested here
      398 |         get_deleter()(std::move(__ptr));
          |         ^
    external/grpc~/src/core/lib/surface/server.cc:93:3: note: in instantiation of member function 'std::unique_ptr<grpc_core::Server::RequestMatcherInterface>::~unique_ptr' requested here
       93 |   RegisteredMethod(
          |   ^
    external/grpc~/src/core/lib/surface/server.h:214:9: note: forward declaration of 'grpc_core::Server::RequestMatcherInterface'
      214 |   class RequestMatcherInterface;
          |         ^
    1 error generated.

Anything else we should know about your project / environment?
This issue was discovered when using Clang; GCC works just fine. A smaller reproducible example of this issue can be found at this godbolt link, showing that compilation is fine with GCC irrespective of the C++ version targeted, and fine with clang targeting C++20: https://godbolt.org/z/Pc9rGfadn

It appears that the code fails to compile because it is not compliant with C++23's std::unique_ptr requirements: https://github.com/llvm/llvm-project/issues/74963#issuecomment-1850834221

In particular, the inner struct Server::RegisteredMethod encapsulates a unique_ptr to the inner class Server::RequestMatcherInterface, the latter of which is defined later in the translation unit: the unique_ptr is used at line 109, whereas the actual inner class is defined later, at line 125.

Seeing this as well. Observe the same with the forward declaration of RegisteredMetricCallback in https://github.com/grpc/grpc/blob/v1.65.0/src/core/telemetry/metrics.h:
- Line 270: forward declaration: https://github.com/grpc/grpc/blob/v1.65.0/src/core/telemetry/metrics.h#L270
- Line 466: std::make_unique is called on an incomplete type: https://github.com/grpc/grpc/blob/v1.65.0/src/core/telemetry/metrics.h#L466
- Line 530: actual definition: https://github.com/grpc/grpc/blob/v1.65.0/src/core/telemetry/metrics.h#L530

Seeing this as well. This should be fixed in https://github.com/grpc/grpc/pull/35957
GITHUB_ARCHIVE
pw - Grep GPG-encrypted YAML password safe.

pw is a Python tool to search in a GPG-encrypted password database.

    Usage: pw [OPTIONS] [USER@][KEY]

      Search for USER and KEY in GPG-encrypted password database.

    Options:
      --copy / --no-copy      copy password to clipboard
      -E, --echo / --no-echo  print password to console
      --open / --no-open      open link in browser
      --strict / --no-strict  fail unless precisely a single result has been found
      --database-path PATH    path to password database
      --edit                  launch editor to edit password database
      -v, --version           print version information and exit
      --help                  Show this message and exit.

To install pw, simply run:

    $ pip install pw

Password Database, File Format, and Editing

By default, the password database is located at ~/.passwords.yaml.asc and is automatically decrypted using GnuPG if the file extension is .asc or .gpg. It uses a straightforward YAML format, as in the following example, which is hopefully self-explanatory:

    Mail:
      Google:
        - U: email@example.com
          P: "*****"
          L: https://mail.google.com/
        - U: firstname.lastname@example.org
          P: "*****"
          N: "John's account"
    SSH:
      My Private Server:
        U: root
        P: "*****"
        N: "With great power comes great responsibility."
    (An Old Entry That Is Ignored):
      U: foo
      P: bar
    Mobile:
      PIN: 12345  # shortcut notation (only provide password)

To edit the database, use pw --edit. This requires that the environment variable PW_GPG_RECIPIENT is set to the key with which the database should be encrypted; it invokes the editor specified in the PW_EDITOR environment variable (make sure to use blocking mode, e.g., subl --wait). Warning: this feature temporarily stores the password database in plain text in the file system, so data leaks may arise. To some extent, this can be mitigated by using, e.g., tmpfs and by providing the editor with options that ensure no backup copies, swap files, etc. are created.
Files:
- pw-0.6-py2.py3-none-any.whl (8.0 kB, Wheel, Python 2.7, uploaded Jun 15, 2014)
- pw-0.6.tar.gz (6.7 kB, Source, uploaded Jun 15, 2014)
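The USER@KEY search semantics described above can be sketched in plain Python. This is a hypothetical illustration, not pw's actual implementation: the dict stands in for the result of decrypting and YAML-loading the database, and the function names are invented.

```python
# Hypothetical sketch of pw-style lookup over an already-parsed database.
# The dict mirrors the YAML example; keys: U=user, P=password, L=link, N=notes.

DB = {
    "Mail": {
        "Google": [
            {"U": "email@example.com", "P": "*****", "L": "https://mail.google.com/"},
            {"U": "firstname.lastname@example.org", "P": "*****", "N": "John's account"},
        ],
    },
    "SSH": {
        "My Private Server": {"U": "root", "P": "*****"},
    },
    "Mobile": {"PIN": "12345"},  # shortcut notation: bare value is the password
}

def _entries(node, path):
    """Flatten the nested structure into (key_path, entry_dict) pairs."""
    if isinstance(node, list):
        for item in node:
            yield from _entries(item, path)
    elif isinstance(node, dict) and ("U" in node or "P" in node):
        yield path, node
    elif isinstance(node, dict):
        for key, child in node.items():
            if key.startswith("("):       # parenthesised entries are ignored
                continue
            if isinstance(child, (dict, list)):
                yield from _entries(child, path + [key])
            else:                         # shortcut: the value is the password
                yield path + [key], {"P": str(child)}

def search(db, query):
    """Match 'user@key' or just 'key' case-insensitively against entries."""
    user, _, key = query.rpartition("@")
    results = []
    for path, entry in _entries(db, []):
        if key.lower() in " ".join(path).lower() and \
           user.lower() in entry.get("U", "").lower():
            results.append(("/".join(path), entry))
    return results

print(search(DB, "google"))       # both Google accounts
print(search(DB, "root@server"))  # the SSH entry
```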
OPCFW_CODE
force Ghost to use defined url in config

I have another server for Ghost and proxy /blog to that server (nginx). As a result, when I load example.com/blog, Ghost tries to load assets from example.com/assets etc., which means I get a page without styling. In config.js I set url to ghost.example.com. I want to force Ghost to always use that as the root domain.

Hi @hadifarnoud, I'm afraid this doesn't make much sense! The url you set in config.js is the URL that Ghost will use to generate any external URLs. If you want to run Ghost in a subdirectory, the subdirectory needs to be in the url in config.js; if you want to run Ghost on a subdomain, then you need to proxy the subdomain to Ghost. Ghost is neither a webserver nor a DNS management tool, and nor should it be. Your nginx configuration needs to reflect the URL you want to use, and Ghost only needs to be told what that is.

When I add a directory to url, I get a 'cannot GET /' error. It doesn't matter whether Ghost is on the same server or not, so I can't set url to example.com/blog. I did proxy Ghost in nginx, using the server IP:GHOSTPORT.
The homepage loads fine, but every link is broken, as well as the css/js/images.

@hadifarnoud If you want to set the URL as ghost.example.com, you need to do the following:

1. Create an A DNS record pointing from ghost.example.com (subdomain) to your server IP address.
2. In your Nginx config, specify the server_name directive, as follows:

    server_name ghost.example.com www.ghost.example.com;

and proxy the requests to the port where Ghost is running:

    location / {
        proxy_pass http://<IP_ADDRESS>:2368;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }

When you've done that, change the url inside config.js to http://ghost.example.com and you're done :)

I gave up trying to proxy example.com/blog to another server on a subdomain. The ghost.example.com subdomain was working fine, @ayrad, but my issue is having it proxied at example.com/blog. It does load the page, but no css or images load, because it tries to load them from example.com/assets, which would work if it were example.com/blog/assets (nginx would proxy it). I cannot set url to example.com/blog in config.js.

@hadifarnoud Now I understand your issue. I see that the Ghost asset URLs start with /, which takes the root folder as the starting point instead of the subdirectory /blog/:

    <link rel="stylesheet" type="text/css" href="/assets/css/screen.css?v=06bb7797a5" />

@ErisDS may confirm whether it's possible to install Ghost in a subdirectory (I personally think that for now it's not).

@ayrad this is the exact tutorial I followed. Ghost is still loading everything from the root directory; no css, js, or images work. I manually added the /blog folder to assets, but that does not solve the permalinks issue. I tried adding the directory to 'url', but as I said, it breaks Ghost and gives me a 'cannot GET /' error.

@hadifarnoud Well, it's not necessary to edit the asset url. I thought it wasn't possible to install Ghost in a subdirectory, but actually you can...
I just installed Ghost (v. 0.5.10) in a subdirectory, which you can check here, and it works just fine:

1. Download Ghost: wget https://ghost.org/zip/ghost-0.5.10.zip
2. Extract the files to the desired path (a subfolder of the main website root directory): unzip -uo ghost-0.5.10.zip -d /var/www/server.elladodelgeek.com/blog
3. Edit config.js:

    ...
    production: {
        url: 'http://server.elladodelgeek.com/blog',
    ...

4. Install dependencies: npm install --production
5. Run Ghost: NODE_ENV="production" pm2 start index.js --name ghost_subdir (I use pm2 here, but you can run it directly for testing purposes with npm start --production)
6. Edit the Nginx virtualhost config by adding this to the server block:

    location ^~ /blog {
        proxy_pass http://<IP_ADDRESS>:2368;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }

7. Reload Nginx and enjoy your setup :)

It has been possible to install Ghost in a subdirectory for a very long time. If you install it in a subdirectory like /blog/ and then set your config.url to yoururl.com/blog as well, Ghost will serve the blog from yoururl.com/blog. The 'cannot GET /' error is then expected, because Ghost is no longer serving anything at /; it will only serve the blog if you request /blog. You would need to have some other service proxied to /.

I have done the exact same thing, and even removed node_modules before doing it. Here is my vhost config: https://gist.github.com/hadifarnoud/a76d93be62d76d62dc56 and my config.js (db info redacted): https://gist.github.com/hadifarnoud/845a60607763504c33e1 My ghost url: http://camva.ir/blog/

@hadifarnoud Seems like you already solved it. What was it? :)

Update: I had a / at the end of the proxy_pass URI. Removed that and it works. Thanks for your support, guys.

@hadifarnoud Good news :)
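The fix mentioned at the end (removing the trailing slash from the proxy_pass URI) matters because nginx treats the two forms differently: when proxy_pass is given a URI part (even just a trailing /), nginx replaces the matched location prefix with that URI before forwarding; without a URI part, the original request URI, including /blog, is passed through unchanged, which is what a Ghost configured with url ending in /blog expects. A sketch of the two behaviours (addresses and paths illustrative):

```nginx
# Works: no URI part on proxy_pass, so /blog/assets/... reaches Ghost
# unchanged, matching its /blog-prefixed routes.
location ^~ /blog {
    proxy_pass http://127.0.0.1:2368;
}

# Broken: the trailing slash makes nginx strip the matched /blog prefix
# (roughly /blog/assets/... becomes /assets/...), so Ghost's routes
# never match.
location ^~ /blog {
    proxy_pass http://127.0.0.1:2368/;
}
```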
GITHUB_ARCHIVE
Results 41 to 50 of 53

I am a proud owner of an ATI 9600 PRO. I installed the drivers on SUSE 9.3 and I haven't got problems at all with it, but I know that Linux ...

11-11-2005 #41 (Join Date: May 2005, Figueira da Foz, Portugal)
ups i forgot where do i sign

11-11-2005 #42
Originally Posted by Kurtzweil
Go here to sign the petition: http://www.petitiononline.com/atipet/

01-19-2006 #43
runnin el cheapo ATI 9200 with 128M. I have to say I am very happy. I changed from the nVidia because I could not get it working with Mandrake nearly a year ago; put the ATI in and within minutes had it running. Upgraded to Mandriva 2006, and no go. Instead of getting upset with ATI (and nVidia in the past) ... I changed distros instead ... aah, the power of choice. Mepis runs the ATI out of the box ... glxgears at 1300 fps, and I cannot even get anything more than 700 on XP ... but when the 3d worked on linux, it was always faster than the microsoft platform anyway. Must admit, been close to tears, wanted to torture small furry mammals, and thought about how much the glass would cost to replace after the monitor goes through it ... heck ... nearly found religion once. I always keep to the later/older cards; I have found the drivers easier to get installed and running. The AMD64 with an ATI card is a little trickier - Mepis did not like that - but well, I am used to it now. Maybe time to give this Suse a shot ... hmm. Anyway, I have signed. 23331

01-19-2006 #44
Originally Posted by a12ctic: im stuck with a radeon 9600XT. or an nvidia tnt2. the tnt2 runs faster than the radeon. /weed
"Time has more than one meaning, and is more than one dimension" - /.unknown
--Registered Linux user #396583--

03-28-2006 #45
Great idea.
I've had problems with my Radeon X800 on both windows and linux.
10" Sony Vaio SRX99P 850MHz P3-M 256MB RAM 20GB HD : ArchLinux
14" Dell Inspiron 1420N 2GHz Core2Duo 2GB RAM 160GB HD : Xubuntu

06-25-2006 #46
curious about my card
I'm curious. Ubuntu detected my card (ATI 7000 Radeon VE) and it seems to work fine. I assume I don't have all the features provided when installed on a Windows system. I have seen references to being able to compile your own drivers. Do many people try this?

06-25-2006 #47
Originally Posted by dapperone
The general complaint is that ATI drivers don't deliver the same level of performance in Linux as in Windows. NVIDIA drivers do perform better in Linux than ATI's, and are often easier to install. From my personal experience, that is true.
"To express yourself in freedom, you must die to everything of yesterday. From the 'old', you derive security; from the 'new', you gain the flow."

06-26-2006 #48
good to know
Good to know. Thanks. I read up about ATI cards. It says Radeon, but not the 7000 series. I don't think I'll risk it. Why fix something that's not broken?

09-22-2006 #49
Has anyone heard any type of rumor about the ATI Radeon 200m in the laptops? That's what I have now and would love to get the driver working.

09-22-2006 #50
OPCFW_CODE
Meeting today's security challenges

As a company grows, the addition of new employees, systems, and applications to support business initiatives can increase the number of attack vectors used by malware to infiltrate a company's network. To maintain control of, and visibility into, the company's overall security posture, IT departments must put controls in place to limit the impact caused by unauthorized use of rogue applications, malicious code and Internet resources - such as Web 2.0 applications - that further expose the organization to attack.

Today's business security solutions need to go beyond basic virus detection and prevention to provide a solution capable of implementing controls and managing policy for a large number of endpoints and servers, all with minimum effort. Ideally, antivirus management solutions should automatically install protection on unmanaged workstations and servers that appear on the network, remotely identify and remove unauthorized applications, and remotely configure system settings to ensure the continuing integrity of the network.

Securing your business with Bitdefender

Companies can effectively protect client workstations and critical servers from attack by using Bitdefender's ability to detect and prevent known and zero-day threats, ensure compliance with corporate security policies, and manage them effectively with fewer IT resources.
Bitdefender's Centralized Management allows companies to:
- Implement a unified management platform for remote installation, configuration and reporting of all Bitdefender Client, Server and Gateway products deployed throughout the network
- Provide IT administrators with network-wide visibility into all malware-related incidents
- Proactively audit hardware and software assets within the network
- Remotely configure and manage client and server system settings
- Report on malware-related incidents to identify infection rates and trends
- Measure the effectiveness of an organization's antimalware security program

Bitdefender's management server

Bitdefender's Management Server provides a powerful and centralized management console that is included in all Endpoint Protection, Critical Server and Gateway solutions. It combines visibility into a company's security posture with remote endpoint configuration and policy enforcement via a centralized interface.

Actionable information and remote management

Bitdefender's Management Server is a core element in a comprehensive suite of solutions providing end-to-end network protection from the gateway to the desktop. Bitdefender's proactive, multi-platform solutions detect and block viruses, spyware, adware and Trojan threats that can compromise your network integrity. Bitdefender's Management Server consolidates threat information and system status from across all managed Windows and UNIX-based workstations and servers running within the organization's network. Endpoint system events and critical issues are quickly identified and easily resolved with one-click problem resolution, minimizing administration and response times to critical incidents.
Simplified endpoint and server management
- Consolidates configuration and threat information from all managed and unmanaged endpoints and servers via a dashboard interface
- Allows remote configuration, auditing, installation, and removal of applications and system settings of any managed Windows endpoint or server in the network
- Integrates with Active Directory to leverage the organization's existing Windows domain structure and group policies
- Scalable master-slave architecture to manage gateways, servers and endpoints located in different physical locations
- Reduced resource cost and overhead when mass-updating system configurations and software on multiple endpoints and servers
- Simplifies network management with wizard-driven Network Tasks

Enhanced network security
- Configurable security policies with pre-defined templates to aid policy enforcement
- Network detection of unmanaged endpoints with remote Client Security installation
- Consolidated reporting provides visibility into network-wide security threats
- Centralized alerting notifies administrators of critical threats in near real time
- Automates network audit data collection for database-driven inventory and change reporting

Defence in depth

Bitdefender Business Solutions is a comprehensive suite of solutions providing end-to-end network protection from the gateway to the desktop. Bitdefender's proactive, multi-platform products detect and stop viruses, spyware, adware and Trojan threats that can compromise your network integrity.

Managed security policies based on Windows domain structure

Integration with Active Directory leverages the organization's existing Windows domain structure and group policies. Security policies can be applied directly by selecting users or user groups from the Directory, as an alternative to managing workstation groups within the Management Server.
Bitdefender's Management Server also detects unmanaged workstations, making them easy to identify for automated remote deployment, or to exclude from compliance with your defined security policy. Bitdefender Centralized Management manages both Windows and UNIX-based gateways, endpoints and critical servers.

Integrated network resource management capability

The Bitdefender Management Server uses endpoint and server auditing and management scripts compliant with Microsoft's WMI scripting language. With over thirty predefined templates, administrators can automate mass remote management: remotely kill applications and processes, install and uninstall software, restart or shut down workstations, enable/disable autoruns, or block USB removable media access. Alternatively, the audit data collected from all endpoints and servers within the network - including hardware specifications and installed software applications - can identify non-compliant systems that exceed your company's defined security standards.

Automatic or scheduled update distribution

Bitdefender Management Server enables intelligent distribution of new virus definitions and restricted content database updates from a central location. Updates can either be applied automatically once an hour as they become available, or scheduled to be applied during off-peak times to minimise performance degradation and impact to the network. Centralized policy, reporting and alert synchronization is controlled throughout the organization via the Master Management Server. It ensures centralized and intelligent distribution of policy updates within the local network or to remotely managed networks running a Slave Management Server.
Bitdefender's architecture allows centralized management of local and remote deployments. The Bitdefender Centralized Management solution contains three main components: the Management Server, which provides the centralized management backend for all Bitdefender solutions; the Management Console, which provides the user interface; and the Update Server, for downloading and distributing product and virus definition updates.

Bitdefender Management Server, Management Console and Update Server

Processor: Intel® Pentium compatible, 1GHz (2GHz recommended)
Memory: 512MB (2GB recommended)
Hard disk space: 1.5GB (2.5GB recommended), 3GB for upgrades

Supported operating systems:
- Windows 2000 Professional SP4
- Windows 2000 Server SP4
- Windows XP SP2
- Windows Server 2003 SP2
- Windows Vista
- Windows Server 2008
- Windows Server 2008 R2
- Windows Small Business Server (SBS) 2008
- Windows 7

Database:
- Microsoft SQL Server 2005, 2008 or Microsoft SQL Express Edition (included)

Supported web browsers for the Management Console:
- Internet Explorer 7 (or later)
- Internet Explorer 6 (Windows 2000)
OPCFW_CODE
[All: this is a long post on an important topic, so I’ve made it a wiki post, i.e. directly editable by others. Feel free to make inline additions, but please try to retain the general integrity. I suggest adding your initials to any additions. Most likely we should create extra topics on each major question described below.]

We have had a long-running need to better solve cross-referencing in the openEHR EHR for managed lists such as the Problem List, Allergies list and so on. We’ve had many discussions in the past, including this recent one on Linking in openEHR. I have previously created UML for some initial ideas (‘view Entries’) if you want to look at something, but this is far from complete and could even be wrong.

There are various needs that simple LINKs and use of DV_EHR_URI don’t solve particularly nicely, much of that analysed by @ian.mcnicoll and other clinical modellers (@siljelb, @heather.leslie, @varntzen, @vanessap etc, feel free to chime in) in trying to build models for the Problem List and the like. I’m going to try to articulate a few at a time, in the hope we can expose the needs and therefore the solution here. (In the below, you can mentally trade other reference lists like Medications List, Allergies, Family History, etc. for Problem List, with the same general semantics.)

So the first thing to think about is the idea of one or more Problem Lists (at least one ‘master’ Problem List with the main Dxs), for which I propose the following semantic requirements statement (to be debated).

Managed Lists:
- are curated, i.e. manually managed (i.e. not query results)
- have content consisting of ‘focal’ and ‘related’ data; ‘focal’ meaning the thematically central data, i.e. problems, allergies, medications etc; ‘related’ meaning anything else
- are not the primary structure in which the thematically focal data (Dxs and the like) are originally recorded
- have their own documentary structure, i.e.
something like a Section/heading structure
- the focal content is citations of previously recorded diagnoses and/or other ‘problems’
- may have citations of other related previous content, e.g. important observations, past procedures etc
- ?could have internal de novo content, i.e. not just own Sections, but Entries (probably Evaluations?) created within the List to represent notes? summaries? thoughts about care planning?
- are managed over time by the usual means, with each modification creating a new version.

One key thing we have to determine is: what can be cited? Is it:
- A: only Entries within previous Compositions, i.e. individual clinical statements?
- B: Sections containing multiple content items within previous Compositions?
  - runs the danger of pointing to too much content if you don’t check properly
- C: sub-Entry level items, e.g. Clusters and Elements, e.g. a single lab analyte inside a lab result OBSERVATION?
  - runs the danger of mixing up e.g. a target value (e.g. target BP) with an actual value, or anything else taken out of context
- D: any structure anywhere in a previous COMPOSITION (let’s limit it to LOCATABLEs, which is nearly everything)?
  - seems dangerous in general

I am personally strongly in favour of a type A kind of citation - having a single Entry as the target. It always seems attractive to want to refer to anything, but I think that is of limited utility, and carries dangers. It is of course technically possible to model different kinds of citation object that can point to different kinds of target structure.

Technical Requirements - Representation

To these we need to add some technical requirements, e.g.: does a retrieve of the Problem List:
- get all its cited contents in one go? I.e. what the clinician considers to be the content?
OR
- get only the heading structure and the citation objects (some kind of direct references), with further dereferencing needed to resolve all the citations in order to build the List for display and update?

It seems fairly obvious that the first option is what we want - the whole point of the managed List, after all, is that you can easily get hold of it as a single logical object.

So here’s the main technical problem. To achieve the result that the full List, including all cited contents, is returned through the API on request requires a solution to either persisting or computing the full contents of what the citations point to. The options include (with some obvious dangers listed):

Persisted Copies: citations are resolved at create time, i.e. they cause copying into the persistent List structure; i.e. the EVALUATION recorded 3 years ago containing my diabetes type 2 Dx is just copied into the Problem List when it is added in the curation process.
- the obvious danger here is that copies of Entries are likely to cause duplicates in querying - we are breaking the golden rule of IT here, after all
- however, making some sort of safe, encapsulated copy is undoubtedly possible

Generated Copies: citation references are resolved at retrieve time on the server, when a retrieve request is made, such that the full Problem List is instantiated prior to sending through the API.
- this requires a model that includes data items that are not persisted, but generated post retrieve - more complicated
- the query service has to do a different sort of retrieval, so that these duplicate content structures are not created prior to executing the query - again, more complexity

Persisted Serialisations: citations are resolved at create time, but don’t create structure copies (e.g. a 2nd EVALUATION etc); instead they are instantiated in serialised form, e.g.
XML or JSON, which just needs to be rendered to the screen (this kind of approach is documented in the Confluence page on Report representation).
- this approach will prevent duplication in querying and any other process that aggregates persisted EHR data
- but it loses the native openEHR structures that might be useful on the client side

Some other (new) native technical representation: some new converted form of the current native structures, e.g. a flattened readonly Entry or similar (see below).

As per that Confluence page, I think there are very good arguments for using the serialised approach for report-like objects, e.g. discharge summaries, referrals, etc, because they are indeed a kind of recorded statement at a point in time that is treated as a medico-legal document. Whether that same logic holds for managed lists is a question.

There is another potential requirement as well, which is that the client may want not just the cited Entries in the Problem List, but:
- their context info, i.e. from their containing COMPOSITIONs, indicating ‘when and where did you get your Dx of type 2 diabetes’, AND/OR
- the version information, i.e. from the containing ORIGINAL_VERSION object, indicating ‘when did this information become visible in the EHR’.

So we might not just want ‘straight’ Entries, but ‘wrapped Entries’ or ‘flattened Entries’ containing that other data, for each cited Entry. Note - this need is not specific to managed lists, but could be desirable within query results in general (today we solve it by stating the bits and pieces we want in the SELECT part of a query).

Technical Requirements - Update

When a managed list is being updated, i.e. ‘curated’ as we often call it, you can’t modify the cited contents (well, you might be able to do that if you see errors etc, but it’s not a routine part of List update).
Therefore, if the ‘resolved’ (client-side) representation includes native objects representing the citation targets, those latter objects have to be considered readonly. If the representation is in a serialised (or some other) form, it might be easier to do this. Other than this, updating a managed list should allow any reasonable change - removal of references, addition of new ones, etc.

Technical Requirements - Interoperability

There are some other technical questions to think about as well. For example, what happens when copying the Problem List(s) and Medication List to another EHR system, e.g. GP → hospital? This can be via an EHR Extract or some other means. How would the receiving (openEHR) system persist the data? That depends on how it is represented, according to the options above - as native openEHR structures, or in a serialised form. Would such copying require that all the cited Entries and their containing Compositions be copied over as well? For native openEHR → native openEHR, a full copy should be made (like a Git repo sync operation with branches being pushed to a target repo), but for other environments we might want to make fewer assumptions. We might therefore consider that there is a form of managed List that has references that no longer have targets in the system where it is persisted, due to being a copy.

Towards a Solution

My current thinking over the years on this issue is toward the following kind of solution:
- within an openEHR EHR system, we represent managed lists such that citations contain direct references, which are resolved (each time) on retrieval in the server, so that the structure that goes through the API is the ‘full’ structure.
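As a rough illustration of that proposed direction - direct references persisted, resolved server-side at retrieval - here is a hypothetical Python sketch. None of these class or field names are openEHR RM names; it only shows the resolve-on-read shape, including what happens when a reference has no target in the local system (e.g. after a copy to another EHR system):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical stand-ins for stored clinical statements; not openEHR RM classes.

@dataclass(frozen=True)                 # resolved targets are read-only
class Entry:
    uid: str
    text: str

@dataclass
class Citation:
    target_uid: str                     # persisted direct reference
    resolved: Optional[Entry] = None    # filled in only at retrieval time

@dataclass
class ManagedList:
    name: str
    citations: list = field(default_factory=list)

def retrieve(stored: ManagedList, repo: dict) -> ManagedList:
    """Server-side retrieval: resolve each citation so the API returns the
    'full' structure; unresolvable targets (e.g. after copying the list to
    another system) are left as bare references."""
    return ManagedList(
        stored.name,
        [Citation(c.target_uid, repo.get(c.target_uid)) for c in stored.citations],
    )

repo = {"e1": Entry("e1", "Dx: type 2 diabetes"), "e2": Entry("e2", "Dx: asthma")}
problem_list = ManagedList("Problem List", [Citation("e1"), Citation("e3")])

full = retrieve(problem_list, repo)
print([c.resolved.text if c.resolved else c.target_uid for c in full.citations])
# -> ['Dx: type 2 diabetes', 'e3']
```

Because only references are persisted, a query over stored data never sees duplicated Entry content, which is the main hazard the "Persisted Copies" option raises.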
OPCFW_CODE
[WIP] #3474

Resolves #3474. When we have an error on updating an organization, we do not show the error details.

Description: Added descriptive error messages to updating organizations.
Type of change: Error handling
How Has This Been Tested? wip
Screenshots

Hey @lokisk1155 -- Thanks for this -- it should reduce user frustration. One admin thing from me: in future, could you put a short description as well as the issue number in the title -- it just helps the reviewers when they are skimming the list for things that they should be reviewing (we have different strengths).

no problem!

Looks like some tests were assuming a true/false result from the OrganizationUpdateService.update call that need to be updated. But also there are some tests that indicate the valid? method might have been doing more than the model-level validations somehow.

Yes! This is very interesting lol! Before I revise tests on the validations, I want to get to the bottom of valid?

I don't see the spec for the new specific error messages.

@jadekstewart3 I don't think there should be specs added to check the validations themselves, as they exist on the organization's model. A little redundant..?

@cielf @awwaiid Hi guys, I am going to give it a final review on my end later tonight, but it is ready for review. I am changing a lot of tests. @dorner here is a video of my issue with render :edit. My only thought is to render a custom path?
@dorner Recreation Steps:
1. Log in as <EMAIL_ADDRESS> (password: password!)
2. Click "name not provided" at top right (opens drop down)
3. Click my_organization
4. Scroll down and click edit
5. Make name empty and click save; /edit will disappear from the url

What I have tried:
1. Redirecting to the URL, which works correctly but loses the user's changes
2. Attempted many different syntaxes: render action: :edit, status: :unprocessable_entity
3. Refactoring the edit route to have its own namespace
3b. Then attempted this with a custom template
4. Disabled Javascript: no interference on the form
5. Disabled turbo on the form
5a. Added a specific URL and action to the form

@lokisk1155 sorry for all the back and forth. It's totally fine that the /edit disappears from the page. In "non-Turbo" world, the user could refresh and get a prompt "do you really want to resubmit?". In Turbo world that prompt doesn't work, so basically all we're missing here is that if the user refreshes they're going to get an error page. However, if they click "back" from the error page, they'll get back what they previously entered, so they should be fine. Scouring the internet didn't turn up any better solution to this unfortunately. We could turn off Turbo entirely for this page, which probably would be a better experience to be honest. Might be worth posting on the Hotwire forums to see if anyone has any other ideas here. @dorner sounds good, for this PR I will clean the code up for my changes to be approved. I will talk to @cielf about opening a new issue for the url being lost; I don't think turning turbo off for this would be ideal, as users are probably not refreshing that often, and if they do, clicking back will fix the routing error. @dorner Things are looking good on my end here! @lokisk1155, @dorner If I understand the problem with the /edit disappearing from the page correctly, it sounds like it would be not much gain for an undetermined amount of work.
Given that we have on the order of 130 open issues (including those that are in the backlog for getting written up to the point that a developer can work on them), I would vote for slotting that problem into the "Won't Do" pile, because the ratio of cost in volunteer hours to benefit to the banks is high.
GITHUB_ARCHIVE
Should Stack Exchange log out all users from the network and let them sign in again in the wake of the Heartbleed vulnerability? I see that over the past few days the hype around the "Heartbleed Security Vulnerability" has increased. Several site administrators have taken action to prevent their servers from being compromised. Stack Exchange has also taken a few steps to protect its users' data. Some of the methods are mentioned in Is Stack Exchange safe from Heartbleed? Isn't it a good idea to log out all users from the network and let them sign in again so that all the keys are updated? SoundCloud is doing this, and I guess Stack Overflow and other Stack Exchange sites should take this into consideration. It can help. This might be relevant? "Several site administrators have taken action to prevent their servers from being compromised." I really hope tens of thousands have. Is it a good idea to log users out and let them log in again? Is it a good move? @djechlin Ehhh... as long as a handful of them have. How many sites can really be out there, anyway? ;-) @AndrewBarber You're assuming one admin per site. Apparently those few admins simply administrate all however many billion sites out there. It can't be that much work, after all. This action might give users a sense of security that steps are being taken to protect their data. @TalhaMasood A sense, perhaps. But I think the reality is more important. @Servy Actually, I was assuming that handful of admins was managing Facebook, Twitter, and Stack Overflow. Are you saying there are other sites out there?? @Servy if you want to think about it further, also don't assume one site per terminator - many sites are served behind shared terminators, even here. @AndrewBarber And what is that reality? Are we really safe? Is there a possibility that one fine morning I wake up to see that my account at Stack Exchange has been compromised?
I just noticed my account has been compromised as early as 2009, and the hackers have been spouting nothing but nonsense in my name. I am shocked and demand urgent action. @Pëkka Right away. I'll suspend your account in the interim! I have the name of a good shrink, @Pëkka I might be naive as to what the impact is, but what's wrong with leaving it up to the individual users? If their account gets compromised because they failed to take the necessary steps, is that really SE's problem? @psubsee2003 Excellent :) That's what I wanted to hear @psubsee2003 Well, odds are the vast majority of users aren't aware that there is a risk; many might be willing to act if they knew there was a problem requiring an action. @Servy this means SO should educate users about this next time they log in. People need to know that there is a problem and SO won't let anyone steal its users' info. @TalhaMasood - You are aware that none of the Q&A sites hold passwords, right? @TalhaMasood Meh, easier to just silently fix the problem, if possible/needed, rather than getting people worked up over something that has already been resolved. It's probably more work to educate them to the point of being able to solve the problem than it is to just solve it for them. @Oded I log in to SO using my Google Account. I think they do save the security token against that. @AndrewBarber well, there's about 150k serverfault users, so there's that. @servy "The greatest object in educating is to give a right habit" -Maria Mitchell @Oded Wait, you mean that gmail password renewal request email I got from <EMAIL_ADDRESS> was fake??
STACK_EXCHANGE
Why Should I Wipe Encrypted Drives? We are going to talk to you about the need to wipe drives that are encrypted at the end of life. It's a question that we get all the time: if I have drives that have software- or hardware-level encryption, is it necessary for me to actually do a wipe before they leave my organization? Well, the answer is a clear yes. And we're going to go through some of the reasons why that is. Some of these are going to be more likely than others, but all of them are actual risks. If you have any questions about this, please feel free to reach out to us at firstname.lastname@example.org. Why would you want to incorporate a wipe into your end-of-life process? First off, the data may not actually be encrypted. Unless you're certain that 100% of your data is encrypted 100% of the time - if you've got a mix of devices, some of them encrypted, some of them mobile devices, laptops, desktops, servers - then that is a risk. Second, encryption can sometimes be shared across devices, in particular when you're dealing with RAID arrays. So if you've got a RAID that's encrypted and you have a drive that's taken out of it, it is possible that that encryption key is stored elsewhere and could then be used to decrypt the drive at a later point. Third would be backdoor access to the drive, especially if you're dealing with encryption software as opposed to hardware-level encryption. It's possible that these drives have some type of backdoor, whether it's designed into the software itself or designed in by law enforcement; that's always a risk. Fourth is going to be brute force attacks. Encryption is always expanding and becoming more difficult to crack, but computers are also getting more powerful. And brute force attacks are always possible in the future, in particular where computing power has gotten greater.
And if you've got a drive that has left your organization, then four years from now someone tries to brute force that encryption key, it's possible that they could crack it. Fifth would be the DCO / HPA / wear leveling areas. Sometimes these devices will not have these pieces encrypted. DCOs and HPAs are hidden partitions; wear leveling areas are used when SSDs are trying to prolong their lifespan. So these areas could pose a risk if someone did a laboratory-type attack on these drives and this information was not encrypted or was exposed in some way. The next point would be that the audit report is far more secure. One of the key things that we do here at WhiteCanyon is we're giving you this audit report that details everything about the process itself: when the wipe started, when it stopped, the unique identifiers like the MAC address and drive serial numbers, all the other configuration specifics, and who did it. This audit report is your proof that you did what you said you were going to do. And if you ever have to justify what you've done, whether that's in a court setting or any type of legal setting, or to show that you're in compliance with some regulatory requirement, that audit report is your golden ticket. And then the last point would be that this is seatbelts and airbags. Encryption is incredibly important and very powerful if it's used correctly. WipeDrive makes sure that no one could ever forensically recover any of the bits from that drive and try to crack them or try to go around the encryption in any way. So they mutually reinforce each other.
OPCFW_CODE
The process of configuring SMTP journaling is done using both the Commvault software and the Exchange server. Steps in the Commvault Software Steps on the Exchange Server Create a remote domain. Create a mail contact. Create a Send connector. Create journal rules. Supported Exchange Servers Exchange 2007 or later (on-premises) Office 365 with Exchange Connecting a Remote PowerShell Session to Office 365 with Exchange For some tasks related to Office 365 with Exchange, you must use Windows PowerShell to create a remote PowerShell session to Office 365 with Exchange. For more information about how to create the session, see the Microsoft TechNet article "Connect to Exchange Online PowerShell", https://technet.microsoft.com/library/jj984289(v=exchg.160).aspx. Important: Before you perform any tasks that use the remote PowerShell session, you must run the Enable-OrganizationCustomization cmdlet. You only need to run this cmdlet once. If you run it again later, you will receive an error. For more information, see the Microsoft TechNet article "Enable-OrganizationCustomization", https://technet.microsoft.com/en-us/library/jj200665(v=exchg.160).aspx. The ContentStore Mailbox can capture BCC and distribution list information if the sending mail server has envelope journaling enabled. This information is not captured if envelope journaling is not enabled. Any type of mail server is supported, as long as it provides the envelope format. Consult the documentation for your mail server to determine how to enable journaling for your mail server. The ContentStore Mailbox will capture all other information if envelope journaling is not enabled. Load Balancing and Fault Tolerance The ContentStore Mailbox supports the following methods for load balancing: Deploying a single send connector with multiple smart hosts Hardware load balancers Mail exchange records (MX records) The method that you choose depends on your environment. For more information, consult Microsoft documentation.
For example, see the following Microsoft TechNet articles: "Load balancing", https://technet.microsoft.com/en-us/library/jj898588(v=exchg.150).aspx "Understanding Load Balancing in Exchange 2010", https://technet.microsoft.com/en-us/library/ff625247(EXCHG.141).aspx Before You Begin Make sure that the SSL certificate is ready to be installed on the ContentStore Mail Server (SMTP). Use a digital certificate signed by a Certificate Authority (CA), not a self-signed certificate. Make sure that SSL port 25 is open on the ContentStore Mail Server (SMTP) access nodes.
OPCFW_CODE
For layout settings, click on the ‘All Tabs’ icon and you will be redirected to the All Tabs screen. Now click on ‘Layout Settings’. By clicking on the ‘Layout Settings’ link you will be redirected to the Layout Settings screen. Click on the ‘App Menu’ icon and the App Menu screen will open. Select the “SalesPort app” from the app menu. You can select the desired role and module from the ‘Select Roles’ and ‘Select Object’ dropdown lists, and you can set the portal layout of the objects as per the user role selection. Set the object’s layouts for List view, Edit view, and Detail view. Select the fields from the left column and click on ‘Add’. After populating the layout, click on the ‘Save’ button to save the layout. You can rearrange the order of fields by clicking on the ‘Up’ and ‘Down’ icons beside the right column populated with the selected fields. You can set the Icon and insert the Title and the Description for List View, Edit View, and Detail View individually; these will display in the portal. You can set the Layout for custom objects as well. Note: You cannot set a layout for the Attachment and Call modules. Sync fields from active page layout: Sync fields from active page layout allows the user to copy fields from the default Salesforce record page layout to the portal module layout. By selecting this option, a popup will appear to confirm the sync of the fields from the active page layout. By clicking on the OK button, it will load fields into the Selected fields list. This is helpful for users who want the portal to show the same fields as the Salesforce module record page. Copy all fields to detail view: By ticking the “Copy all fields to detail view” option, the same fields that are selected for the edit view will also be selected for the detail view. Edit/Detail subpanel section-wise The Salesforce admin can manage the page layout by separating sections and adding the relevant fields to divide the information in the customer portal.
You can configure the sections whose details you want to display to users in the customer portal. Insert the ‘Title’ for the section, e.g., Account Information, Additional Information, Address Information, etc., and manage rows within the section. – Row configuration in Section Under the section, you will get two columns; here you can drag & drop the fields from the left (list of the Fields) to the right, into the row under the specific section. By clicking on the + (Add Row) button, you can add a new row into the section and drag & drop the relevant fields as per the section. You can remove unwanted fields just by clicking on the Cancel ‘X’ icon, and remove the row as well. – Section Configuration You can manage the page layout section-wise by selecting the fields for the selected object. By clicking on the section ‘Title’ text, you can edit the title as you want to display it in the customer portal. Once all the details are added and the page layout is configured, click on the Save button. You can also configure the Field properties from the page layout and manage the details of the Field. Beside the Field name, you will find a setting (gear) icon. By clicking on it, you will get a popup named Field Properties on the screen. You can edit the ‘Field Label’ and configure the field-related details that you can also configure from the “Field Management” section. The Fields will appear as you have set their properties. So, it will be easy to modify and configure any Field properties from the page layout. Once the Field Properties are configured, click on the Save button. Note: On the Layout Settings page, each selected field must be accessible or at least have read-only permission; only then will it be visible on the front end. To make Salesforce modules available in your WordPress portal, it is mandatory to set Portal Layouts for each accessible module.
OPCFW_CODE
Motors overheating with DRV8835 We have a problem with the DC motor driver DRV8835 that we have soldered to the black PCB attached below (4). Prior to this driver, we were using L293D h-bridges to run these DC motors and never experienced an issue. We changed the drivers from L293D to DRV8835 and the motors not only overheated but also drew twice as much current. With the old board (3) four motors (not loaded) were drawing ~0.2A, and with the new one (black PCB) they are drawing ~0.5A. We never had this issue with the old PCB and L293D drivers. We are using an ESP32s microcontroller and the schematic of the new board is given below (5). Before designing the PCB we set up a circuit on a breadboard (6) and everything was working fine. The motors got slightly warmer with the DRV8835 on the breadboard, but they were running at the same speeds compared with the L293D. Are there any suggestions for this problem that we can try implementing? We are working on robotics but we don’t come from an electronics background, so any suggestions are appreciated. Thanks! Summary The DRV8835 is providing a higher voltage to your motor even though you are using the same power supply. The DRV8835 uses FETs with a very low "On resistance" to switch power to the motors. Because of this, the voltage applied to the motors is very nearly the battery voltage. This snippet from the DRV8835 datasheet shows what you can expect: The L293D uses bipolar junction transistors in a Darlington configuration to switch power to the motors. Because of this, the voltage actually applied to the motors will be about 3 volts lower than the battery voltage. This snippet from the L293D datasheet shows you what you can expect: At 500mA through your motors, you would be getting a voltage drop across the DRV8835 FETs of about 0.4 volts. Compare that to the voltage drop of about 3 volts for the L293D. As far as I can tell from your schematics, you are using a 5 volt supply to drive the motors.
With the L293D, the motors are getting about 2 volts, maybe 2.4 volts. With the DRV8835, the motors are getting about 4.6 volts. The current drawn by a motor goes up as the voltage increases. You are seeing about twice the current drawn for about twice the voltage - that's pretty much what I'd expect. Since power is the product of current and voltage, the power your motor consumes will have gone up by a factor of 4 (2V × 2A). That's quite a bit more power than before, so the motors get quite a bit warmer than before. To make the motors run cooler, you can try a few things: Insert a resistor (1 ohm or so) in series with the motors. The resistor will get hot instead of the motor. Use 3.3V to power the motors instead of 5V. You might not have a 3.3V source available that can deliver the needed current, though. Switch to motors that are intended to run at the higher voltage. See if they are really getting hot, or just warmer than you expected. If they aren't hotter than the motor datasheet allows, then you just live with it. If you are using pulse width modulation, use a lower duty cycle. With the higher voltage, you should be able to use a shorter duty cycle to get the same speed. Looking at the DRV8835 data sheet, you should be able to run the PWM fast enough that the motor current will be averaged by the motor's armature inductance. I'd start with 20kHz as a nice round number, if your micro can provide that easily.
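The back-of-the-envelope arithmetic in the answer can be checked with a few lines of code. The voltage drops and currents below are the approximate figures quoted above (assumed round numbers, not measurements):

```python
# Rough motor-power comparison for the two drivers, using the
# approximate voltage drops quoted in the answer (assumed values).
V_SUPPLY = 5.0        # V, motor supply
DROP_L293D = 3.0      # ~3 V lost across the L293D Darlington outputs
DROP_DRV8835 = 0.4    # ~0.4 V lost across the DRV8835 FETs at 500 mA

def motor_power(v_supply, driver_drop, current):
    """Power delivered to the motor (W) after the driver's voltage drop."""
    return (v_supply - driver_drop) * current

p_l293d = motor_power(V_SUPPLY, DROP_L293D, 0.2)      # ~0.2 A measured
p_drv8835 = motor_power(V_SUPPLY, DROP_DRV8835, 0.5)  # ~0.5 A measured
print(p_l293d, p_drv8835)  # 0.4 W vs 2.3 W
```

With these assumed numbers the motors dissipate roughly five to six times the power, a bit more than the factor-of-4 estimate because 4.6 V is slightly more than twice 2 V.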
STACK_EXCHANGE
Versioning refers to saving new copies of your files when you make changes so that you can go back and retrieve specific versions of your files later. When creating new versions of your files, record what changes are being made to the files and give the new files a unique name. Follow the general advice on the site for naming files, but also consider the following: - Include a version number, e.g. "v1," "v2," or "v2.1". - Include information about the status of the file, e.g. "draft" or "final," as long as you don't end up with confusing names like "final2" or "final_revised". - Include information about what changes were made, e.g. "cropped" or "normalized". Simple file versioning One simple way to version files is to manually save new versions when you make significant changes. This works well if: - You don't need to keep a lot of different versions. - Only one person is working on the files. - The files are always accessed from one location. The directory below shows multiple versions of a web page mock-up called DMSSiteHome.jpg. Note the use of v1, v2, etc. to indicate versions. The notations "FISH" and "SandC" indicate different images that were swapped into some versions, i.e. major changes that were made. Saving multiple versions makes it possible to decide at a later time that you prefer an earlier version. You can then immediately revert back to that version instead of having to retrace your steps to recreate it. This method of versioning requires that you remember to save new versions when it is appropriate. This method can become confusing when collaborating on a document with multiple people. Simple software options Everyone at Stanford has access to two cloud services that offer version control features: Google Drive and Box. Drive's word processing, spreadsheet, and presentation software automatically create versions as you edit. - Any time you edit files created on Google Drive, new versions are saved as you go.
- Version information includes who was editing the file and the date and time the new version was created. - You can also see what changes were made from one version to the next (or between the current version and any older version) and revert back to a previous version at any time. Pros: The real-time editing feature means that Google Drive works well for collaborating on files with multiple people. And because the files are on Google Drive, they are accessible from anywhere. Cons: You are restricted to the software provided by Google, which may not have all the bells and whistles of your desktop word processing, spreadsheet, or presentation software. More: Find out more about using Google Drive for Stanford. Stanford Box creates and tracks versions of your files for you. - Any type of document can be stored and versioned with Box. - The comments feature lets you indicate changes that have been made between versions. - Documents can be shared with others, and Box will track who uploaded or updated each file and when. Pros: Box allows you to automatically sync folders on your desktop to your Box account. Microsoft Excel, Word, and PowerPoint files, as well as Google Docs and Spreadsheets, can be edited directly within the Box interface. Cons: Does not have real-time editing like Google Drive. Add-on: The Box Edit add-on allows you to launch local editing of any type of file from your Box account. Saving the file automatically creates a new version back on your Box account. More: Find out more about using Stanford Box. Advanced software options If you have more sophisticated version control needs, you might consider a distributed version control system like git. Files are kept in a repository and users clone copies of the repository for editing and commit changes back to the repository when they are done. Version control systems like git are frequently used for groups writing software and code, but can be used for any kind of files or projects.
Many people share their git repositories on GitHub.
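The manual save-a-copy scheme described under "Simple file versioning" can also be automated with a tiny script. The helper below is an illustrative sketch, not part of any tool mentioned here; the function name and the _v1/_v2 suffix convention are assumptions that match the DMSSiteHome.jpg example above:

```python
import re
import shutil
from pathlib import Path

def save_new_version(path):
    """Copy a file to the next version in a _vN naming scheme, e.g.
    DMSSiteHome.jpg -> DMSSiteHome_v1.jpg -> DMSSiteHome_v2.jpg."""
    p = Path(path)
    m = re.match(r"(.*)_v(\d+)$", p.stem)
    stem, n = (m.group(1), int(m.group(2))) if m else (p.stem, 0)
    new_path = p.with_name(f"{stem}_v{n + 1}{p.suffix}")
    shutil.copy2(p, new_path)  # copy2 preserves timestamps along with content
    return new_path
```

You would still record what changed in each version (e.g. "swapped in FISH image") in a log or in the file name itself, as the guidelines above recommend.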
OPCFW_CODE
Course Details: Cloud Computing – AWS Amazon The Cloud Computing Training content is developed with the goal of equipping trainees with the skills needed to take up the coolest job of the next generation. The Training introduces you to the Amazon Cloud and the skills required to work on AWS Amazon infra management at various stages to achieve meaningful insights. Understanding the applications and statistical concepts, and building Cloud Architects in the AWS Amazon Cloud field using the AWS Amazon tools, is at the heart of the course content. The tools and techniques required for asking the right kind of questions to make inferences and predict future outcomes are discussed during the course of the training. All along the training, we will be using real-world and real-time scenarios wherever applicable to give you the confidence to take up a Cloud Computing job and start performing from day one! Below are the objectives of the AWS Amazon Cloud training: 1. Get hands-on with the AWS Amazon Management Console environment and resource management 2. Understand the services available in the AWS Amazon Console 3. Get hands-on with AWS resources like EC2, ELB, Auto Scaling, IAMs, AMIs, RDS, CloudWatch, CloudFront, Route 53, S3, VPC, VPN, SNS, SES, etc. 4. Learn various techniques to design and configure the infrastructure using the AWS Amazon Management Console 5. Apply customer views to build the AWS Amazon infra services for productivity. This course will cover the following concepts on each day - System Operations on AWS Overview - Networking in the Cloud - Computing in the Cloud - Storage and Archiving in the Cloud - Monitoring in the Cloud - Managing Resource Consumption in the Cloud - Configuration Management in the Cloud - Creating Scalable Deployments in the Cloud - Creating Automated and Repeatable Deployments Who can undergo the Cloud Computing Training? Every industry is moving toward migrating to cloud infrastructure to get an edge over the competition in the market.
Given the dearth of skilled cloud engineers, there is an enormous opportunity for professionals at all levels in this area. 1. IT professionals looking to start or switch careers into Cloud Computing. 2. Professionals working in the field of System and Network Administration, and graduates planning to build a career in Cloud Computing. Pre-requisites for the Course? The ideal pre-requisites for this class are individuals who have: - A strong interest in Cloud Computing - A background in introductory-level basic concepts of systems administration - A background in either software development or systems administration - Inquisitiveness and good communication skills, to be a successful Cloud Computing Engineer - Some experience with maintaining operating systems at the command line (shell scripting in Linux environments, cmd or PowerShell in Windows)
OPCFW_CODE
Accuracy of converting from TLE/Orbital Elements to Cartesian if used for another propagator? Say that I wanted to propagate a real-life satellite based on an initial position in space. However, the only source of data I can get is from a tracking website like CelesTrak or Space-Track, where the output is in a TLE format (I might be wrong about this being the only option from Space-Track, but I digress.) Alternatively, I may be able to obtain information like orbital elements, for example using the NASA Horizons page. The TLE is designed to be used for SGP4, but the propagator that I would be using doesn't take orbital elements like a TLE; rather, it uses cartesian state vectors (ECI X,Y,Z in both position and velocity) directly to propagate. I know it is possible to convert a TLE to cartesian state vectors via a long/convoluted process. However, in doing so, I would be introducing errors into the system from the TLE/SGP4 system, which is less accurate than the propagator I would use from that point forward; that being said, the conversion would only be used for the initial state, not for any other part of the propagation. Similarly, it's possible to convert from orbital elements to Cartesian State Vectors, but those orbital elements are also mean values and as such are also inaccurate. What kind of accuracy loss is incurred by converting a TLE or Orbital Elements into cartesian state vectors, for the sole use of being an input to a more-accurate propagator? Does it mainly depend on the length of the simulation, or is there greater error from the conversion process alone? If you are interested in asking "What would be the best way to try to do this.." I'll be happy to write an answer. Maybe you can ask that separately? @uhoh perhaps, although I've seen there are papers detailing instructions on converting from SGP4 to Cartesian - are you talking about 'the best way to do this' meaning 'the most accurate way to convert TLE to XYZ'?
yes I think; what I've always wanted an excuse to try is to use TLE+SGP4 to generate an ensemble of state vectors at say 1 or 5 minute intervals for one period around the TLE's epoch, then switch to a normal cartesian propagator and propagate each one into the future. This generates a "cloud" of positions at some future epoch. I was curious if this average orbit would be more reliable than just choosing one point in time and getting only one state vector from SGP4. The more I think about it though the more I realize this may be a lot of work that ends with ambiguous results... @uhoh I think your suggestion is the answer to "What kind of accuracy loss": i.e. do that process for the object of interest and see how far the future cloud differs from the next TLE. Obviously the object could be station-keeping or changing attitude so repeat a few times to get a better overview. @Puffin ya I see what you mean. Whether the TLE is representative of the spacecraft's actual state is a separate question though. Here I'm only talking/thinking about using a TLE at face value. How predictive it is of the next TLE or how closely it matches what's going on at the moment are great questions though! If a two line element represented the exact state of a satellite at the epoch time of the element set, there would be zero penalty in applying the SGP4 algorithm to obtain cartesian coordinates, which are in the True Equator, Mean Equinox (TEME) frame. From there it would be a simple matter of a coordinate transformation to convert those TEME coordinates to something sane such as the J2000 frame (better said, a semi-simple matter; TEME is not well defined). A two line element set does not however represent the exact state of a satellite at the epoch time of the element set. It instead represents the two line element set that minimizes a weighted scalar error metric over a span of observations, with states propagated via the SGP4 algorithm.
The inherent limitations of the SGP4 algorithm means that the cartesian coordinates computed from a two line element set will have a significant error, even at the epoch time of the element set. Edited in response to very constructive criticism from @DavidHammen and @CallMeTom. I agree with them, but I didn't say those things in my initial answer, and I should have. If the only source of data you have is a TLE, then you are starting from a low-quality initial state, which you should expect to be wrong by several kilometers. All a high-quality propagator can do from there is tell you where something that actually was where the TLE claimed your satellite of interest was will go. You don't know where your satellite actually was, so nothing can tell you where it will actually go. The other propagator will do a better job than SGP4 of estimating where an imaginary object at the TLE's initial state will end up, but that doesn't mean the imaginary object will evolve into a state closer to the state of the real satellite. The error built into the very approximate nature of a TLE is not recoverable without a better source of data. If you have something else, then use it instead, because TLEs are terrible. However, with all that in mind, if all you have is a TLE, and you are interested in what happens to a notional satellite that really was where the TLE claimed something was, then yes, that is the best you're going to be able to do. TLEs exist for the purpose of being easily distributed. SGP4 exists for the purpose of turning TLEs into something more useful, like Cartesian position and velocity. Once you have those as the initial state at your desired times, handing them to a different propagator with better models for gravity, drag, solar pressure, and everything else is the best way to proceed, as long as you remember that trusting the TLE to begin with may well be your biggest source of error. 
I do this routinely at work, but only in design studies to model sensor performance on a moderately realistic simulated satellite environment. In that case, propagating years into the future is not my goal. I just use a bunch of TLEs to give me a lifelike distribution of initial states, because being off by tens or even hundreds of kilometers in-track at the starting point doesn't matter to the simulation results; all that matters is how the states evolve from their imaginary starting conditions, for which I would never use SGP4. If I am doing anything with a currently operational satellite, I always have something much better than a TLE to start with. If you have not just another propagator, but also an orbit determination tool, then you can play with using the SGP4 output to simulate observations, and determine your own orbit from that. I stress "play", because the only question this answers is "I wonder what would happen if..." You're not going to make a TLE-derived orbit better without real data; but if simulation is all you're after, then it can be interesting to explore this option. Real data is available from several commercial vendors, but it's not cheap -- except perhaps in comparison to the painfully expensive commercial orbit determination tools. The process of converting out of TLE & TEME seems long and convoluted if you plan to type it all in yourself, but you don't have to. You can download SGP4 from https://www.space-track.org/documentation#/sgp4 and use it to process a bunch of TLEs into long lists of position and velocity; osculating Keplerian elements; latitude, longitude, and altitude; or a variety of other formats. Then you can do whatever you want with them. @uhoh: Never take a TLE at face value! Its components are mean elements, and so is part of the definition of its coordinate system. At face value, they describe the motion of a fictional satellite with respect to a fictional equinox. 
However, everything is carefully arranged to combine and cancel in just the right way to get something reasonable out, but only if you use SGP4 to do it. In the words of Spacetrack Report #3, The NORAD element sets are “mean” values obtained by removing periodic variations in a particular way. In order to obtain good predictions, these periodic variations must be reconstructed (by the prediction model) in exactly the same way they were removed by NORAD. The point cloud approach might produce some interesting results, but I think the main flaw is we're missing some important data that space-track does not provide, namely the covariance. If we had that, we could replace each point in time with not a single state vector, but rather a large ensemble normally distributed around that point, and see how a particular confidence volume grows over time. If one wants precision orbit propagation, starting with a TLE is exactly what one is not supposed to do. What would be nice would be the raw data that went into the formation of a TLE. Those data are not available to the public. Range, range rate, azimuth, and elevation data from a ground station might be available to the owner / operator of a satellite, as may be position and velocity vectors from a space-qualified GPS receiver onboard the satellite. It's those kinds of data that are needed for precision orbit determination. TLE/SGP4 is supposed to give a fast and easy way to get a position in the near future with acceptable but low accuracy. You will NEVER regain the accuracy you already lost, but instead make the error bigger. There are cases where converting TLEs to state vectors is reasonable, but in these cases you should not propagate with SGP4; use the state vector for the TLE epoch, and you always have to know that you are starting with an estimate, not a real position! I couldn't find a good paper on TLE accuracy yet, but a good estimate is an error of up to 5 km in the in-track direction, and within 1 km in most cases! (-1 b/c you are wrong!)
STACK_EXCHANGE
This month's letter includes a summary of our new Visual Studio Gallery website that we launched a few weeks ago, the updated VSX developer center, information about what the VSX team has been working on in the past few months, event news on VSX, some new VSX projects released, more VSX content online, and a preview of what's coming next month. What's new with the VSX team Our team continues to work on upcoming releases of the Visual Studio SDK for VS 2008 as well as planning around the next version of Visual Studio. We have part of our team working on the next update for the VS SDK while others work on following versions that will include more tools within the VS SDK itself. Our team is working on determining when the next release of the VS SDK will be, and we expect to know more details on that next month. We are determining if we want to release an updated VS SDK soon with very small changes or wait a bit longer to release a newer VS SDK which contains more significant enhancements. While we work on upcoming versions of the VS SDK, we are also working on reducing the size of the VS Shell runtimes. One thing we plan to ship when VS 2008 Service Pack 1 ships is to re-release the VS Shell runtimes. The new redistributable packages will not include the actual .NET Framework 3.5 installation bits. The new VS Shell chainer feature will still automatically check for the .NET Framework 3.5 and install it as needed. This will reduce the size of the VS Shell setup by about 200 MB. Visual Studio Gallery announced Our team is still buzzing with enthusiasm from our recent launch of the new Visual Studio Gallery website.
For additional news and announcements for the Visual Studio Gallery: - VSX Team blog: Visual Studio Gallery announced - Soma's blog (our developer division senior VP): Visual Studio Gallery - Anthony Cangialosi's blog: Welcome to the Visual Studio Gallery Visual Studio Gallery tips You can access the site via http://visualstudiogallery.com/, or the shorter friendly redirect http://vsgallery.com/. Anthony Cangialosi, program manager on our VS Ecosystem team who is responsible for the Visual Studio Gallery site, has started blogging again with information and news about the new site. Recently posted on Anthony's blog was: Seeing all the VS Gallery Extensions Just thought I would share a useful tip for the Gallery. You may find that you want to see more than the top 10 newest items or the 10 most viewed items. We'll be adding a more link to the bottom of those in the near future but in the mean time you can see this by using a trick in the search bar. Enter a space (the actual character space with the space bar) into the search control in the upper right corner of the gallery. Then press the search button. You'll get back a list of all the extensions since all extensions will have a description with a space in it. Now sort this list by the column you are interested in, modified date, number of views, cost category, etc. If you use the tip above and search the Visual Studio Gallery with just a single space character, you'll see we just passed 500 items listed this week, which means we are averaging about 100 new items listed on the site per week. VSX Developer Center updated The Visual Studio Extensibility Developer center at http://msdn.com/vsx was updated recently with a new interface similar to the one found on the VB and C# dev centers, and others. The site now has its own stand-alone center with independent navigation pages on VSX via the tabbed navigation (Library, Learn, Downloads, Support, Community). 
There will be additional enhancements to the site soon along with upcoming new VSX content like whitepapers, videos, samples, and more. VSX on Channel 9 Late last month we had two videos posted on Channel 9, both interviews by Dan Fernandez: Channel 9: Anthony Cangialosi and Ken Levy: Visual Studio Gallery I catch up with Anthony Cangialosi and Ken Levy from the Visual Studio Extensibility team to talk about the newly launched site for finding Visual Studio extensions, www.visualstudiogallery.com. You'll also see Ken walk through using two cool, free extensions that you can download from the gallery, StickyNotes and the Source Code Outliner PowerToy. Ken and Aaron talk about the new features for extensibility in Visual Studio 2008 and the Visual Studio 2008 SDK, touching on key topics like: - How you can build your own IDE with the Visual Studio Shell - How you could create your own language service using Babel - How to plug into editor features like IntelliSense for statement completion - How to build your own "Hello World" tool window New "How Do I?" Videos for Visual Studio Extensibility Many new "How Do I?" videos on VSX have been published, and you can subscribe to the RSS feed for "How Do I?" videos for VS and VSX. These are great videos created by VSX developers Hilton Giesenow and Dylan Miles. New articles on LearnVSXNow! István Novák continues his awesome series of VSX related educational content he calls LearnVSXNow!, now with 15 VSX technical articles posted and more great educational content on the way. DreamSpark provides Microsoft developer tools to students for free The new Microsoft DreamSpark combined with our free Visual Studio SDK opens the door for many students to learn, use, and extend Visual Studio for free. Some additional comments from our team in Aaron Marten's blog: FREE Visual Studio 2008 for College Students via DreamSpark.
Upgrading VS 2005 Packages to VS 2008 If you have created packages using the VS SDK for VS 2005 and have started or plan to start using the VS SDK for VS 2008, check out James Lau's blog post: Upgrading VS 2005 Packages to VS 2008: A More Advanced Guide. VSX at events Earlier this week, Quan To and I spoke at the local .NET Developers Association user group on VSX: Extend Your Visual Studio Development Experience. The session lasted over an hour and a half, and we counted 59 attendees total. Quan showed how to create a simple source code outliner extension using the VS SDK. Quan has a link to the walkthrough steps for that demo and a short summary of our presentation in his blog post: VSX Talk at the .NET Developer Association weekly meeting. If you plan to give a VSX related presentation at a conference or user group, feel free to let me know in advance so that I might mention it here on the VSX team blog to help increase awareness. VS extension tips of the month We posted a new PowerToy called PowerCommands for Visual Studio 2008, and it's already the #1 most viewed listing on the Visual Studio Gallery and the #1 download on MSDN Code Gallery (not including documentation downloads). If you downloaded the PowerCommands readme prior to today, you may want to check out the updated version of the readme on the download page. The new PowerCommands utility, along with the Source Code Outliner PowerToy and StickyNotes, all ranked as the top 3 most viewed listings on the Visual Studio Gallery as of today, make great complementary free VS IDE productivity tools. As of today, PowerCommands has over 5000 unique views on the Visual Studio Gallery and over 4000 downloads from MSDN Code Gallery. Feel free to post messages in the Discussions or Issue Tracker pages of the Code Gallery page to provide feedback for possible updates and future versions of PowerCommands.
In next month's letter, we will have more news from the team, additional VSX content online, and additional information about our upcoming version of the VS SDK for VS 2008. Please send your feedback to us via the Contact link on any of our team member blogs, or post a technical question in the MSDN Forum for VSX. You can also email me directly at email@example.com or using the Email link on my blog. Visual Studio Tools Ecosystem
OPCFW_CODE
Build Generated Samples For a current build generated reference of samples, click here. JasperReports Library Samples The JasperReports project tree contains the library source files and several demo applications that are easy to run and test using the Ant build tool. The project is available for download at SOURCEFORGE.NET. The samples can be found in the demo/samples directory of the project. Some of them use data from the HSQL default database that is also supplied in the demo/hsqldb directory of the project. Details about the structure of a JRXML report design and the use of different elements are available in the Schema Reference document. For each sample below, a *.zip file is provided that includes the *.jrxml, *.pdf and *.html files. For a quick review of the JasperReports library features, we present here some of those sample reports. Table of Contents This is the most complex sample provided with the package. It explains how to use data grouping and more complex element formatting. Crosstabs are a special type of table component in which both the rows and the columns are dynamic. They are used to display aggregated data using tables with multiple levels of grouping for both rows and columns. Illustrates how subreports might be used in more complex reports. Data Source Sample When generating reports, JasperReports can make use of various kinds of data, as long as the parent application provides a custom implementation of the net.sf.jasperreports.engine.JRDataSource interface that will allow it to retrieve that data. The library comes with a default implementation of this interface that wraps a java.sql.ResultSet object and lets people use JDBC data sources seamlessly. Text formatting features are very important in document generating software and JasperReports offers a complete range of font settings: size, style, text alignment, color, etc.
The engine can fill multi-column reports either vertically (from top to bottom) or horizontally (from left to right), depending on the "printOrder" attribute specified in the report design. Here's a sample showing the detail section being generated horizontally. This sample illustrates the use of hyperlink elements. They allow the creation of drill-down reports and generally speaking offer a higher degree of interaction with the document viewers. The library has built-in support for generating documents in different languages. In this sample, users will learn how to use image elements in their reports. This sample shows how you can include graphics and charts in your reports. The JasperReports library does not produce graphics and charts itself, but allows the use of other specialized libraries and easily integrates this type of element into the documents it generates. Scriptlets are a very flexible feature of the JasperReports library and can be used in many situations to manipulate the report data during the report filling process. Shows how different graphic elements such as lines and rectangles can be used in the documents. This is a special sample with debug purposes. It shows how the text fields behave when they stretch downwards in order to accommodate their entire content. Element stretching and page overflow mechanisms are very sensitive aspects of report generating tools. Styled Text Sample The text elements can contain style information introduced using an XML syntax based on nested tags. Table of Contents Sample Some reports may require the creation of a "table of contents" structure, either at the beginning of the document or at the end. Here is a simple example for those who want to learn how to create such structures. JasperReports can generate documents in any language. This is a simple example that shows how font and text encoding settings should be used. Sample that shows how to include barcodes inside reports using the Barbecue open source library.
Shows how to set up a report in "Landscape" format.
OPCFW_CODE
Gone is a wiki engine written in Go. It's - Convention over Configuration and - designed with Developers and Admins in mind. With Gone, you can - display Markdown, HTML and Plaintext straight from the filesystem. - edit just any file that's made of text. - have all this without setup, no database needed, not even the tiniest configuration. So go get it! Assure that you have Go installed. Now, install the application via $ go get github.com/fxnn/gone Binary releases will follow. You can simply start Gone by calling its binary. The current working directory will now be served on port 8080. - Display content. test.md in that working directory is now accessible as http://localhost:8080/test.md. It's a Markdown file, but Gone delivers a rendered webpage. Other files (text, HTML, PDF, ...) would simply be delivered as they are. - Editing just anything that's made of text. In your browser, append ?edit in the address bar. Gone now sends you a text editor, allowing you to edit your file. Your file doesn't exist yet? Use - Customize everything. Change how Gone looks. gone -help for usage information and configuration options. NOTE that these features only apply to UNIX based OSs. Especially the Windows implementation currently does not support most of the access control features. Gone uses the file system's access control features. Of course, the Gone process can't read or write files it doesn't have permission for. For example, if the Gone process is run by user joe, it won't be able to read a file only user ann has read permission for. Likewise, an anonymous user who is not logged in can't read or write files through Gone, except those that have world permissions. For example, a file rw-rw-r-- might be read by an anonymous user, but he won't be able to change that file. Also, in a directory rwxrwxr-x, only a user being logged in may create new files. Users can login by appending ?login to the URL.
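The permission examples above boil down to plain POSIX mode bits: whether an anonymous visitor may read a file is simply whether the file's "world" read bit is set. A minimal sketch of that check (file names here are made up for illustration):

```python
import os
import stat
import tempfile

# Two throwaway files standing in for wiki pages
root = tempfile.mkdtemp()
public = os.path.join(root, "page.md")
private = os.path.join(root, "secret.md")
for path in (public, private):
    with open(path, "w") as f:
        f.write("content\n")

os.chmod(public, 0o664)   # rw-rw-r--: world-readable, anonymous users may view it
os.chmod(private, 0o660)  # rw-rw----: no world bits, so only the serving
                          # process's owner/group (i.e. authenticated access) works

def world_readable(path):
    """True if the file's mode grants read permission to 'others'."""
    return bool(os.stat(path).st_mode & stat.S_IROTH)

print(world_readable(public), world_readable(private))  # True False
```

The same idea extends to the write bit (`stat.S_IWOTH`) for anonymous editing, and to directory world-write bits for anonymous file creation.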
The login information is configured in a good old .htpasswd file, placed in the working directory of the Gone process. Authenticated users can read and write all files that are readable or writeable, respectively, by the Gone process. Note that there's a brute force blocker. After each failed login attempt, the login request will be answered with an increasing delay of up to 10 seconds. The request delay is imposed per user, per IP address and globally. The global delay, however, grows ten times slower than the other delays. - Authentication information is submitted without encryption, so use SSL! - Anyone may read and write files just by assigning world read/write permissions, so better chmod -R o-rw * if you want to keep your stuff secret! - Gone uses the working directory for content delivery, so better use a start script which Index documents, file names Calling a directory, Gone will look for a file named index. Calling any file that does not exist (including index), Gone will try to look for files with an extension appended and use the first one in alphabetic order. So, the file http://localhost:8080/test.md could also be referenced as http://localhost:8080/test, as long as no test file exists. In the same way, an index.md file can be used as index document and will fulfill the above requirements. This mechanism is transparent to the user, no redirect will happen. When you create files with the extension url and put a URL inside them, Gone will serve them as a temporary redirect. $ echo "https://github.com" > github.url A call to http://localhost:8080/github will get you redirected to GitHub now. Gone uses some Go templates for its UI. The templates are shipped inside the executable, but you can use custom versions of them. For general information on Go HTML templates, see the html/template godoc. With your web root as working directory, invoke It creates a new folder .templates which will never be delivered via HTTP. You'll find all templates inside and can modify them.
If you (re)start Gone now, it will use the templates from that directory. Note that you can also supply a custom template path. gone -help for more information. Some day, Gone might be - extensible. Plug in version control, renderers, compilers or anything you like, cf. #29 - granting file access on a group level, using a - searchable in full text. If you want to modify sources in this project, you might find the following information helpful. Third party software Please note that the project uses the vendoring tool https://github.com/kardianos/govendor. Also, we use the standard go vendor folder, which means that all external projects are vendored and to be found in the vendor folder. A list of projects and versions is managed under vendor/vendor.json. If you build with go-1.5, enable the GO15VENDOREXPERIMENT flag. Gone imports code from the following projects: - abbot/go-http-auth for HTTP basic authentication - fsnotify/fsnotify for watching files - gorilla, a great web toolkit for Go, used for sessions and cookies - russross/blackfriday, a well-made markdown processor for Go - shurcooL/sanitized_anchor_name for making strings URL-compatible - golang.org/x/crypto for session-related cryptography - golang.org/x/net/context for request-scoped values - fxnn/gopath for easy handling of filesystem paths Also, the following commands are used to build gone: - pierre/gotestcover to run tests with coverage analysis on multiple packages - mjibson/esc for embedding files into the binary Gone's frontend wouldn't be anything without - ajaxorg/ace, a great in-browser editor +------+ | main | +------+ | | | +---------+ | +---------+ v v v +-------+ +------+ +--------+ | store | | http | | config | +-------+ +------+ +--------+ / | \ +--------+ | +--------+ v v v +--------+ +--------+ +--------+ | viewer | | editor | | router | +--------+ +--------+ +--------+ main just implements the startup logic and integrates all other top-level components. Depending on what config returns, a command is executed,
which by default starts up the web server. From now on, we have two main parts. On the one hand, there is the store that implements the whole storage. Currently, the only usable storage engine is the filesystem. On the other hand, there is the http package that serves HTTP requests. The router component directs each request to the matching handler. Handlers are implemented in the viewer and editor packages: the editor serves the editing UI, the viewer is responsible for serving whatever file is requested. Other notable packages are as follows. The http/failer package delivers error pages for HTTP requests. The http/templates package caches and renders the templates used for HTML output. The resources package encapsulates access to static resources, which are bundled with each executable. See the Godoc for more information. Licensed under the MIT License, see LICENSE file for more information.
OPCFW_CODE
SCDPM Out of Disk Space I'm starting to get insufficient storage space on our DPM (2012) server. I've reduced the retention range on some of the jobs. How good is DPM at updating its volume allocations? i.e. will it reduce the volume allocation if it needs less space. If not, is there any routine I can run to get it to reassess/compact the data it's using? DPM estimates the amount of disk space it needs for replica and recovery point volumes based on what it thinks the growth rate will be; as a result I've found it can often allocate volumes that are too large, especially if you enabled auto-grow - you can shrink the volumes though to recover this over-estimated space. You could try right clicking on one of your protection groups and selecting... a dialog box will open showing how much space is allocated to each replica & recovery point volume, there should be a "Shrink" link next to each recovery point volume, if you click the link DPM will recalculate the space needed for that recovery point volume and if possible reduce the volume. Also have you got anything under the "inactive protection for previously protected data" protection group? depending on how you removed these items from your protection group they could still be using up disk space - tends to happen if you removed them via modifying the protection group as it doesn't give you the option of deleting the replicas from disk. What exactly is the message you're getting? cscott: I've run through every group, albeit most won't shrink ("data at the end of the volume"). I suspect I'll need to leave it over the weekend post-shrink to see if that has helped, at least short term. There are no inactive groups unfortunately. Of course it's somewhat impossible to say what volumes are being used... twin--turbo: A couple of different errors: DPM has run out of free space on the recovery point volume and will fail synchronization for Server XYZ in order to prevent existing recovery points from being deleted.
Used disk space on replica volume exceeds threshold of 90%. Just read your error message - it's a completely different issue from what I assumed. I thought you meant your physical disk was full; this just sounds like DPM hasn't been able to autogrow a volume. This could be caused by the disk being full, so it's probably worth checking you've got space on your disk by going to the "Management" view and selecting the "Disk" link; check how much free space the disk has. Assuming you've got free space on your disk, the error message you're getting just indicates that one of the volumes on your disk is full and DPM couldn't autogrow it. Go to your protection groups and locate Server XYZ and select the resource that it can't back up (will be shown in red). Right click and select "Modify volumes", same window as before but it will only show volumes for this server; go to the replica volume column and increase the size of the replica volume by at least 25%. Ah, well if I tried to grow the volume it would say there was insufficient local disk space. Shrink has had some effect because I can now grow the two that are reporting with errors. I'm sure medium/long term there will still be an issue, because our file servers inevitably just increase in disk space usage. Yeah, sounds like you need to add some disks to your DPM server! We have a Buffalo TeraStation, and I have a horrible feeling it's already full... (a system I inherited) Lots more errors this morning, so clearly the limit is definitely reached. This is the error I get when I try to modify a disk allocation: There is insufficient space on the storage pool disks to allocate the replica and recovery point volumes Add more disks to the storage pool by using the Disks tab in the Management task area, or reduce the specified allocations Yep sounds like the TeraStation's full... Yes, you need to add more disks/remove things - how about offloading some of it to tape maybe? Gongalong: how well does the TeraStation work with DPM?
Which model do you have? Apologies for the lack of follow up. We had a (Buffalo) TeraStation III iSCSI Rackmount TS RIXL. Fortuitously or not it failed, with what appears to be a common fault, so we switched to an HP StoreEasy with lots more space. I can't recommend the Buffalo because it failed less than 2 years from purchase.
OPCFW_CODE
The penetration of smartphones and the number of apps in the market is growing more and more, and at an equal pace the number of people who want to develop and build apps by themselves keeps rising. The process of developing an app seems very complicated to most; for this reason I asked some questions to two iOS developers, Mathieu and Alex. The purpose of my interview was to find out the basic skills required to enter the iOS development world, and then to share this knowledge with you. What do you need to become an iOS developer? To start developing on iOS, you must have a basic knowledge of algorithmics and Object Oriented Programming. Learning Objective-C (ObjC) can be pretty difficult if you have just started to approach programming languages. It's better to start off with classic languages like C and then learn the OOP (Object Oriented Programming) paradigm. If you are a ninja and you want to go directly to the highest level, you can find some references about the use of Objective-C in Apple's frameworks. What are the main differences between development for desktops and mobiles? They may seem similar but they are two completely different worlds. In mobile development you have to be more careful with system resources; it's very important to manage memory wisely because phones and tablets are still not as powerful as desktops, even though they are evolving quickly. Users want fast and responsive apps, so you have to use multithreading to reduce loading time. You need to have some knowledge of asynchronous programming concepts. Creating an app, you need to build the user interface and a fast loading process for the content at the same time. Multithreading allows you to make an app that can download content in the background and display the UI (user interface) at the same time. Do you need only technical skills? Of course not. You must also have some basic design & UX (user experience) skills.
During the development, as I told you before, you also need to create the UI and try to develop it in a user-friendly way. Put yourself in your users' shoes, try to imagine how they will navigate your app and do your best to make it comfortable and pleasant for them. Think about the different ways of interaction with each type of device. For example, using a mobile phone you don't have a mouse but a touchscreen, and moreover, mobile screens are smaller, so when creating a mobile app you have to use a UX that is completely different from the ones used in a desktop app. Why should you choose iOS instead of other platforms? iOS is a very popular platform, among developers and also end users. At the beginning, developing for iOS can seem more difficult than for Android, but this is not true! In fact Android is much more fragmented than iOS. Java is more popular than ObjC and for many developers it's also easier. But once you have learnt how to code in ObjC, you will find out that it's not so difficult, and that Apple's frameworks are very well designed, so developing becomes very easy. You can always find some help in the wide range of help documentation you can find online (UIKit, ...). My advice for you Among the many tips I could give you I want to state one in particular, the most important: Don't start off with storyboards. According to Apple: "A storyboard is a visual representation of the user interface of an iOS application, showing screens of content and the connections between those screens. A storyboard is composed of a sequence of scenes, each of which represents a view controller and its views; scenes are connected by segue objects, which represent a transition between two view controllers." Usually beginners start to develop for iOS with storyboards, but this can be counterproductive because you learn how to develop using a scheme. The problem is that when you want to do more complex things you find it difficult to think outside that scheme.
Learning to develop using only code, or at least limiting the use of storyboards, you will increase your experience with the frameworks, Objective-C and the runtime. This will be helpful when you have complex problems to solve. Once you become more experienced you can choose to start using storyboards if you think they can speed up your workflow. In any case, we think they don't. :) After reading this article you should know a little bit more about iOS development, and about what you need to know to start developing your app by yourself. If you have any questions and want some more advice, just leave a comment and we will be glad to answer. :)
OPCFW_CODE
Our inspiration for JUNTO was solving the following problem. According to the most recent data, of the total 41.3 million immigrants in the United States in 2017, about half, 20.4 million, spoke English less than "very well." Knowing this, we wanted to provide immigrants with a leading practice resource that is both engaging and interactive for the targeted demographic. While many practice resource applications are available in an already saturated market, there is currently no SMS platform that helps immigrants study for the Naturalization Civics Immigration Test, which is composed of 150 U.S. History questions. While many immigrants come to America from a weak socioeconomic background, we believe that a comprehensive study platform that considers not only their working hours but also the common lack of technical proficiency among immigrants is important. JUNTO will help immigrants swim through the citizenship process. What it does JUNTO is a learning practice resource tool that was created to provide a periodical study system for working-class immigrants who wish to take and pass the Civics Immigration Test. While there have been practice resource tools in the past, JUNTO is the only interactive SMS platform that sends periodical texts free of charge, keeping in mind that almost 90% of immigrants are working-class individuals and the average study period for immigrants is 5 months. Not only does our product have the best UI/UX interface presented in a Civics Study Resource Tool, but it acknowledges that almost three quarters of incoming immigrants are not proficient with technology. How we built it We built our application mainly using Python. Our back-end is running a Python Flask server that uses Ngrok to handle webhooks for Twilio's API. We use a simple algorithm that selects and sends random questions to our users using Twilio's API. We have a file of questions and answers we read from.
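The question-selection step described above can be sketched as follows. The pipe-delimited file format and function names are assumptions for illustration; the write-up doesn't describe the actual data layout:

```python
import random

# Hypothetical question file contents: one "question|answer" pair per line.
# The real JUNTO data format isn't described in the write-up.
SAMPLE_FILE = """How many U.S. Senators are there?|one hundred (100)
What is the supreme law of the land?|the Constitution
Who makes federal laws?|Congress"""

def load_questions(text):
    """Parse pipe-delimited question/answer pairs, skipping malformed lines."""
    return [line.split("|", 1) for line in text.splitlines() if "|" in line]

def pick_question(questions, rng=random):
    """Choose a random question to send to a user via the SMS gateway."""
    question, answer = rng.choice(questions)
    return question, answer

questions = load_questions(SAMPLE_FILE)
question, answer = pick_question(questions)
```

In the real service, `question` would be handed to Twilio's messaging API inside the Flask webhook handler, while `answer` stays server-side to grade the user's reply.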
Challenges we ran into We ran into a lot of challenges using Twilio's API. Sending texts was very easy to accomplish; however, we encountered debilitating bugs when we tried to implement receiving a text from our users. After a lot of trial and error, we finally came up with a working prototype and realized that the phones we were initially using didn't support some of the functionalities we were trying to implement. Accomplishments that we're proud of We're very proud of our prototype since we put a lot of work into it. While we began to build a prototype for an iOS app that would provide a network and like-minded services, we are proud of the research that we put into the product to ensure successful customer validation. We believe that while an iOS app may seem appealing, many incoming older immigrants are not tech-savvy and are much more comfortable with an application that is provided through an SMS server. What we learned We learned that although it is sometimes simpler to default to the most obvious idea, the most profitable and effective products are those that meet the needs of the targeted demographic and consider their issues and struggles in order to maximize effective customer loyalty and usage, alongside customer validation. What's next for JUNTO We believe that JUNTO can ultimately grow into an intergovernmental organization supported by the U.S. Immigration Services and Border Control. By providing a proficient service that simplifies the educational process for immigrants preparing for the naturalization test, the U.S. government can state that it has legally provided all the services citizens need to succeed in passing the test, alongside helping many immigrants achieve citizenship free of charge.
Numerical topics: exchanging Fortran unformatted data between heterogeneous machines. With SGI and Cray compilers: assign; with Compaq compilers and Intel compilers (8.0. GoTo (goto, GOTO, GO TO, or other case combinations, depending on the programming language) is a statement found in many computer programming languages. Fortran is a general-purpose, imperative programming language that is especially suited to numeric computation. An error code could be looked up by the programmer in an error-messages table in the operator's manual, providing them with a description of the problem. Fortran 95, published officially as ISO/IEC 1539-1:1997, was a minor revision, mostly to resolve some outstanding issues. Fortran 90/95 reference: * stands for the default input/output unit; format is '(formats)' or a label, or * for list-directed input/output; iostat saves the error code to the given variable. Beginner's guide to FORTRAN 90/95 using a free downloadable Windows compiler: the compiler will report two error messages when it attempts to compile. Jun 7, 2005: I strongly recommend using Fortran 95, as a multitude of features have been added; ... the error message has to come from a STOP statement. Aug 27, 2014: We must be able to call the FORTRAN code again after the error. I ended up writing a script that could handle about 95% of all occurrences of.
Specified error, MS Visual Database Tools, SQL Server 2008. Fix for an error when opening diagrams in SQL Server 2005 – Taringa! – 4 Sep 2014: the problem occurred in SQL Server 2005 (I don't know if it happens in other versions). (MS Visual Database Tools) | Microsoft Visual Studio LightSwitch. SQL Server 2008 Unspecified error (MS Visual Database Tools) – I had this error today. It had nothing to do with. Rtsp/1.0 500 Internal Server Error: an internal server error (sometimes called a 500 internal server error) is a generic error message that your server gives you when it runs into a problem. Since yesterday I haven't been able to load http://www.crunchyroll.com/comics/manga or any page beneath it; I get a 500 internal server error response from the page when loading it. Microsoft FORTRAN Compiler Version Information – provides descriptions of Microsoft Fortran compilers and tools, along with features, notes, and photos. Copyright © Janet A Nicholson 2011, fortrantutorial.com: correct the two errors, click Execute; there is now one further error. I just did a search on "windows 95 error messages" (w/o quotes) and scanned through the first couple of pages. This and other searching I've done lead me to think the. Fortran for Microsoft .NET: symbolizing that Fortran 95 syntax is followed; anything else should be a compiler error. Whether you write your own programs in Fortran 77, or merely use code written by others, I strongly urge you to use the FTNCHEK syntax checker to find mistakes.
If a client successfully connects but later disconnects improperly or is terminated, the server increments the Aborted_clients status variable and logs an "Aborted connection" message.
Alexa skills, Lambda, and weird AWS Key Management Service requests

I'm starting with Alexa development and AWS in general. I've subscribed to the free tier, created my skill, set up an AWS Lambda function, and done a little testing. I have nothing else running on AWS. What I've noticed is that, apart from AWS Lambda and CloudWatch usage, I see requests to AWS Key Management Service on my Billing Dashboard. I'm not using any environment variables, which was one of the reasons for KMS requests suggested by Google. From my billing management report I have 3 times more KMS requests than Lambda invocations (30 vs 9). I know these are small numbers, but KMS gets 20k requests in the free tier while Lambda gets 1,000,000, and I just don't understand how this connects. Is AWS KMS required for Lambda operation? What is it used for?

Many AWS services use KMS to manage keys and access to keys while keeping them under your control. The full list is documented here: https://docs.aws.amazon.com/kms/latest/developerguide/service-integration.html. Pricing of KMS is per key that you create and manage: https://aws.amazon.com/kms/pricing/. Keys automatically created by AWS services are free. I just checked my bill and I am not charged for KMS at all. I suggest you enable CloudTrail logs on your account to understand where the KMS calls you're seeing originate from. To query CloudTrail logs, you can run a simple SQL query in Athena. Doc to set up Athena for CloudTrail: https://docs.aws.amazon.com/athena/latest/ug/cloudtrail-logs.html. SQL query to analyze KMS calls:

SELECT eventtime, useridentity.type, eventsource, eventname, sourceipaddress
FROM "default"."cloudtrail_logs_logs_sst_cloudtrail"
WHERE eventsource = 'kms.amazonaws.com'
AND eventtime BETWEEN '2018-07-01' AND '2018-07-31';

Hi Sebastien, thanks for your answer.
Does not being charged for KMS in your case mean that you don't exceed the 20k free-tier KMS calls, or that you don't see any KMS request usage in your billing at all? Right now in my case, for awskms I see 49.00/20,000 requests. It also says that the free tier includes 20,000 free requests per month for AWS Key Management Service.

Good question. I dived deeper into my bills and figured out that I have a few KMS requests (15 this month across all regions), so I am falling into the free tier. I am now checking whether CloudTrail can help to understand the source of these calls. I dug into my billing reports and CloudTrail logs and I can find 57 calls to KMS last month, mostly triggered by a security audit tool. I confirm Lambda is not calling KMS when you are not using environment variables. I would suggest you enable CloudTrail to understand where the calls are coming from. I will edit my answer to reflect this.

Thanks, after checking CloudTrail it looks like I have KMS ListAliases requests. I need to do some experiments, but this is probably used when I open my Lambda functions in the AWS console.
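For anyone who prefers to skip Athena: CloudTrail log files delivered to S3 are JSON with a top-level `Records` array, and the KMS calls discussed above can be filtered with a few lines of Python once a log file is downloaded. The record fields used here (`eventSource`, `eventName`, `eventTime`, `userIdentity.invokedBy`) follow the CloudTrail record schema; the helper name and sample data are mine, not from the thread.

```python
import json

# Synthetic CloudTrail file contents for illustration: one KMS call
# (as described in the thread) and one unrelated Lambda call.
SAMPLE_LOG = json.dumps({
    "Records": [
        {"eventSource": "kms.amazonaws.com", "eventName": "ListAliases",
         "eventTime": "2018-07-15T10:00:00Z",
         "userIdentity": {"invokedBy": "AWS Internal"}},
        {"eventSource": "lambda.amazonaws.com", "eventName": "Invoke",
         "eventTime": "2018-07-15T10:01:00Z", "userIdentity": {}},
    ]
})

def kms_events(cloudtrail_json):
    """Return (eventTime, eventName, invokedBy) for every KMS call
    found in one CloudTrail log file's JSON contents."""
    records = json.loads(cloudtrail_json).get("Records", [])
    return [
        (r.get("eventTime"), r.get("eventName"),
         r.get("userIdentity", {}).get("invokedBy", "n/a"))
        for r in records
        if r.get("eventSource") == "kms.amazonaws.com"
    ]
```

Running `kms_events(SAMPLE_LOG)` keeps only the `kms.amazonaws.com` record, which is enough to spot patterns like the ListAliases calls mentioned above without setting up an Athena table.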
One of the things I’ve noticed about my post-PhD life — especially since I started an alt-ac job about 10 months ago — is a strong desire to learn new things and be a student again. I don’t want my penny-pinching, whoops-there-goes-my-grant lifestyle back, but I really miss the luxury of just sitting back and learning something. Of course, I’m always learning for my research: reading new things, meeting new people, writing different kinds of proposals or projects. But I’m always in the driver’s seat with stuff like that. I’m setting the work schedule; I’m creating the “assignments” for myself; I’m responsible both for the organizational side and the getting it done side of things. In addition to missing the feeling of being a student, I was also longing for a creative outlet that went beyond the academic writing that I do after work almost every day — and almost every weekend, too. Plus, my husband (!!) is 3000 miles away, waiting on the results of a visa application. I was staring down the barrel of a very, very long summer if I didn’t find another way to be with people and keep my mind occupied. After unsuccessfully auditioning for a local drunk Shakespeare company, I started looking around for other opportunities. And then a work colleague/friend who also does improv mentioned that she wanted to take a sketch comedy 101 class. I used to write sketches for a Catholic theatre troupe that I was involved in as a teenager, and then for the Jewish summer camp that I worked for in college. I wrote a couple sketches for a Shakespeare-inspired show during my PhD. But I’d always written to a clear stimulus: a story from a text that needed to be explicated and made accessible for a broad audience. I had never learned how to write sketch just for the sake of it, or to draw inspiration directly from the world around me instead of from a given text. Probably no one will be surprised to learn that it is ridiculously fun. 
I wrote a sketch this week about a germaphobic preschool teacher during flu season for our “fish out of water” week. I’m drawing inspiration from a particularly frustrating email chain at work for this week’s “escalation” assignment. I’m learning, for the first time, the formal conventions behind sketch writing and comedy writing in general: why a particular scenario is funny, and how to replicate that in my own work. For two and a half hours every week, I sit around a table with twelve other students and our hilarious, kind, wicked smart teacher Emmy and just make shit that’s funny. We read each other’s sketches; we talk about what’s working and what’s not; we watch and analyse vintage SNL, Key and Peele, and Monty Python. It’s awesome. I think I love this class so much for a few reasons: 1. It’s amazing stress relief. Yes, we have assignments, but it’s super chill. And in class, we just sit around and laugh, and then talk about why we’re laughing, and laugh some more. It’s dedicated laugh time. 2. I get to write stuff that isn’t academic. Don’t get me wrong — I love what I’m working on academically, and where my research is going right now. But research can be exhausting, and having to generate stuff that is good and well thought-through on a tight time budget isn’t easy. Weirdly, having another writing assignment in my week has helped me stay focused, be more productive, and think more clearly when I’m doing academic stuff. 3. I always learn a lot about how to be a teacher by being a student. Emmy’s pedagogy is inclusive, compassionate, and rigorous all at the same time. She’s got a way of delivering critiques that always feels constructive, and she’s very generous with praise. I’m taking mental notes for the next time I get a chance to teach. There’s not really a message or any deep analysis in this post. I’m learning again. I look forward to Thursday night class all week long. I wanted to share my joy! 
I hope your summer brings you something that makes you as happy as sketch 101 has made me 🙂
May 3rd, 2012

Drupal 7.14 is now available, which contains bug fixes as well as fixes for security vulnerabilities from Drupal 7.13. Drupal 6.26, which fixes known bugs (no security issues), is also available for download. Upgrading your existing Drupal 7 and 6 sites is strongly recommended. There are no new features in these releases. For more information about the Drupal 7.x release series, consult the Drupal 7.0 release announcement; more information on the 6.x releases can be found in the Drupal 6.0 release announcement. Drupal 5 is no longer maintained; upgrading to Drupal 7 is recommended. We have a security announcement mailing list, a history of all security advisories, and an RSS feed with the most recent security advisories. We strongly advise Drupal administrators to sign up for the list. Drupal 7 and 6 include the built-in Update status module, which informs you about important updates to your modules and themes. Drupal 7.13 only includes fixes for security issues; Drupal 7.14 also includes bugfixes. The full list of changes between the 7.12 and 7.14 releases can be found by reading the 7.14 release notes. A complete list of all bug fixes in the stable 7.x branch can be found in the git commit log. Drupal 6.26 only includes bugfixes. Drupal 7.13 was released in response to the discovery of security vulnerabilities. Details can be found in the official security advisory. To fix the security problems, please upgrade to Drupal 7.13.

What is included with each release? We made two versions of Drupal 7 available, so you can choose to include only security fixes (Drupal 7.13) or security fixes and bugfixes (Drupal 7.14). You can choose your preferred version. We are trying to make it easier and quicker to roll out security updates by making security-only releases available as well as ones with bugfixes included. We hope this helps you roll out the fixes as soon as possible. Read more details in the handbook.
– #1558548: Notice: Undefined index: default_image in image_field_prepare_view() – Upgrading from Drupal 7.x to Drupal 7.14 will yield a harmless but annoying PHP notice. A patch has been committed to 7.x-dev and will be available in 7.15. A workaround in the meantime is visiting the field settings page and saving it.
– #1541792: Enable dynamic allowed list values function with additional context – This issue introduced more context to hook_options_list(). However, because the Entity API module was calling this hook directly, it caused errors such as Warning: Missing argument 2 for taxonomy_options_list() in taxonomy_options_list() (line 1375 of modules/taxonomy/taxonomy.module). Fixed in the Entity API module at #1556192: Incorrect invocation of hook_options_list().
– #1171866: Enforced fetching of fields/columns in lowercase breaks third-party integration – This issue accidentally introduced an API change that affected both the Migrate and Backup and Migrate modules. The solution for Migrate is to rename tables in scripts back to their proper names. The solution for Backup and Migrate is at #1576812: Could not complete the backup.
– #811542: Regression: Required radios throw illegal choice error when none selected
– #1571104: Can't access non-node entities with EntityFieldQuery
In a typical Xi Frame account, sessions are "stateless." This means that all changes made to an instance are wiped from the instance after the session is closed. The instance is then returned to a pool where it waits to be served to the next user. The Xi Frame platform also offers an alternative option called "Persistent Desktops." Persistent Desktops are stateful, desktop-only instances which are permanently assigned to an individual user. Users are given administrative control over their own desktop: they can install and manage their own unique application sets and settings in their own persistent environment. Account administrators can still monitor usage and basic session activity through the account Dashboard.

The Persistent Desktops feature can be used with the following:
- Azure, AWS, and Google Cloud Platform on Xi Frame infrastructure
- Xi Frame on AHV
- Any BYO Cloud account
- Domain Joined Instances

Persistent Desktops were designed for organizations who prefer to give their users more control over their own environments. Xi Frame account administrators still configure the Sandbox image to be used as a base for all instances in the pool, but end users manage their own instance once assigned. Persistent Desktops can be domain joined as well; end users are granted local admin privileges after they log in for the second time. Users must be able to authenticate to the platform using Domain Joined Instances, a SAML2 integration, or Xi Frame's built-in identity provider. The Persistent Desktops feature is enabled upon account creation. It cannot be enabled on accounts that have already been created, since provisioning and infrastructure management on a Persistent Desktop account is handled differently than on a typical Xi Frame account. Account capacity settings also work a little differently than on a typical, non-persistent Xi Frame account.
There are no buffer or active capacity settings, since instances are served to users as they authenticate and are then persistently tied to that user.

Max possible number of users: Enter the maximum number of expected users (instances) for the account in this field. Any additional users will be given an "out of capacity" error when attempting to connect to a Persistent Desktop session.

Keep running instances for new users: Enabling this toggle keeps an active, unassigned instance running at all times so that it is immediately available for a new user. When using AWS, GCP, or Azure, this does incur infrastructure costs for the time that the instance is running, even though it is not being actively used. Keeping this toggle off prevents those infrastructure costs, but means that new users who do not yet have an instance assigned to them must wait for an instance to be provisioned, booted, and assigned to them on their first connection. This can take upwards of 10-15 minutes, depending on image size and file copy speed in the datacenter where it is provisioned.

Sandbox Image Management

Managing your Sandbox image on a Persistent Desktop account is essentially the same as on a non-persistent, regular Frame account. The difference lies in how your changes are propagated to the workload instances. Since an instance is permanently assigned to each user as they log in, any Sandbox updates published afterwards will only be made to unassigned instances in the pool. This is intended behavior with Persistent Desktops. In the event that an end user with an assigned instance requires changes from the Sandbox, an account administrator must terminate the user's current instance.
To terminate the instance, the account administrator can navigate to the “Status” page in the Dashboard and then select “Terminate” in the instance action menu for the instance connected to that user. When the user next attempts to connect to their desktop, a new instance with the latest published changes will be assigned to them. Terminating an instance will permanently delete all data on that instance. Any data that a user needs from their instance, such as work files or software licenses, should be retrieved from their persistent desktop before the account administrator terminates their instance. Administrators can decide whether or not they would like their users to manage their own system backups. User-managed backups can be enabled by navigating to the “Settings” page in the Dashboard and enabling the “Are persistent desktops backups allowed” toggle listed under “General settings.” Once enabled, end users can manage their backups from their “My Profile” page. Persistent Desktop backups are managed the same as Enterprise Profile and Personal Drive backups. More information regarding end user-managed backups can be found in our end user documentation.
Python Developer Resume

- Over 6+ years of experience as a Web/Application Developer, coding with analytical programming using Python, C++, SQL, and Java.
- Experienced with the full software development life cycle, object-oriented design, object-oriented programming, and database design.
- Good knowledge of various design patterns and UML.
- Experienced in Agile methodologies, Scrum stories, and sprints in a Python-based environment, along with data analytics, data wrangling, and data extracts.
- Familiar with JSON-based REST web services and Amazon Web Services (AWS).
- Experienced in developing web services with the Python programming language.
- Experience in writing subqueries, stored procedures, triggers, cursors, and functions on MySQL, Oracle 10g, and PostgreSQL databases.
- Experienced in web application development using Django/Python, Flask/Python, and Angular.js.
- Good knowledge of Amazon AWS concepts like EMR and EC2 web services, which provide fast and efficient processing of big data.
- Experience working with the Celery task queue and a service broker using RabbitMQ.
- Good knowledge of reporting tools like Tableau, which is used to do analytics on data in the cloud.
- Knowledge of ORM mapping using SQLAlchemy.
- Experienced with NoSQL databases such as MongoDB and Hive.
- Experience with continuous integration and automation using Jenkins.
- Used JIRA for daily scrums and work management.
- Extensively used UNIX shell scripts for automating batch programs.
- Hands-on experience with SVN, JIRA, and Bugzilla.
- Good knowledge of web services with protocols such as SOAP and RESTful APIs.
- Good knowledge of the Apache Tomcat and WebLogic servers.
- Accustomed to a fast-paced environment, changing priorities, and multitasking.
- Team player with a strong work ethic, committed to working hard, smart, and sincerely; able to work single-handedly.
- Good interpersonal skills; committed, result-oriented, and hard-working, with a quest and zeal to learn new technologies.
Languages: Python, C, C++, Objective-C, HTML/CSS, Shell Script
Python Frameworks: Django, Flask
Databases: MySQL, PostgreSQL
Cloud: Google App Engine, Amazon EC2, Amazon SQS, Amazon S3, Spark
Packages: wxPython, PyQt, SciPy
Versioning Tools: Git, SVN, CVS
Web servers: Apache, Flask
Operating systems: Linux/Unix, Windows

Confidential - TX

- Gathered and analyzed all requirements for developing projects.
- Developed entire frontend and backend modules using Python on Django, including the Tastypie web framework, using Git.
- Developed merge jobs in Python to extract and load data into a MySQL database.
- Successfully migrated the Django database from SQLite to MySQL to PostgreSQL with complete data integrity.
- Developed Ruby on Rails 3 web applications using MongoDB, and background processes using Resque and Redis.
- Used a test-driven approach for developing the application and implemented the unit tests using the Python unittest framework.
- Worked with millions of database records on a daily basis, finding common errors and bad data patterns and fixing them.
- Developed Ruby/Python scripts to monitor the health of Mongo databases and perform ad-hoc backups using mongodump and mongorestore.
- Familiar with JSON-based REST web services and Amazon Web Services (AWS).
- Dynamic implementation of SQL Server work on the website using the SQL Developer tool.
- Involved in the complete software development life cycle (SDLC) to develop the application.
- Used NumPy for numerical analysis.
- Followed the Agile development methodology to develop the application.
- Exported/imported data between different data sources using SQL Server Management Studio. Maintained program libraries, users' manuals, and technical documentation.
- Managed large datasets using pandas data frames and MySQL.
- Wrote and executed various MySQL database queries from Python using the Python-MySQL connector and the MySQLdb package.
- Used the Python library Beautiful Soup for web scraping to extract data for building graphs.
- Performed troubleshooting, fixed, and deployed many Python bug fixes for the two main applications that were a main source of data for both customers and the internal customer service team.
- Implemented code in Python to retrieve and manipulate data.
- Also used Bootstrap as a mechanism to manage and organize the HTML page layout.
- The Django framework was used in developing web applications to implement the model-view-controller architecture.
- Involved in developing a RESTful service using the Python Flask framework.
- Created the entire application using Python, Django, MySQL, and Linux.
- Exposure to multi-threading: used a factory to distribute back-testing of the learning process into various worker processes.
- Performed efficient delivery of code based on principles of test-driven development (TDD) and continuous integration, in line with Agile software methodology principles.
- Lock mechanisms were implemented and multithreading functionality was used.
- Developed a fully automated continuous integration system using Git, Gerrit, Jenkins, MySQL, and custom tools developed in Python and Bash.
- Experience in managing a MongoDB environment from availability, performance, and scalability perspectives.
- Managed, developed, and designed a dashboard control panel for customers and administrators using Django, Oracle DB, PostgreSQL, and VMware API calls.
- Extensively used Python modules such as requests, urllib, and urllib2 for web crawling.
- Implemented configuration changes for data models.
- Used the pandas library for statistical analysis and NumPy for numerical analysis.
- Managed large datasets using pandas data frames and MySQL.
- Handled potential points of failure through error handling and communication of failure.
- Anticipated potential points of failure (database, communication points, file system errors).
- Actively worked as part of a team with managers and other staff to meet the goals of the project in the stipulated time.
- Developed a GUI using webapp2 for dynamically displaying the test block documentation and other features of Python code using a web browser.
- Responsible for user validations on the client side as well as the server side.
- Automated the existing scripts for performance calculations using NumPy and SQLAlchemy.
- Interacted with QA to develop test plans from high-level design documentation.

Confidential - San Francisco, CA

- Worked with the stakeholders to gather requirements.
- Performed high-level design and detail design.
- Used Python 2.7 and Google App Engine with webapp2 for programming.
- Created data extract jobs using Python/Django and Google App Engine.
- Designed a user interface for data selection using Python/Django.
- Used Python packages such as sklearn, nltk, statsmodels, numpy, pandas, and scipy for boosting, first- and second-order optimization algorithms, and predictive modeling.
- Automated production tasks.

Environment: Python 2.7, Google App Engine, webapp2, scipy, Oracle, Linux.

Confidential - Richmond, VA

- Responsible for gathering requirements, system analysis, design, development, testing, and deployment.
- Participated in the complete SDLC process.
- Created business logic using Python/Django.
- Created a database using MySQL; wrote several queries to extract data from the database.
- Used Amazon EC2 along with Amazon SQS to upload and retrieve project history.
- Set up automated cron jobs to upload data into the database, generate graphs and bar charts, upload these charts to the wiki, and back up the database.
- Effectively communicated with external vendors to resolve queries.
- Used Perforce for version control.
- Designed and developed the application using the Agile methodology.
- Developed the application using Spring Web Flow.
- Developed business logic using core Java concepts.
- Used design patterns like Value Object, Session Facade, and Factory.
- Used LDAP for authorization and authentication in EJBs.
- Parsed incoming messages using JAXP and stored them in the database.
- Developed controller objects using Servlets for account setup.
- Extensive involvement in programming using C++ on a UNIX base.
- Created Action Form and Action classes.
- Used various HTML, Bean, and Logic tags.
- Implemented various XML technologies and XSL style sheets.
- Mapped SQL databases and objects in Java using iBATIS.
- Developed the project using Rational Application Developer (RAD) 6.0.
- Deployed the application and tested it on WebSphere Application Servers.
- Wrote SQL queries and integrated SQL queries into DAOs.
- Involved in the preparation of use cases, sequence diagrams, and class diagrams.
- Created activity diagrams and class diagrams using Rational Rose, and test cases using JUnit.

Environment: J2EE, EJB, Servlets, Spring, JDBC, JSP, RAD, WebSphere, XML, HTML, C++, Design Patterns, JavaScript, JUnit, JMS, iBATIS, Rational Rose, UNIX, Windows, SQL Server.
assembly program: if a>b, square the value of b

I am trying to create a program in assembly language: if A>B, square the value of B. I was able to input single-digit numbers, like 5 and 3; the answer is supposed to be 9, because the condition says that when the first number is greater than the second number, square the second number. Unfortunately the answer is wrong: it produces " P2" as the answer.

pc macro x
    mov ah, 02
    mov dl, x
    int 21h
endm

fl macro
    int 20h
cseg ends
end start
endm

cls macro
    mov ax, 0003h
    int 10h
endm

cseg segment para 'code'
assume cs:cseg;ds:cseg;ss:cseg;es:cseg
org 100h
start: jmp begin
fn db ?
sn db ?
n  db ?
m  db ?
begin:
    cls
    mov ah, 01
    int 21h
    sub al, 30h
    mov fn, al
    mov ah, 01
    int 21h
    sub al, 30h
    mov sn, al
    cmp fn, al
    ja x1
    jmp exit
x1:
    mul sn
    cmp al, 10
    jae ilong
    pc ' '
    add al, 30h
    add ah, 30h
    mov n, al
    mov m, ah
    pc n
    pc m
    jmp exit
ilong:
    mov ah, 0
    mov al, n
    mov bl, 10
    div bl
    add al, 30h    ;div ah:al,bl
    add ah, 30h
    mov n, al
    mov m, ah
    pc ' '
    pc n
    pc m
exit:
    fl

Learn to use a debugger to single-step your program. Also, comment your code, especially if you want others to help. For example, it's unclear why you try to print 2 digits when you have checked that the result is less than 10.

This is almost a good question. You described specifically what it did print, which many people fail to do. But you didn't comment your code, so we don't know what exactly you're misunderstanding about how your code works. And like Jester said, this should be easy to solve if you just single-step in a debugger and watch register values. Hint: you can optimize add al, 30h / add ah, 30h into add ax, 3030h. And you can store with mov word [n], ax (because you put n and m in memory next to each other). It's handy that div by 10 puts the resulting digits in printing order in AX. Operating on single-digit input numbers, the largest result that your program would ever need to print is 64 (8*8).
You would get this when the first number is 9 and the second number is 8. Your program has these 2 problems:

When the result is indeed smaller than 10 (0, 1, 4, 9), you start by outputting a space character, but then you erroneously try to output two digits where you only need to display a single digit!

cmp al, 10    ;AX is the product and AH is zero at this point!
jae ilong
add al, 30h   ;Turn into text
mov n, al
pc ' '
pc n
jmp exit

When it comes to displaying a result larger than 9 (16, 25, 36, 49, 64), you immediately start by destroying your calculated square that is in the AX register (using mov ah, 0 / mov al, n). Please verify that the n variable has no defined value at this point!

ilong:
    mov bl, 10
    div bl
    add ax, 3030h    ;Turn both into text at the same time
    mov n, al
    mov m, ah
    pc ' '
    pc n    ;Display tens
    pc m    ;Display ones
exit:
    fl

; if a>b /// ax = a /// cx = b /// return result in ax
cmp ax, cx     ; a > b?
ja @SquareB    ; yes, SquareB
ret            ; no, done
; square the value of b
@SquareB:
imul cx, cx    ; b = b * b
mov ax, cx     ; return b in ax (not sure if you need this step)
ret            ; done

When writing assembly you need to write comments for every single line of code, detailing exactly why you coded it and what you expect to happen. If you don't, you'll be struggling to understand your own code the next day (never mind other people trying to understand your reasoning).

In x86 16-bit assembly you can't write "imul cx,cx"; furthermore, why not use NEG/MUL instead of IMUL? It is faster by at least 7 CPU cycles. Excuse me: I mustn't use NEG but an implementation of ABS instead. But only in the worst case do I need 3 CPU cycles less; normally I need 8 CPU cycles more. x86 16-bit is dead, so it's pointless discussing its performance; plus: https://defuse.ca/online-x86-assembler.htm#disassembly Disassembly: 0: 66 0f af c9 imul cx,cx. It works just fine. imul takes 3 cycles on the latest processors, last time I checked.

My corrected solution:

; INPUT: AX= A ; CX= B ; OUTPUT: AX= (A>B) ?
Sqr(B) : A

    CMP AX,CX       ; Compare input num. A with input num. B
    JLE @Exit       ; If A<=B, skip the multiplication and exit
    MOV AX,CX       ; Copy input num. B into accumulator AX
    ; Process ABS(B):
    CMP AX,08000H   ; 4 CPU cycles, if num. B>=0 set carry flag
    CMC             ; 2 CPU cycles, complement carry flag; if B<0 it is set
    SBB CX,CX       ; 3 CPU cycles, if B<0, CX is set to -1, else to 0
    XOR AX,CX       ; 3 CPU cycles, if B<0, do a 1's complement of num. B
    SUB AX,CX       ; 3 CPU cycles, if B<0, do a 2's complement of num. B
    ; IMUL AX needs in the worst case up to 21 CPU cycles more than MUL
    MUL AX          ; Multiply the absolute value of B by itself
@Exit:
    RET             ; Call's return

My wrong solution:

; INPUT:  AX = A
;         CX = B
; OUTPUT: AX = (A>B) ? Sqr(B) : A

    CMP AX,CX       ; Compare input num. A with input num. B
    JLE @Exit       ; If A<=B, skip the multiplication and exit
    MOV AX,CX       ; Copy input num. B into accumulator AX
    NEG AX          ; I MUSTN'T ALWAYS NEGATE AX, BUT ONLY IF AX<0!
    MUL AX          ; It multiplies AX*AX and stores the product in DX:AX
@Exit:
    RET             ; Call's return

This doesn't answer the question. Why are you quoting cycle times for ancient processors? imul takes 3 cycles on Skylake. Even on the AMD K7 it took 3 cycles.
STACK_EXCHANGE
Puna Nuk U Gjet [Job Not Found]

Sorry, we couldn't find the job you were looking for. Find the latest jobs here:

Post daily on FB, IG, Pinterest, Tumblr, Twitter and YouTube. The objective is referral traffic to the website. We want somebody on a monthly retainer. Also, only people with experience in managing yoga profiles should apply.

I need a script or plugin so that when someone clicks on a button, the website scrolls smoothly to a point lower on the page. An example is this website here: [login to view URL]

The description of this project of mine is called From Lyrics, With Love. Why? Well, the name says it all! I write lyrics! Song lyrics! I also add music if preferred. I guess you could call me a songwriter. I make sure the client gets 100% complete satisfaction. If not, I will redo whatever the client needs redone. I will also switch the melody if the client doesn't approve. And I will make th...

I need a PSD (landing page) as fully responsive (mobile-first) HTML5. It needs to look and function well in EVERY size, including every size between desktop and mobile. Someone already failed at this project, so I need a real professional to do this properly and in a timely manner. Attached are the mobile and desktop view examples. Will provide the desktop PSD to the winner. Happy bidding!

Hi, I need someone to write my blog posts. I will share the details with the shortlisted candidates. Thanks

Transcribe customer details from PDF into Microsoft Excel. Approximately 340 names, with fields: Company Name/Address1/Address2/Phone/Brands/First Name/Second Name/Region/Website/Title/Ranging. All shown on the PDF. Thanks, ENR

Need professional product photos taken for our small-batch roasted coffee K-Cups (coffee pods). The ideal candidate would be able to make our product look enticing on an e-commerce platform by differentiating our product as a quality and unique option for coffee drinkers. Will mail ample coffee pods to the selected photographer.
We are launching a blog website and we would like to have a multilingual feature enabled. Right now we are only in English and would like to have content translated into Spanish. Looking to build a long-term relationship with the right person, as we will have more and more content that needs to be translated. Regards, Chris.

I have a Vimeo video box on my Wix homepage, but I need to make it full-width. I need the video to resize without showing black borders on the edges. Here is the site: [login to view URL]
OPCFW_CODE
travis needs more build time

@conda-forge/core building on travis doesn't work because too much time is needed. Looking at the logs we need only a few minutes more. Is there a way to allow more build time?

Unfortunately, build time is set by the CI services... I can try to do a manual build on my build machine and upload it

if you want to disable the mac build and merge it to master that would be fantastic. I will try to add some fixes and inform you once that is done.

conda-build=3.5 has some fixes that make the link-fixing step really fast. I've started a build here, https://travis-ci.org/conda-forge/occt-feedstock/builds/349000502. Let's see how that goes. Might also consider using nmake instead of make.

thanks @isuruf @jakirkham. rerendering with conda-smithy 3.0 didn't work either. I have made a PR where I build with ninja on all platforms: https://github.com/conda-forge/occt-feedstock/pull/19

Something else that could help is checking to make sure nothing unnecessary is being built. For example, I discovered the other day that we were building benchmarks for a package, which we don't need. Another example might be if the package builds a vendored copy of its dependencies instead of using equivalents in cf.

hmm, I don't know where we can simplify the recipe. I guess it is already a minimalistic build. OSX is disabled again, as ninja and conda-smithy 3 didn't help. But thanks anyway.

@scopatz PR is merged: https://github.com/conda-forge/occt-feedstock/pull/19 It would be nice if you could find some time for an osx package. This would be a nice addition ; ) used in: https://github.com/conda-forge/staged-recipes/pull/5320 https://github.com/conda-forge/staged-recipes/issues/5179

hmmm it seems that this is skipped for Python v3.6. Is a 3.6 build not possible for some reason?
anthony@asmac ~/feedstocks/occt master $ conda build recipe/
Adding in variants from internal_defaults
INFO:conda_build.variants:Adding in variants from internal_defaults
Skipped: occt from /Users/anthony/feedstocks/occt/recipe defines build/skip for this configuration ({'python': '3.6'}).
# Automatic uploading is disabled
# If you want to upload package(s) to anaconda.org later, type:
# To have conda build upload to anaconda.org automatically, use
# $ conda config --set anaconda_upload yes
anaconda_upload is not set. Not uploading
wheels: []
####################################################################################
Resource usage summary:
Total time: 0:00:32.3
CPU usage: sys=0:00:00.0, user=0:00:00.0
Maximum memory usage observed: 0B
Total disk usage observed (not including envs): 0B

thanks for having a look at this. osx is skipped so it doesn't build on travis; the python version shouldn't matter. maybe there is something wrong in this line?: https://github.com/conda-forge/occt-feedstock/blob/master/recipe/meta.yaml#L12

@scopatz I see two ways to proceed:

1. simply don't skip (I can add this for the next PR)
2. skip, and remove the skip line in the recipe before building. I guess it is done this way for the vtk package.

Looking at the way the cyclus feedstock does this, it seems that option 1 is the way to go. Wouldn't it just be easier to drop the skip before building? We do the same with qt.

The downside of not keeping the skip is that a later re-rendering will add back the Travis CI build and eat up CI time on our already taxed queue. Could I persuade you to do 2? ;)

I don't really understand 2 as it is written. My interpretation is that the skip stays in the recipe, but then we manually delete it when building the mac package. Is that the idea?

My interpretation is that the skip stays in the recipe, but then we manually delete it when building the mac package. Is that the idea?

yes, that is exactly what I meant. ; ) No worries! I can try it this way!
Btw, I have a test working at https://github.com/conda-forge/occt-feedstock/pull/18 which uses CircleCI for both linux and osx

Alright! I have uploaded the mac package. Seems to have taken about 1.77 hrs on my machine.

thanks! I'll leave this issue open for further discussion on improving the build time / switching to conda-build 3 / using CircleCI...

I guess we can close this as we switched osx to circle-ci.
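For reference, the kind of skip selector being discussed usually looks like this in a feedstock's meta.yaml (an illustrative sketch, not the exact occt-feedstock contents):

```yaml
build:
  number: 0
  skip: true  # [osx]   <- deleting this line locally re-enables the manual mac build
```

Keeping the commented selector in the recipe means a re-render restores the Travis skip, while a manual builder can remove it for one local build.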
GITHUB_ARCHIVE
Marvellousnovel Brocade Star Of Love – Chapter 2360 – Fighting It Out (1)

Novel: Rebirth To A Military Marriage: Good Morning Chief

Chapter 2360 – Fighting It Out (1)

She realized that she wasn't in a position to be choosy, and her mother's cooking tasted okay. Qiao Zijin figured she had to restrain her attitude and would eat whatever her mother made.

Last time, Ding Jiayi couldn't prevent Qiao Zijin from taking her money and identifications. Today was just the same, as she watched Qiao Zijin snatch the dishes from her hands. They had to continue with their lives.

Ding Jiayi tidied herself up and started cooking lunch. No matter how much she despised Qiao Zijin, she had to feed herself. There was nothing else she could do.

What else! Ding Jiayi's screaming didn't bother her at all. Qiao Zijin was used to the way Ding Jiayi yelled at Qiao Nan, so she had enough experience to brush that aside. "Mother, I'm tired. Please call me when lunch is ready."

"No, you…" Ding Jiayi wanted Qiao Zijin to leave, but Qiao Zijin wouldn't listen to anything she said. There was nothing Ding Jiayi could do against her tenacity. The only reason Ding Jiayi could get along with Qiao Nan previously was her obedience. Now that both her daughters were beyond her control, Ding Jiayi's words were worth just a passing breeze.

Qiao Zijin still behaved like a princess. After finishing her meal, she dumped the dirty plates into the sink and went right back into her room.

Ding Jiayi knew better than anyone how much money Qiao Zijin had in her own bank account. She also knew that Qiao Zijin got every cent from Qiao Nan. Could the money even be considered hers?

"Mom, is lunch ready?" Smelling the fragrance of the served dishes, Qiao Zijin walked out of her room. "Mommy, after eating out so much, I realized that you make the best food. I'll be eating it in my room, since my computer is still on. Thanks, Mommy."

Instead of making a fortune, Qiao Zijin lost money when she messed around in the capital. Today, she grew even more worried looking at the balance in her bank account.

All she had to do was cook food for two, and they would be able to live peacefully together like any mother and daughter. Qiao Zijin believed she's pretty easygoing, since her request was simple.

Ding Jiayi was once joyful when Qiao Zijin visited, but all she felt right then was utter hatred.

Ding Jiayi couldn't accept Qiao Nan's attitude toward her, but it was understandable at least. Ding Jiayi didn't forget the things that she had done to her. However, why did Qiao Zijin have the right to do this to her? She might have been mean to everyone else in the family, but she had done nothing sorry to Qiao Zijin.

What's wrong with spending her money? As Qiao Nan's mother, Ding Jiayi believed to herself that the money belonged to her anyway. Qiao Zijin definitely had a large sum of cash left after selling the house, so she had no right to be settling accounts with her.
OPCFW_CODE
<?php

namespace src\models;

use src\libs\Model;
use src\libs\Session;
use src\libs\User;

class PokemonModel extends Model
{
    public function __construct()
    {
        parent::__construct();
    }

    public function pobierz($id)
    {
        // Build an IN (...) list of the team's pokemon IDs plus an
        // ORDER BY CASE clause that preserves the slot order (1..6).
        $klery  = "SELECT * FROM pokemon_jagody, pokemony "
                . "WHERE pokemony.ID = pokemon_jagody.id_poka AND pokemony.ID IN (";
        $klery2 = "ORDER BY CASE ID";
        $bb = 0;
        for ($i = 1; $i < 7; $i++) {
            if (User::_isset('pok', $i) && User::_get('pok', $i)->get('id') > 0) {
                // Cast to int so the value interpolated into the SQL cannot inject.
                $aaa = (int) User::_get('pok', $i)->get('id');
                // Use the element counter, not the slot number, for the comma:
                // with "if ($i == 1)" an empty first slot produced ", '...'".
                $klery .= ($bb == 0) ? "'$aaa'" : ", '$aaa'";
                $klery2 .= " WHEN '$aaa' THEN $i";
                $bb++;
            }
        }
        $klery .= ') ' . $klery2 . ' END';
        return $this->db->select($klery, []);
    }

    public function login($id)
    {
        return $this->db->select('SELECT login FROM uzytkownicy WHERE ID = :id', [':id' => $id]);
    }

    public function glod100($id)
    {
        $this->db->update('UPDATE pokemony SET glod = 100 WHERE ID = ?', [$id]);
    }

    public function pokemonInfo($id)
    {
        return $this->db->select('SELECT * FROM pokemon_jagody, pokemony WHERE pokemony.ID = :id AND pokemony.ID = pokemon_jagody.id_poka', [':id' => $id]);
    }

    public function czyIstnieje($id)
    {
        return $this->db->select('SELECT ID FROM pokemony WHERE ID = :id AND wlasciciel = :idW', [':id' => $id, ':idW' => Session::_get('id')]);
    }

    public function zmienImie($imie, $pokemon)
    {
        $this->db->update('UPDATE pokemony SET imie = ? WHERE ID = ?', [$imie, $pokemon]);
    }

    public function atakWyzszy($wyzsza, $i, $at1, $at2, $pokemon)
    {
        // $wyzsza and $i become part of column names; force integers so they cannot inject SQL.
        $wyzsza = (int) $wyzsza;
        $i = (int) $i;
        $this->db->update("UPDATE pokemony SET atak$wyzsza = ?, atak$i = ? WHERE ID = ? AND wlasciciel = ?", [$at1, $at2, $pokemon, Session::_get('id')]);
    }

    public function atakNizszy($wyzsza, $i, $at1, $at2, $pokemon)
    {
        $wyzsza = (int) $wyzsza;
        $i = (int) $i;
        $this->db->update("UPDATE pokemony SET atak$wyzsza = ?, atak$i = ? WHERE ID = ? AND wlasciciel = ?", [$at1, $at2, $pokemon, Session::_get('id')]);
    }

    public function karmienie($wlasciciel)
    {
        return $this->db->select('SELECT karmienie_ip FROM uzytkownicy WHERE ID = :id', [':id' => $wlasciciel]);
    }

    public function nakarm($ip, $wlasciciel)
    {
        $this->db->update('UPDATE pokemony SET exp = (exp + 2) WHERE druzyna = 1 AND wlasciciel = ?', [$wlasciciel]);
        // Bind $ip as a parameter instead of concatenating it into the SQL string.
        $this->db->update('UPDATE uzytkownicy SET karmienie = 1, karmienie_ip = CONCAT(karmienie_ip, ?) WHERE ID = ?', ['|' . $ip, $wlasciciel]);
    }

    public function ewolucja($pokemon)
    {
        return $this->db->select('SELECT ewolucja FROM pokemony WHERE ID = :id AND wlasciciel = :idW', [':id' => $pokemon, ':idW' => Session::_get('id')]);
    }

    public function zmienEwolucja($i, $pokemon)
    {
        $this->db->update('UPDATE pokemony SET ewolucja = ? WHERE ID = ?', [$i, $pokemon]);
    }

    public function podglad($pokemon)
    {
        return $this->db->select('SELECT blokada_podgladu FROM pokemony WHERE ID = :id AND wlasciciel = :idW', [':id' => $pokemon, ':idW' => Session::_get('id')]);
    }

    public function zmienBlokada($i, $pokemon)
    {
        $this->db->update('UPDATE pokemony SET blokada_podgladu = ? WHERE ID = ?', [$i, $pokemon]);
    }
}
STACK_EDU
Major bug: NPE when opening

I have added the following code:

Intent filePickerIntent = new Intent(this, FilePickerActivity.class);
filePickerIntent.putExtra(FilePickerActivity.REQUEST_CODE, FilePickerActivity.REQUEST_FILE);
startActivityForResult(filePickerIntent, FilePickerActivity.REQUEST_DIRECTORY);

When I click the button that executes this code, I get an exception:

10-19 12:29:41.797 E/AndroidRuntime: FATAL EXCEPTION: AsyncTask #4
10-19 12:29:41.797 E/AndroidRuntime: java.lang.RuntimeException: An error occured while executing doInBackground()
10-19 12:29:41.797 E/AndroidRuntime: at android.os.AsyncTask$3.done(AsyncTask.java:299)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.FutureTask.finishCompletion(FutureTask.java:352)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.FutureTask.setException(FutureTask.java:219)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.FutureTask.run(FutureTask.java:239)
10-19 12:29:41.797 E/AndroidRuntime: at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
10-19 12:29:41.797 E/AndroidRuntime: at java.lang.Thread.run(Thread.java:856)
10-19 12:29:41.797 E/AndroidRuntime: Caused by: java.lang.NullPointerException
10-19 12:29:41.797 E/AndroidRuntime: at com.devpaul.filepickerlibrary.adapter.FileListAdapter$GetFileSizeTask.getDirectorySize(FileListAdapter.java:386)
10-19 12:29:41.797 E/AndroidRuntime: at com.devpaul.filepickerlibrary.adapter.FileListAdapter$GetFileSizeTask.doInBackground(FileListAdapter.java:372)
10-19 12:29:41.797 E/AndroidRuntime: at com.devpaul.filepickerlibrary.adapter.FileListAdapter$GetFileSizeTask.doInBackground(FileListAdapter.java:358)
10-19 12:29:41.797 E/AndroidRuntime: at android.os.AsyncTask$2.call(AsyncTask.java:287)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.FutureTask.run(FutureTask.java:234)
10-19 12:29:41.797 E/AndroidRuntime: at android.os.AsyncTask$SerialExecutor$1.run(AsyncTask.java:230)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1080)
10-19 12:29:41.797 E/AndroidRuntime: at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:573)
10-19 12:29:41.797 E/AndroidRuntime: at java.lang.Thread.run(Thread.java:856)

This is a major bug as I can't use the library. What should I do?

Hello,

First off, thanks for using my library! What version of my library are you using? And may I ask what device this happened on?

Also, it seems that there is a small error in the code you posted. In your code:

Intent filePickerIntent = new Intent(this, FilePickerActivity.class);
filePickerIntent.putExtra(FilePickerActivity.REQUEST_CODE, FilePickerActivity.REQUEST_FILE);
startActivityForResult(filePickerIntent, FilePickerActivity.REQUEST_DIRECTORY);

You are setting the scope type to request a file, but you are setting the request code to request a directory. I believe this is causing a null exception because no files will be listed. Those two things should match (take a look at the Readme for more examples). If you want to request a path to a file, change the last line of your code to:

startActivityForResult(filePickerIntent, FilePickerActivity.REQUEST_FILE);

otherwise change your scope type to FilePickerActivity.REQUEST_DIRECTORY. If this doesn't fix your problem let me know and I'll try to replicate the bug.

It fixed the problem, thanks for the answer.

No problem. Again, thanks for using my library!
GITHUB_ARCHIVE
It brings Ruby-like class behavior. When a class is declared, it extends the old class if the class name already exists.

import new
import inspect

class RubyMetaClass(type):
    """ """
    def __new__(self, classname, classbases, classdict):
        try:
            frame = inspect.currentframe()
            frame = frame.f_back
            if frame.f_locals.has_key(classname):
                old_class = frame.f_locals.get(classname)
                for name, func in classdict.items():
                    if inspect.isfunction(func):
                        setattr(old_class, name, func)
                return old_class
            return type.__new__(self, classname, classbases, classdict)
        finally:
            del frame

class RubyObject(object):
    """
    >>> class C:
    ...     def foo(self): return "C.foo"
    ...
    >>> c = C()
    >>> print c.foo()
    C.foo
    >>> class C(RubyObject):
    ...     def bar(self): return "C.bar"
    ...
    >>> print c.bar()
    C.bar
    """
    __metaclass__ = RubyMetaClass

This meta-class helps adding methods to a class which was declared already. I wrote this as a practice in how to use a meta-class and frame objects. The meta-class hooks the class declaration, and it modifies the class in the caller's scope through the frame object.

known issues:
- These class names should become a general word that explains the role. I just named them 'Ruby...' for a demo of the Ruby-like class rule.
- Others may be confused by the different class rule. Most Python users expect the new class declaration to override an old class.

really needed? If so, why? This metaclass will not co-operate with other metaclasses, nor is it subclassable. ...because it expects __new__ to be called from the frame where the class is defined. But, if you subclass this metaclass, __new__ will be called via 'super()' in the subclass, so this will inspect the wrong frame.
This problem isn't fixable within a metaclass; the only way to fix it is to use an explicit metaclass that wraps the real metaclass, or conversely to use a "class advisor" function (see PyProtocols' 'protocols.advice' module, or Zope 3's 'zope.interface.advice' module). Such advisor functions can identify the correct frame before the class is even constructed, and then get a callback with the constructed class. A class advisor isn't inherited, so you have to use it in each class you want to be updateable, but the approach is combinable with other metaclasses and advisors, while the technique shown here will not work correctly with other metaclasses.

Ans: the 'del frame' is for GC. I read so in the documentation: http://docs.python.org/lib/inspect-stack.html

with other metaclasses? I did not know how to do it. Thanks for the information about PyProtocols and Zope's code; I had not seen them before. Both projects have interesting code I have to learn.

About the stack frame scope, that was as I expected. But I haven't seen the exception case, when '__new__' is called by 'super'. Can I see the minimum code?

Sub-classing, what I've tested was:

class C:
    def foo(self): print "C.foo method is called"

class C(RubyObject):
    def bar(self): print "C.bar method is called"

class D(C):
    pass

d = D()

class D(RubyObject):
    def baz(self): print "D.baz method is called"

d.foo()
d.bar()
d.baz()

and it worked in this case. But I am not sure about it with other metaclasses.

About multiple meta-classes, and how it works: when I declared '__metaclass__' with a subclass of RubyObject, it just showed this error:

TypeError: Error when calling the metaclass bases
    metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases

This seems to be another problem.
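The frame-depth issue described above can be illustrated without metaclasses. This is a minimal Python 3 sketch (not the recipe's code): a single f_back only ever reaches the immediate caller, so an extra call layer, like __new__ being invoked via super(), makes it land on the wrong frame.

```python
import inspect

def who_called():
    # One f_back: the frame of whoever called this function directly
    return inspect.currentframe().f_back.f_code.co_name

def direct():
    return who_called()

def indirect():
    # An extra call layer, analogous to reaching __new__ via super():
    # who_called() still sees direct(), not indirect()
    return direct()

print(direct())    # prints 'direct'
print(indirect())  # also prints 'direct', not 'indirect'
```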
OPCFW_CODE
Ramblings from another nerd on the grid

This should get your blood pumping. I mean, what's it like to run Windows Vista on an overclocked Quad processor like this? Let's consider the first part of the specs: Holy cow! That's a lot of juice, my friends. I wonder how much power it draws when it starts. See the rest of the technical specs here. Looks like a wicked fast machine. The case isn't bad either. Who's bold enough to order one and not tell the wife, install it, and have it sitting there glowing at night when she gets up for a drink of water? Sorry, even I'm not that brave. I'm guessing the Windows Vista Experience Index on this baby is a 6.0. It'll be interesting to see what happens when it runs SP1, assuming of course 6.0 is no longer the top score possible. Bet Michael Dell isn't running Ubuntu on this!!!

Seems like it's all about the disk drive in Vista. My machines never seem to be at the peak for proc/mem but are always "grinding" away. Even when my machine is sluggish, it's the disk that is causing it. My laptop usually gets hot enough to cook on. Find me a machine where all the data is in RAM and then I'll cheer! ;)

Check out some of the tweaks I do at the tail end of the post @ http://blogs.technet.com/keithcombs/archive/2007/11/11/installing-windows-vista-x64-on-a-thinkpad-t61p.aspx. Helped a lot on my laptop.

Buy one without telling the wife? I bet this rig runs for north of ten grand. That's a bit much for an impulse buy... :-) Back when I was a highly paid contractor, I bought a cute little sports car without my wife's involvement. Let's just say I'll not be doing that again anytime soon.... ;-)

Hey Keith, this is the exact same machine that my boss told me to go buy at Microsoft. Actually, I think I'll get two, just in case one breaks. :)

The system I have (Wife approved!) COULD be overclocked to come close to those specs: Shuttle SP35p2. It looks far less Alien, and I have a lowly Q6600 in it and PC6400 RAM.
It doesn't have dual meg networking, nor as much room for video firepower, but it does manage a 5.5, with three 5.9's and the RAM and graphics (fanless 8600) coming in at 5.6. It has the controls and board to take the faster processor and RAM, and overclock, but I spent a whole lot less than that system starts out at. My point is really that this system isn't as over the top as it looks; some more basic hardware is capable of similar performance, at a far lower price.

When I set all the best options, the total came to over $10,150. Insane.

Well, let's not forget how expensive the Dell XPS 720 can get. I wonder how many units are actually sold. Too rich for my blood.

Looks cool, but I'm partial to the HP Blackbird, myself. :)

Nice computer indeed, but I'd really prefer an Nvidia 8800GTX instead of the included ATI card. Also, I'd definitely want 4GB of RAM if I was going to pay that kind of money for a computer.
OPCFW_CODE
This is an old revision of the document!

CAS is designed to calculate things symbolically, so sometimes we can be surprised by results for numerical calculations. e.g. SUM(1/x^2,x,1,100) tries to sum the expression (1/x^2) for x = 1 to 100. However, under CAS, it will result in a ratio of 2 very large integers. You'll have to run approx() to get the decimal result. If you try it with 5000 instead of 100, it will take quite a while, and when complete it will result in undef, as it ran out of decimal digits for the numerator and denominator of the result. It's not trying to evaluate 1/x^2 in a loop and add the result to a sum; it's trying to evaluate it in a way that could also handle SUM(1/y^2,y,x,1000x), resulting in Psi(x,1)-Psi(1000*x+1,1).

To use a conventional numerical loop, don't use CAS. In this case, go to Home mode and use the Σ symbol from the math template key, as in Σ(1/x^2,x,1,5000). This runs much faster than SUM(), as it's not trying to solve symbolically, and will result in a numerical solution (1.6447… in this case).

Just remember the rule: CAS tries to solve symbolically, Home solves numerically, so use the appropriate CAS or Home function depending on which you are trying to do.

In CAS, it is best to always use * when you mean it, e.g. 2*x, a*x, etc. You can get away with using 2x (a number multiplied by a variable), but never with 2 variables, as it will see ax as a single variable called 'ax', so you'd always have to use a*x.

Also, understand that when using variables in an expression, if a variable has a value assigned to it, CAS may be using that value instead of solving symbolically. You can remove the variable's definition using the purge() function. The restart command purges all the CAS variables.

Some CAS functionality can be assisted by telling the system some characteristics about the variables involved, using the assume() and additionally() functions.
The function about(var) returns any assumptions on the variable. Purging the variable removes any assumptions. e.g. getting the integral of e^(-a*x) from 0 to infinity will return undef unless you say assume(RE(a)>0) beforehand; then it will correctly return 1/a as the answer.

Sometimes, CAS will state something cryptic about how its answer may not be perfect, and when you hit Enter again, it then gives you its answer. This is because some math can be complex, and it's trying to tell you what it knows. Don't let these statements worry you; it's just the CAS system being thorough.

Make sure you read the help docs for the about, purge, assume and additionally commands.
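The symbolic-versus-numeric distinction above can be illustrated outside the calculator. Here is a hedged Python sketch, using the stdlib fractions module as a stand-in for exact CAS arithmetic (this is not HP Prime code): the exact sum is a ratio of two huge integers, like the CAS SUM() result, while the float sum is the Home-style numeric answer.

```python
from fractions import Fraction

# CAS-style exact summation: the result is a ratio of two large integers,
# just like SUM(1/x^2,x,1,100) on the CAS screen
exact = sum(Fraction(1, x * x) for x in range(1, 101))

# Home-style numeric summation: floating point from the start
numeric = sum(1.0 / (x * x) for x in range(1, 101))

print(exact.numerator)   # a very large integer
print(float(exact))      # the approx() equivalent, ~1.63498
print(numeric)           # ~1.63498, computed directly
```

Pushing the upper limit higher makes the exact numerator and denominator grow without bound, which is exactly why the calculator's CAS eventually gives undef while the numeric Home sum stays cheap.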
OPCFW_CODE
Is there a difference between an Ubuntu Live CD and an Ubuntu Install CD?

I ordered an Ubuntu Install CD and obtained a CD by post (Ubuntu 9.10). Is this a Live CD or an Install CD? What exactly is the definition of a Live CD?

They are now one and the same. A Live CD allows you to boot the full OS from the CD and try it out fully without affecting your existing HDD OS installation. It then allows you to install fully to the HDD if required. Specific info on Ubuntu's Live CDs is available here: https://help.ubuntu.com/community/LiveCD

A quick demo of Ubuntu: try Ubuntu without any changes to your machine! Windows, or whatever you use normally, is unaffected after trying this and then rebooting (in 99.999% of cases). Almost any Ubuntu CD can be used as a Live CD as well as an installer; it is the default option when booting from the CD. Only some of the non-standard downloads (such as the "Alternate CD") lack this functionality.

Windows users might be familiar with the term 'boot CD' or 'bootable CD'. A "Live CD" is more than that, because it gives the option of running a normal desktop environment with all the normal programs and some extras. A Live CD usually finds your Internet connection, and Firefox should be able to surf the Internet from there. Most distros (versions or "distributions" of Linux) have this Live CD functionality on their installer CD; a few have a separate CD to download. It is rare to find a distro that has no Live CD session at all. Ubuntu tries to make their Live CD the easiest to use.

Both are the same, at least in the case of Ubuntu. When you boot the CD, you can start Ubuntu Linux in LIVE mode. This mode will let you test your hardware for compatibility, for example. Just one thing to remember: since everything is loaded from the CD-ROM, everything is much slower than could be expected when booting from a hard disk. However, you will be able to test all your hardware without touching your disk.
This lets you have a better look at Linux before making the big move. I used such a CD in the past when shopping for a new laptop. I had the CD with me at the store, asking the sales rep to boot the laptops with the CD so I could check compatibility. I then came out of the store with a laptop that ran Ubuntu perfectly: no fuss about drivers, compatibility, wireless, etc. This saved me much time, and avoided potentially costly errors. I talk about Ubuntu, but many other distributions offer such Live CDs. KNOPPIX is one of them. Some of them will offer an option at boot time to either start LIVE, or go directly into the installation process. This second option saves you time, in that it only loads the needed files into memory before starting the setup process, while the LIVE version needs to load a functional environment before letting you start the actual setup.
I shared my initial impressions and setup process in the following two articles. As a brief reminder, Framework set out to deliver a laptop that encourages home servicing and incremental upgrades. The Framework Laptop is fully Windows 10/11 compatible; however, I am a Linux user, and the “open” philosophy of Framework felt like a perfect match.

In this article I plan to share my experience using the laptop over the past month, covering performance, reliability and usability. It should be noted that as I am running Linux, my impressions may not reflect the Windows experience, which has different performance and power consumption characteristics.

With that said, I am pleased to report that Framework has published handy guides that cover the installation process for popular Linux distributions. With Fedora 36, everything works out of the box, including common “pain points” such as the audio, webcam and fingerprint reader.

It is, however, important to ensure you are running the latest BIOS, which at the time of writing is version 3.07. At this time, the BIOS update cannot be triggered from within Linux. However, Framework is actively working to implement LVFS (Linux Vendor Firmware Service). As a result, the BIOS must be updated manually, by downloading the relevant files from the Framework Help Center. Once downloaded, updating is a simple three-step process:

- Extract the contents of the “.zip” to a FAT32-formatted USB drive.
- Disable Secure Boot in the BIOS (F2 > Security > Secure Boot).
- Boot the system while tapping F12, and select the USB drive.

The remainder of this article assumes BIOS version 3.07. The specification of my Framework Laptop can be found below.
- Framework Laptop DIY Edition
- Intel i7-1185G7 4.80GHz (4C/8T)
- 64GB Crucial DDR4 PC4-25600C22 3200MHz RAM
- 1TB Western Digital Black SN850 NVMe (7GB/s Read)
- Intel Iris Xe Graphics
- 13.5-inch LCD Display (2256x1504 @ 60Hz)
- 2x USB4 (USB-C), 1x USB 3.2 G2 (USB-A), 1x HDMI 2.0b

Knowing that the Framework Laptop is a standard x86-64 architecture running an Intel 11th Generation processor, the performance is highly predictable. For example, any generic benchmark and/or review of the Intel i7-1185G7 will provide good insight into the performance. The main differentiator would be any limitation regarding thermal throttling, which I am pleased to report is not something I have experienced with the Framework Laptop.

To provide some numbers, the Geekbench 5 score was 1727/6134 (single-core/multi-core). I also tested Docker, leveraging the build process outlined in my article “Docker and Apple Silicon”. Overall, the performance of the Framework Laptop is very good. Not groundbreaking, but consistent with other laptops of an equivalent specification.

Finally, it is worth mentioning that Framework recently announced a series of Intel 12th Generation mainboards, which can be installed as an upgrade. With the combined benefits of the newer 12th Generation architecture and higher core count, I would expect to see a decent performance improvement over the current 11th Generation options.

So far, I have been impressed with the reliability of the Framework Laptop running Fedora 36. I have not experienced any errors or unforeseen issues caused by hardware or software. Although this should be a baseline expectation, there has been a worrying trend of poor quality control in consumer electronics, even from “big brands” such as Dell. Knowing that Framework is a new company, combined with the additional complexity of designing a modular laptop that can be serviced at home, I would have accepted a few “issues”. Therefore, it is great to see this level of quality in their first consumer product.
The only area I would highlight relates to the display hinge, which (in my opinion) is not rigid enough and can result in a minor wobble when typing. Interestingly, it would appear this was a common view, and Framework recently released a new hinge kit, which includes a more rigid (4.0kg, up from 3.3kg) mechanism. This is a great example of listening to your customers, and of the unique advantage of a modular design, where individual parts can be replaced.

Finally, it is important to highlight the battery life, which I would describe as “ok”. In theory, the laptop can achieve 10 hours of usage; however, I do not believe I have ever achieved this outcome. In my experience, I would guess between 6 and 8 hours.

When using Linux, though, it is “sleep” mode that causes the biggest concern. By default, using Fedora 36, the Framework Laptop battery can drain as much as 15% per hour while in sleep. This is far from ideal, likely caused by the fact that Fedora 36 (at this time) cannot take advantage of the full sleep state of the Intel processor. To mitigate this issue, I switched the default sleep state from “s2idle” to “deep”, using the commands highlighted in my previous article. This change reduces the battery drain to approximately 3% per hour, which is not perfect, but manageable. As a result, I do not believe I could rely on the Framework Laptop as an “all-day battery” laptop, which is a shame, as I have been spoilt by the latest Apple MacBook Pro running Apple Silicon.

Overall, the Framework Laptop has been a pleasure to use, either as a standalone laptop or connected to my Samsung C49RG90 49-inch Super Ultra-Wide monitor. The keyboard, modular expansion cards, 3:2 aspect ratio, and display flexibility are particular highlights. The touchpad, speakers, webcam and microphone are all good: certainly not the best I have used, but perfectly viable for daily use.
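For reference, the s2idle-to-deep switch described above uses a standard Linux kernel interface. A minimal sketch follows: the temporary switch is the documented `/sys/power/mem_sleep` mechanism, while the `grubby` step for persistence is my assumption of the usual Fedora workflow rather than the exact commands from the earlier article.

```shell
# Show the available sleep states; the active one appears in
# brackets, e.g. "s2idle [deep]".
cat /sys/power/mem_sleep

# Switch to the deeper S3 sleep state until the next reboot.
echo deep | sudo tee /sys/power/mem_sleep

# To make the change persistent on Fedora, add the
# mem_sleep_default=deep kernel parameter via grubby.
sudo grubby --update-kernel=ALL --args="mem_sleep_default=deep"
```

After a reboot, `cat /sys/power/mem_sleep` should show `deep` as the selected state.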
I do like the physical buttons to disconnect the webcam and microphone, providing additional reassurance for privacy-conscious users. The display itself and the fingerprint reader are my only frustrations. The display aspect ratio (3:2) and resolution (2,256 x 1,504) are great; however, the panel is highly reflective. The fingerprint reader is ok, but (in my experience) unreliable when compared to Apple Touch ID. This might be down to the Linux driver rather than the hardware, which I have not been able to test with Windows.

Finally, it is worth reiterating the primary selling point of the Framework Laptop: in the event of a hardware issue, I can simply swap the individual component.

In conclusion, I can’t help but love the Framework Laptop.

- Is it perfect? No.
- Is it the best laptop I own? No (Apple MacBook Pro M1 Max).
- Do I still love it? Yes.

The vision and ambition of Framework as a company is infectious, and they have done an amazing job delivering a product that redefines how we think about a laptop, with a focus on the user and sustainability. I am eager to support their cause and excited to see what the future holds!
PayPal Direct Payment Integration in PHP

Follow these steps for PayPal payment integration in PHP. The integration can be used independently with other applications; the aim is simply to streamline the ordering process using an efficient payment gateway. Depending on your needs, you can use a standard checkout button or PayPal Direct Payment, for example on a subscription site. The gateway stores credit card numbers securely and generates a random token, and an SDK is available for testing purposes.

This system can be used for subscription payments; however, the sample code is not currently set up to do so. Lastly, you need to test, test, test your final code before going live, as otherwise you could be dealing with a financial disaster on your hands.

Within the checkout process, customers can choose the payment method and the shipping address while using Express Checkout. The Direct Payment method has several variations that enable you to authorize a payment and complete it at a later date. For Direct Payment transactions in PHP, remember to validate input such as phone numbers before submitting.
To be notified of payment-related events you may need a Premier or Business account. Make sure your API login ID and credentials are set up correctly, and confirm that the script which sends this information to PayPal is working before accepting real payments. You can also offer an offline payment option alongside the gateway. The direct payment details are posted to your PHP (or Laravel) controller, which sends them to PayPal and outputs the response; the card data itself is not stored on your server. Design the UI and buttons for the types of payment you wish to offer at checkout, and keep your sandbox and live credentials separate. On the server side, fetch the products, record the transaction, and log PayPal's notification in your database.
For debit or credit card payments the CVV is required along with the other card details, and more information about the order should be displayed to the customer for confirmation. Use PHP to insert the resulting records into your database. Note that there are separate API keys for testing (sandbox) and live use, so make sure the right set is configured before going live. PayPal offers several APIs and interaction methods to choose from, and it can also be integrated with a WordPress site. On the client side, the card details are exchanged for a token that is sent to the server, and the default form submission is prevented. When considering which payment gateway is right for you, compare the per-transaction fees, as this may save you a dime or two daily.
The return page will acknowledge the user as shown below. PayPal supports both an API signature and an API certificate method for authentication. To use the IPN (Instant Payment Notification) service you must have either a Premier or a Business account. In the example, the page shows three products, each with a choice of three colors. Similar integrations exist for PHP frameworks such as Laravel and CodeIgniter. To obtain credentials, log into the PayPal developer dashboard, open REST API apps, and click the "create app" button. Full refunds can then be issued and reviewed from the merchant dashboard.
There are many payment gateways to choose from, and Stripe is one of them; note that some gateways offer no currency conversion option. On the client side, send the token to your server, then verify the transaction status and details there before fulfilling the order. Check the currency setting on your account, as international transactions (for example, with UK accounts) may behave differently. Test in the sandbox until the flow works as expected, and keep an eye on unexpected discrepancies and chargebacks before going live. Finally, direct debit is more complex to set up than card payments, so plan for the extra verification it requires.
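As a concrete starting point for the REST credentials mentioned above, here is a minimal sketch of requesting an OAuth2 access token from the PayPal sandbox with curl. The CLIENT_ID and CLIENT_SECRET placeholders are assumptions: replace them with the values from your own REST API app.

```shell
# Exchange the REST app credentials for an OAuth2 access token
# against the PayPal sandbox environment.
curl -s https://api-m.sandbox.paypal.com/v1/oauth2/token \
  -u "CLIENT_ID:CLIENT_SECRET" \
  -H "Accept: application/json" \
  -d "grant_type=client_credentials"
# The JSON response contains an "access_token" field, which is
# then used as a Bearer token on subsequent API calls.
```

For live traffic the host changes from `api-m.sandbox.paypal.com` to `api-m.paypal.com`, which is why the sandbox and live credential sets must be kept separate.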
So my mum’s Kubuntu laptop broke; the screen is completely kaput. You know that test pattern LCDs do when they are really not happy and they’re more liquid than crystal? Yeah, it’s doing that. I have it now to play with, but as a result my mum now has a shiny new Windows 11-based gaming laptop. (Funny aside: I backed up everything from her old laptop onto an SD card, and went over yesterday to copy everything I’d recovered… completely forgot I’d formatted the SD card to ext4… whoops xD)

Anyway, I was setting it up for mum and I asked her what browser she wanted to use. She said Firefox (woo!), which started a discussion with dad about how Edge was a much better and more secure browser than Firefox… which I’m sceptical about. He talked about its integration with the OS, which to me as an absolute layman again seems like a bad thing overall.

I don’t hate Edge tbh. It’s just… Chromium at this point. Like, I knew it had the Chromium back end, but I wasn’t expecting it to have the Chromium front end as well now. I actually use it on Linux alongside Firefox and Konqueror, though naturally this doesn’t have the OS integration of the Windows version.

I was wondering what your thoughts on this were. I think on balance that, whatever Microsoft’s best efforts may be, I am always going to err towards open source over proprietary as far as security goes, but most of this comes from irrational gut feeling rather than any real insight about the state of Edge these days. On the other hand, I guess if Edge is basically Chromium, which is also open source, presumably this means that Firefox doesn’t have the advantage there.

Chromium-based browsers are the #1 target when browser exploitation comes to mind.

I’d imagine so, inasmuch as they have like 90+% market share.

You might discuss the lack of privacy, and Microsoft’s track record on phoning home, leading to trackers, browser fingerprinting, and the like.
I was impressed recently when LibreWolf persuaded Google that I was using a vanilla Windows system. Freaked Google out a bit.

No shame in that… I am not convinced there is that much integration; various parts of Windows appear to use Edge’s copy of the Blink renderer, but the browser itself does not appear to integrate in any substantial way, perhaps intentionally, so as to avoid any unwanted scrutiny. Notably, while a fresh install of Windows has the ~/Favorites folder, creating or moving files there does not affect Edge’s Favorites list (bookmarks). The only other distinctive thing that comes to mind is the new-tab page with its daily changing background image or video. I am not sure if these mirror the daily Bing picture, the daily Windows lockscreen photo (which differs from Bing), or use a unique daily image.

As would I, though Chromium does still have some potentially unwanted network connections to Google from what I have heard; next time I need a Blink browser I might look at ungoogled-chromium again.

Is that mainly just changes to the user agent and the navigator DOM object? Is there an info page where it describes what the developer did? I suppose it could have just reused the modifications that Tor Browser used. I think recent versions of Tor Browser might not attempt to imitate different operating systems anymore, but I could be wrong.

I looked at the FAQ. Yes, LibreWolf uses stuff from the Tor Uplift project.

The biggest security risk on modern browsers is the user themselves, and most attacks aren’t targeting the browser - they’re targeting the user. Phishing scams, downloading image.jpg.exe and opening it, stuff like that. So tell your dad that it’s the user that makes a difference, and no one’s smarter than your Mum.

From a technical perspective I don’t have any hard data on what’s better than the other. What you hear is that Chromium-based browsers have more sandboxing features, but whenever I’ve tried to prove it I just find FUD.
And honestly, sandboxing is only a tiny part of a browser.

This. I don’t think there’s much difference in regards to security between the mainstream browsers like Firefox, Edge, Chrome, etc.; the biggest security flaw is the user.

@cakeisamadeupdrug not sure how much this actually helps security, but I’ve told my mother to use two different browsers: one for online banking and other important stuff, and the other for everything else.

I do actually do this, but for a slightly different reason. I would never use my main browser for something I might show on stream or with students when teaching online. I use a separate browser for all of these.

Many power users I know use Edge, but only because they’re power users. IIRC, Microsoft provides a comprehensive list of almost all of their IPs and subdomains for Windows, which includes trackers and all, which one can block using a Pi-hole. But for a regular user, anything works, and they’ll probably be the biggest vulnerability themselves. The only choice you might have is the varying telemetry.
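On the Pi-hole point above, a hypothetical sketch assuming Pi-hole v5's command-line interface; the domain shown is illustrative, not taken from Microsoft's published list.

```shell
# Add a single hostname to the Pi-hole blacklist (v5 CLI).
pihole -b telemetry.example.com

# Block a domain and all of its subdomains via the regex blacklist.
pihole --regex '(\.|^)telemetry\.example\.com$'

# List what is currently on the exact blacklist.
pihole -b -l
```

In practice you would loop over the published list of hostnames rather than adding them one at a time, or import it as a blocklist through the Pi-hole web admin interface.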
An accomplished, results-driven Developer with solid experience in the design and development of large-scale B2B e-Commerce web portals using .NET technologies. Over 10 years of web development experience developing and managing portals for the financial and medical industries, encompassing the design of EDI and business applications to support banking/financial partners. Unique combination of skills in multiple areas of technology, including Web Design, Software Development, Network Engineering, LDAP, Systems/Database Administration and IT/Web Security. MCP and former MCSE with a deep understanding of Microsoft systems. Track record of identifying client needs and developing scalable web-based applications that increase functionality, reduce costs and improve overall systems performance.

.Net System Analyst / Network Administrator

Oversaw and deployed all network services, including point-of-sale systems, video surveillance, access control, and Active Directory. Developed custom SQL Server procedures, triggers and assemblies to aid retail operations. Over the course of my employment I was responsible for several database migrations, deployed workflow accounting solutions, and streamlined the intake of documents by utilizing on-premise MS SharePoint services. Developed custom C# .NET modules supporting the retail systems to meet government standards, deploying custom IoT software/hardware and interfacing with several vendor SDKs and Web APIs. Extended our retail operations to utilize several MS Azure services, including enhancing our SSRS reporting with Power BI, MS Power Automate and Power Apps in support of the organization's complex accounting requirements. Responsible for developing the in-house web-based fishery management system, a tool used in the management of a commercial fishing fleet. Supported networking, security and mobile infrastructure, including the local PBX and access control systems.
Other duties included facility network security, supporting office personnel with advanced MS Office technical issues, and ransomware prevention. Primary technologies used were ASP.NET Web Forms, SharePoint, MS SQL Server and SSRS.

Sr. Software Engineer

Provided direction and assurances on development methodologies and provided proofs of concept to various government organizations within Canada and the US. Major projects included migrating an ASP application to a CDC-approved .NET immunization registry for the state of Illinois. The open-source immunization registry (I-CARE) is built upon Oracle 9i and utilizes Oracle Label Security and XML Web services as its foundation technologies. Other e-government projects included an e-commerce application for the US Department of Commerce and expense tracking for the Atlantic Canada Opportunities Agency. Managed and maintained server development environments, including performance monitoring and version control for the development team.

Sr. System Analyst

Developed multiple web portals for global partners. Prototyped models with other technologies (.NET, PHP and JSP) to offer input on the direction of future architecture. Served as a key member of the Core Technology Group, leveraging ASP technology to streamline the content management process by developing and integrating a custom CMS with the financial planning portal. Selected by the Director of Operations to serve as a key resource on a SWAT team resolving production Web Services and technical issues with major clients in the financial sector, based on SLAs. Leveraged MCSE and operations experience to identify security holes related to ASP technology and quickly implemented solutions to avoid security contingencies. Coordinated with a Business Analyst to provide solutions to modeling issues that enabled the group to develop a true Monte Carlo simulation within an MS Excel document.
Summary: An interesting real-world example of Microsoft’s influence on the press

Microsoft’s use of Free software is a subject that we covered many times before, e.g. in [1, 2, 3, 4]. Hotmail, for example, was running BSD long after Microsoft had acquired it, but how far did a dishonest Microsoft go to deny it? Well, Slated has picked up some old links which nicely fit and explain a newer incident. The first link he picked is this one, where Microsoft admits being a BSD user.

Despite the company’s bitter campaign against open source software, Microsoft continues to use FreeBSD to power important functions of its Hotmail free e-mail service. Much to the chagrin of the folks at Redmond, FreeBSD and Apache continued to run Hotmail for several years after it was purchased in 1997. Microsoft publicly claimed to have removed all traces of FreeBSD last summer, and even published a case study documenting its experiences.

Microsoft told BetaNews that solutions such as FreeBSD are in use throughout its IT infrastructure. A spokesperson also clarified the software giant’s position on OSS technologies, and its views on GPL licensing. Microsoft maintains, however, that it is migrating to its own proprietary software and that any delays are meant to ensure a positive experience for its customers.

Contrary to recent claims, the popular Hotmail service does not run entirely on the Windows 2000 platform. First reported by the Wall Street Journal, FreeBSD developer Trevor Johnson determined that Microsoft was still using the open source operating system for DNS hosting and also for tracking advertisements. It has also been reported that FreeBSD software components are utilized in Microsoft products, such as Windows 2000. BSD’s TCP/IP stack, a vital communication protocol, is rumored to have been used in several Windows operating systems, enabling users to connect to the Internet.

Slated does not stop there.
“The original WSJ article,” he points out, “has mysteriously disappeared, but fragments remain elsewhere.” Wall St. Journal: Microsoft Uses Open-Source Code Despite Denying Use of Such Software Lee Gomes, the reporter who wrote the friendly (and curiously MSNBC-edited) piece last week about “Microsoft’s Uphill Battle Against Linux” is back this week with an amplification on Microsoft’s use of open source software: “Microsoft Corp., even while mounting a new campaign against open-source software, has quietly been using such free computer code in several major products, as well as on key portions of a popular Web site — despite denying last week that it did so. Software connected with the FreeBSD open-source operating system is used in several places deep inside several versions of Microsoft’s Windows software, such as in the “TCP/IP” section that arranges all connections to the Internet. The company also uses FreeBSD on numerous “server” computers that manage major functions at its Hotmail free e-mail service, whose registered users exceed 100 million and make it one of the Web’s busiest sites. Microsoft acknowledged its repeated use of open-source code Friday, in response to questions about the matter. Just two days earlier, it had specifically denied the existence of any such software at Hotmail.” Also from LinuxToday (as per yesterday): Why is the NY Times so Dumb About Linux and Windows? The New York Times seems hard-wired to rarely identify any Windows malware as Windows malware, but rather as “computer malware.” They seem to share this illness with other people too, such as researchers and professors. Can it be that all these educated people who make their livings knowing things and uncovering new knowledge really don’t know that there are other computer operating systems besides Microsoft Windows? 
Their latest failure at making this distinction is China Orders Patches to Planned Web Filter, and they also missed the real story: since this censoring software is required to be installed on all computers sold in China, does that mean that Mac, Linux, and Unix computers are banned? Because it’s a Windows program. Microsoft and the New York Times are very close. Steve Ballmer publishes articles in there sometimes. A year ago we wrote about the New York Times promoting Silverlight and this was hardly surprising given the strong relationship between those two. Just months ago there was a rumour that Microsoft would buy the debt-saddled New York Times. So, what Carla points out above is that the New York Times, which enjoys a wide daily distribution, consistently defends Microsoft through omission of critical details. The BBC too perpetuates the belief that computers and Windows are synonymous. We previously explained why the BBC and NBC cannot ever be trusted on Microsoft and Novell matters and returning to Slated’s links, he also shows that “The MSNBC even tried to censor the story [about Hotmail running on Free software].” MSNBC has been caught doctoring copy originating from the Wall Street Journal to make it more favourable to the news channel’s co-owner Microsoft. The changes introduced by MSNBC also had the effect of removing references to Microsoft competitors. Amongst many fairly harmless edits, designed to improve readability, were some more ominous changes. The original WSJ report gave a harsh analysis of Microsoft’ offensive against open source software and the GNU General Public License, initiated six weeks ago by Craig Mundie. The WSJ cited Microsoft’s own dependence on open source software, and cited lawyers who were critical of its interpretation of the General Public License. 
“Microsoft said that since last summer, Hotmail has been running on both Windows 2000 and the Solaris operating system from Sun Microsystems Inc.,” noted the original copy from the WSJ. MSNBC amended this to:- “Microsoft said Hotmail has been running on Windows since last summer.” By Friday, the original version of the story that appeared in the WSJ had been restored to MSNBC. “Here’s the best rebuttal I could find,” writes Slated, “although the author still does not actually deny that Microsoft benefited from “freeloading” the BSD code.” I worked at Microsoft for ten years, most of it on the core Windows NT/2000 (hereafter referred to as NT) networking code. As such I briefly dealt with the Hotmail team, mostly to hear them complain about the lameness of the telnet daemon in NT (a valid point). I do know that when Microsoft bought Hotmail, the email system was entirely running on FreeBSD, and Microsoft immediately set about trying to migrate it to NT, and it took many years to do so. Now it seems that the transition is not complete. Well, what are you gonna do. Now, some of Spider’s code (possibly all of it) was based on the TCP/IP stack in the BSD flavors of Unix. These are open source, but distributed under the BSD license, not the GPL that Linux is released under. Whereas the GPL states that any software derived from GPL’ed software must also be released under the GPL, the BSD license basically says, “here’s the source, you can do whatever you want, just give credit to the original author.” Eventually the new, from scratch TCP/IP stack was done and shipped with NT 3.5 (the second version, despite the number) in late 1994. The same stack was also included with Windows 95. However, it looks like some of those Unix utilities were never rewritten. 
If you look at the executables, you can still see the copyright notice from the regents of the University of California (BSD is short for Berkeley Software Distribution, Berkeley being a branch of the University of California, for some reason referred to as “Berkeley” on the East Coast and “California” on the West Coast…and “Berkeley” is one of those words that starts to look real funny if you stare at it too long – but I digress). Keep in mind there is no reason to rewrite that code. If your ftp client works fine (no comments from the peanut gallery!) then why change it? Microsoft has other fish to fry. And the software was licensed perfectly legally, since the inclusion of the copyright notice satisfied the BSD license. To conclude, Slated writes: Did Microsoft satisfy the BSD license? Are they “freetards”, according to [some] definition? Microsoft and their anti-Freedom supporters are a bunch of hypocrites. Or, to use the words of the above author, it’s “like the event horizon calling the kettle black”. So when can we expect Microsoft (or even Spider Systems Ltd.) to compensate The Regents of the University of California for “all their hard work”? It sure changes one’s perspective. █
OPCFW_CODE
Texturing for games using Blender is confusing

I am a total noob who is interested in understanding how textures for video gaming engines work, and I am completely lost. I can't find tutorials or clear explanations of these topics, so I would love to have some insight from some industry professionals. I found a tutorial of a really cool looking mushroom, which I believe is made by a style called toon shading, which reminds me of games like Borderlands and Zelda. Now in the tutorial, this entire process is made inside of Blender and, as far as I understand, without this technique called UV mapping or unwrapping, you cannot transfer them to game engines. (I still don't understand the complete difference between these terms as well.) Now I kind of have an understanding of what UVs are: they are essentially transferring the surface of the 3D object to a 2D plane, so artists can paint them easily and have these maps get recognized by the game engines (I guess?) like Unity or UE. However, what I don't understand is why the node-based system of Blender cannot be transferred to game engines. After you UV unwrap your object, how can you transfer all of this data, the colors, the grease pencil lines (the black lines), into a 2D map without ruining your texture? Is this why every single full pipeline tutorial I see uses the software called Substance 3D Painter? I haven't been able to find a simple in-depth tutorial on how Substance Painter works with stylized art. I know the concept of hand-painting, but from most of the sped-up tutorials I have seen, people are able to create stuff like this paper without painting anything; they just click on the different parts of the object, apply the materials from the software, and get this stylized look without painting anything. After doing some research, I believe some of these smart materials (I may be pulling terms from my bottom at this point, sorry, I am really having a hard time understanding all of this!)
are provided by Substance Painter, some are created by professionals and sold for usage, and some come from a completely different software where you create these textures. I believe it is called Substance Designer, and the stuff some people achieve with that program is unbelievable, like this lava rock thingy. I know the artist created this using Substance 3D, but I don't understand where this is used. Do you make a complete volcano zone-game map thingy with this material without ever painting anything? Do you apply this material directly in the game engine, or do you transfer this to Blender and somehow export it as a completely textured model to your game? Now to wrap this mess of a post up, a final image of this incredibly cool level that was created in Unreal Engine. I reached out to the artist who created this but sadly could not get an answer. The listed programs he used do not include Substance 3D Painter. So does this mean that he created these awesome looking textures in Substance Designer and applied these textures to his 3D models directly in Unreal Engine? If so, does this mean that Blender is only used for modeling and sculpting? If someone desires to create awesome looking stylized textures, without hand painting or Adobe products, and transfer those textures to game engines, what should they do? Does Blender provide the necessary tools to create stuff like the pictures? Thanks in advance. https://www.artstation.com/artwork/g2WDRm (The original link for the final picture)
STACK_EXCHANGE
What is wrong with my tuple?

I created a function to take in a text file with such data:

2012-01-01 09:00 Angel Men's Clothing 214.05 Amex
2012-01-01 09:00 Ben Women's Clothing 153.57 Visa
2012-01-01 09:00 Charlie Music 66.08 Cash

and it converts that into a list of tuples.

Code:

myList = [tuple(j.split("\t")) for j in stringX.split("\n")]

Result:

[('2012-01-01', '09:00', 'Angel', "Men's Clothing", '214.05', 'Amex'), ('2012-01-01', '09:00', 'Ben', "Women's Clothing", '153.57', 'Visa'), ('2012-01-01', '09:00', 'Charlie', 'Music', '66.08', 'Cash')]

And further converts it into this.

Code:

nameList = [(float(item[4]), item[2]) for item in myList]

Result:

[(214.05, 'Angel'), (153.57, 'Ben'), (66.08, 'Charlie')]

With that small sized text file, it's running perfectly. But I have to convert a big text file that is over 200 MB with over 1 million lines. It manages to convert into a list of tuples, but it doesn't convert further into the smaller list of tuples as shown above. It gives me this error when I run the program with the big file:

File "C:\Users\Charlie\Desktop\PYC\PYTHON ASSIGNMENT\test3.py", line 34, in <listcomp>
    nameList = [(float(item[4]),item[2])for item in myList]
IndexError: tuple index out of range

Are you sure this was not an empty line you are reading? You have to check the "item" length before using it in this way.

I think there is an extra empty line in the text file, so stringX will have '\n' at the end, and when you split, you get ('',) as the last element; that is the problem! Thank you very much. May I ask if there is a way for the code to ignore the empty line? @JeroenHeier

@type_none Your tuple has an empty entry; that is why you are getting "IndexError: tuple index out of range". You can add an if condition to validate that the tuple has any values in it.
EX:

myList = [('2012-01-01', '09:00', 'Angel', "Men's Clothing", '214.05', 'Amex'),
          ('2012-01-01', '09:00', 'Ben', "Women's Clothing", '153.57', 'Visa'),
          (),
          ('2012-01-01', '09:00', 'Charlie', 'Music', '66.08', 'Cash')]

nameList = [(float(item[4]), item[2]) for item in myList if item]
print nameList

[(214.05, 'Angel'), (153.57, 'Ben'), (66.08, 'Charlie')]
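A hedged sketch of the whole pipeline with such a guard in place (function and variable names are my own, not from the thread). Filtering out rows too short to index avoids the IndexError caused by the trailing newline:

```python
def parse_sales(text):
    """Split tab-separated text into (amount, name) pairs,
    skipping blank lines such as the one left by a trailing newline."""
    rows = [tuple(line.split("\t")) for line in text.split("\n")]
    # Guard: only keep rows long enough to index fields 2 and 4.
    return [(float(row[4]), row[2]) for row in rows if len(row) > 4]

sample = (
    "2012-01-01\t09:00\tAngel\tMen's Clothing\t214.05\tAmex\n"
    "2012-01-01\t09:00\tBen\tWomen's Clothing\t153.57\tVisa\n"
    "2012-01-01\t09:00\tCharlie\tMusic\t66.08\tCash\n"  # trailing newline
)
print(parse_sales(sample))
# → [(214.05, 'Angel'), (153.57, 'Ben'), (66.08, 'Charlie')]
```

For a 200 MB file it would also be worth iterating over the file object line by line instead of loading the whole string, but the guard above is the fix for the error itself.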
STACK_EXCHANGE
Ignore some nodes from checking Hi, Could you please suggest me a method to ignore some nodes when traversing? I have tried dontGoDeeper() but didn't really help me. Please provide me with an example. Thanks. I'm assuming you'd prefer to not even compare these nodes to begin with, right? Here are some examples that show you how to exclude properties from the comparison process. This way they won't even show up in the result. Thanks for the reply. It worked really well. I came up with another problem. I need diff checker to traverse through classes and check the difference with the relevant classes. Please let me know if there is a way to do that. ex:- Assume a complex object school which has classes and students. Is there a way for me to compare two school objects and check the difference. My diff checker should traverse through inner objects and show me differences. I checked this scenario and currently it returns null for inner objects in one comparing object. Hm, your desired behavior should actually work out-of-the-box. Could you post a working code example, so I can check what's going wrong? 
Here is my diff checker method:

```java
public void checkDiff(T t1, T t2) {
    ObjectDifferBuilder objectDifferBuilder = ObjectDifferBuilder.startBuilding();
    DiffNode diff = objectDifferBuilder.build().compare(t1, t2);
    diff.visit(new DiffNode.Visitor() {
        public void node(DiffNode node, Visit visit) {
            final Object baseValue = node.canonicalGet(t1);
            final Object workingValue = node.canonicalGet(t2);
            final String message = node.getPath() + " changed from "
                    + baseValue + " to " + workingValue;
            System.out.println(message);
        }
    });
}
```

And here is how the school objects are created:

```java
Student student4 = new Student();
student4.setName("Tharanga");
student4.setIndex(678);

Student student5 = new Student();
student5.setName("Luke");
student5.setIndex(635);

Student student6 = new Student();
student6.setName("Ruchira");
student6.setIndex(800);

ClassFoo classFoo3 = new ClassFoo();
classFoo3.addStudents(student4);
classFoo3.addStudents(student5);

ClassFoo classFoo4 = new ClassFoo();
classFoo4.addStudents(student6);

SchoolFoo schoolFooC = new SchoolFoo();
schoolFooC.addClasses(classFoo3);
schoolFooC.addClasses(classFoo4);
```

The other school object is as follows:

```java
Student student1 = new Student();
student1.setName("Mike");
student1.setIndex(500);

Student student2 = new Student();
student2.setName("Ryan");
student2.setIndex(523);

Student student3 = new Student();
student3.setName("Pieter");
student3.setIndex(400);

ClassFoo classFoo1 = new ClassFoo();
classFoo1.addStudents(student1);
classFoo1.addStudents(student2);

ClassFoo classFoo2 = new ClassFoo();
classFoo2.addStudents(student3);

SchoolFoo schoolFooB = new SchoolFoo();
schoolFooB.addClasses(classFoo1);
schoolFooB.addClasses(classFoo2);
```

And finally the diff checker:

```java
DiffChecker diffChecker = new DiffChecker();
diffChecker.checkDiff(schoolFooB, schoolFooC);
```

So far, so good. The code looks fine. The problem could be in your classes. Do you provide getters for all the relevant properties? Could you post your classes as well?
Doing some spring cleaning in the issue tracker, so I'm closing the issue for now, due to inactivity. Feel free to reopen it, if there is anything I can help you with.
GITHUB_ARCHIVE
68. Telegram From the Embassy in Switzerland to the Department of State1 2285. Space Communications Talks with Soviets. US group met June 15 with six-man Sov delegation consisting Blagonravov and Milovidov [Page 131] (Academy of Sciences), Badalov and Kalashnikov (Ministry Communications), Stashevsky and Krasulin (Foreign Ministry).2 Blagonravov said he made head delegation since Academy responsible for communications satellite experiments. Said assumption from outset his talks with Dryden in 1962 was that joint US–USSR efforts would lead to global communications satellite system, but experiments needed on both sides. USSR does not yet have communications satellite but expects to and has had experiments on means of communication in connection its space flights. (When queried later re nature of such experiments, Sovs referred only to television broadcasts from Vostok spacecraft to earth and between spacecraft.) Morning devoted mainly to introduction by Chayes and to technical presentation by Istvan (CSC) of company’s work to date and future plans. Sovs questioned justification for planning use of frequencies assigned by EARC on temporary basis. Asked why US emphasized telephone traffic in calculating needed future capacity for satellites, and not giving more weight to television. Same alleged Sov concern emerged in afternoon discussion of US majority control based on telephone traffic with little regard to television. Afternoon meeting devoted mainly to exposition of organizational plans by Johnson (CSC).3 Sovs claimed US plan had discriminatory features in violation GA Res 1721 and Communications Satellite Act. Pointed out frequency bands are property of all countries. Objected as discriminatory to right organization would have after six months to set terms for adherence by additional countries, on ground committee could impose onerous terms on countries which may not now have resources to participate.
Chayes pointed out this would be contrary both to agreements creating committee and to US law, emphasizing Presidential authority over CSC. Sovs concerned to know whether agreements would be between governments or entities appointed by governments. US closed with frank explanation why inevitable that US, which has pioneered technology, should have main influence over early years of organization. Comment: US presentation candid throughout, supplying details where appropriate or requested by Sovs. No mention yet of discussions with Europeans. Sovs attentive but somewhat reserved, asking few questions. Next meeting Tuesday pm. - Source: National Archives and Records Administration, RG 59, Records of the Department of State, Central Files, 1964–66, TEL 6. Confidential. Passed to Charyk and Ende (FCC). An advance copy was passed to Bushong (L) and the E message center was notified.↩ - A full transcript of the meetings is in circular airgram 2247, August 26. (Ibid.)↩ - John A. Johnson of the Communications Satellite Corporation.↩
OPCFW_CODE
Convolutional reverb

Is your feature request related to a problem? Please describe.
It'd be nice to add the ability to convolve signals with impulse responses, possibly multichannel impulse responses.

Describe the solution you'd like
I think we could implement it with the following API:

```python
import nussl

audio = nussl.AudioSignal('/path/to/audio.wav')

# load impulse from file
impulse = nussl.AudioSignal('/path/to/impulse.wav')

# or load from array gotten by some means
impulse_array = generate_random_ir()
impulse = nussl.AudioSignal(audio_data_array=impulse_array, sample_rate=audio.sample_rate)

# non destructive (default), returns a new signal
convolved = audio.convolve(impulse)  # -> AudioSignal

# destructive
audio.convolve(impulse, overwrite=True)  # -> inplace
```

Three cases:

1. If impulse is single-channel, then it is applied to all channels in audio.
2. If impulse is multi-channel and audio is single-channel, then audio is broadcast to impulse. The output of convolve will be multi-channel in this case, even when audio is single-channel. It will have the same number of channels as impulse.
3. If both impulse and audio are multi-channel, then audio and impulse must have the same number of channels.

Since impulse is an AudioSignal object, one could do to_mono() if there is a channel mismatch, then apply it:

```python
# fix for channel mismatch
convolved = audio.convolve(impulse.to_mono())
```

or use make_audio_signal_from_channel:

```python
convolved = audio.convolve(impulse.make_audio_signal_from_channel(0))
```

Treating impulse responses as just AudioSignal possibly gives a lot of nice benefits. Users could also convolve two signals even if one of them is not an impulse response, which could sound...interesting.

Describe alternatives you've considered
Implementing this outside of nussl is simple by just editing the audio_data arrays. But this could be a nice feature, especially for data augmentation.
For actual implementation, we can just use scipy.signal.convolve, I think? We could do a GPU implementation via torch as well, but I don't know if that's worth it. As per our offline discussions of this, I think this is a good proposal. It shouldn't be that hard to implement by adding our own logic around scipy's version, which is already a dependency. Cool, I can get this going. @abugler, any thoughts? One more note, I think this could go as a function that takes two AudioSignal objects in mixing.py perhaps? And AudioSignal.convolve can call that function with self and other. Thoughts? Yeah, I wouldn't think that'd be anything but a wrapper though, right? I don't think it's inappropriate to have it in that file as well; it'd be nice to have another option for people. That being said, it is more development overhead (docs, tests, etc.) to support. It's up to you.

> Cool, I can get this going. @abugler, any thoughts?

I think this is a cool idea! I agree that this function should take two audio signals, and the AudioSignal hook calls it. Should this have a footprint in effects.py somehow? I'm not sure how that would work exactly, but it does seem similar to the rest of the effects hooks. I hesitate to fully endorse this because I don't think it should exist in mixing.py, effects.py and audio_signal.py, but I could understand a case for one of mixing.py or effects.py honestly. Thoughts? I think I'd like it to be in mixing.py with a hook in audio_signal.py.
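As a rough, dependency-free illustration of the three channel cases in the proposal (function names and the naive convolution are my own; nussl would presumably wrap scipy.signal.convolve and operate on audio_data arrays instead):

```python
def convolve_1d(x, h):
    """Naive full convolution of two 1-D sample lists."""
    out = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            out[i + j] += xi * hj
    return out

def convolve_channels(audio, impulse):
    """Apply the channel-broadcast rules proposed above.

    audio, impulse: lists of channels, each channel a list of samples.
    """
    if len(impulse) == 1:           # case 1: mono impulse hits every channel
        return [convolve_1d(ch, impulse[0]) for ch in audio]
    if len(audio) == 1:             # case 2: mono audio broadcast to impulse
        return [convolve_1d(audio[0], ir) for ir in impulse]
    if len(audio) != len(impulse):  # case 3: counts must match
        raise ValueError("channel mismatch: consider to_mono()")
    return [convolve_1d(ch, ir) for ch, ir in zip(audio, impulse)]

stereo = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
mono_ir = [[1.0, 0.5]]
print(convolve_channels(stereo, mono_ir))
# → [[1.0, 0.5, 0.0, 0.0], [0.0, 1.0, 0.5, 0.0]]
```

The mismatch error in case 3 is what the to_mono() / make_audio_signal_from_channel escape hatches above are for.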
GITHUB_ARCHIVE
Under what condition does doing forward pathfinding or reverse pathfinding get a better result? Most tutorials for open-grid tower defence games say to pathfind from the goal to the monster. Why is it so? There is no definitive answer, because depending on the scene either can be faster. But A* relies on a heuristic function which usually states closer = better = evaluate first. Let's take this example from the Wikipedia article on A*: In this particular case, the "closer = better" heuristic leads the algorithm right into a trap. A* keeps moving on the direct line from red to green and fails very late, shortly before reaching green. It then wastes time exploring the interior of the wedge until it has iterated it completely. Only then will it try the path around it. If you'd have started from green, A* would have explored towards red, quickly found the wall, explored around the wall, and then headed straight towards the destination. That means in this particular situation, starting from green would give you a much faster solution. So what you need to consider is: does my level design tend to have "traps" like this which always face in one particular direction? Then you should avoid starting the A* search from that direction. It's worth noting that pathfinding backwards (from the goal to the creature) helps when some actions may need to be taken to get to the goal, or may optionally be taken for an even better path. Examples of this include doors with locks or switches, or gaps that can be crossed with a ladder, or mobility power-ups that can be found in the environment. This may not apply to your specific case, but as a general answer to the question of whether to pathfind forwards or backwards, this is hopefully worth considering.
As a simple example, if you're pathfinding forwards and you come across a locked door, you don't yet know if it would've been worth it to detour to pick up the appropriate key before coming here (which would require pathfinding to start over), or if continuing to search would still yield a better result. However, if we pathfind backwards from the goal and come across the locked door, we can treat it as unlocked and continue expanding through that node, but towards the key instead of the creature. If it's faster to find a long way around instead of going to the key and then the door, then good A* will find its way back to the creature before the locked-door-sub-path has found its way all the way back to the creature with the key it needs. But if the locked-door-sub-path gets to the creature first, the creature knows to first go get the key, and then go to the locked door before heading to the goal. This requires some modification -- normally, A* needn't ever visit a node twice, as once a node is visited, its fastest sub-path has been found (whether forwards or backwards). But if the sub-path changes objective (collecting a key, for example), nodes may be visited by both the key-collecting sub-path and the path that goes around the door if collecting the key is too much of a detour to be optimal.
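To make the direction-dependence concrete, here is a small self-contained A* sketch (the grid layout and all names are my own invention, not from the answers above). It runs the same search in both directions and reports how many nodes each expands; because A* with an admissible heuristic is optimal, both directions find the same path cost, but the expansion counts can differ depending on which way the concave obstacle faces:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid of 0 (free) / 1 (wall).
    Returns (path_cost, nodes_expanded); cost is None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    def h(p):  # Manhattan-distance heuristic: "closer = better"
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    best = {start: 0}
    frontier = [(h(start), 0, start)]
    expanded = 0
    while frontier:
        f, g, cur = heapq.heappop(frontier)
        if cur == goal:
            return g, expanded
        if g > best[cur]:
            continue  # stale heap entry
        expanded += 1
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < best.get(nxt, float("inf")):
                    best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None, expanded

# A wall with short arms whose concave side faces the start on the left.
grid = [
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 1, 0, 0],
    [0, 0, 0, 1, 1, 0, 0],
    [0, 0, 0, 0, 0, 0, 0],
]
cost_fwd, exp_fwd = astar(grid, (3, 0), (3, 6))  # into the concave side
cost_bwd, exp_bwd = astar(grid, (3, 6), (3, 0))  # from behind the wall
print("forward:", cost_fwd, exp_fwd, "backward:", cost_bwd, exp_bwd)
```

Swapping start and goal changes which pocket the heuristic leads the search into, which is exactly the "traps facing one direction" consideration from the answer.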
OPCFW_CODE
An authoring tool for Microsoft Windows Installer (MSI) and InstallScript installations and App-V virtual packages for Microsoft Windows platforms, including support for Microsoft Windows 7. Flexera's InstallShield is a strategic installation development solution designed to enable development teams to be more agile, collaborative and flexible when building reliable InstallScript and Windows Installer (MSI) installations for desktop, server, web and mobile applications.
- Simplify using multi-tier templates to deploy your Web/Server applications as a single cloud-ready package.
- Streamline configuration by automating installing Windows roles and features and running PowerShell scripts.
- Create pure 64-bit installations that use 64-bit custom actions.
- Build both physical installations and virtual application packages from the same build process and ensure compatibility with preferred enterprise application virtualisation technologies; includes new Microsoft App-V 5.0 support.
- Optimise installers for Windows 8 using new Wizard design capabilities, new Start Screen icon pinning options, and new validation tests.
- Supports Microsoft Windows 8, Windows Server 2012 and Visual Studio 2012.
- Enables hybrid cloud deployments with Microsoft SQL Azure database scripting capabilities.
- Automatically checks and downloads updates and patches at run-time.
- Enables the enterprise transformation with support for Microsoft System Center 2012 Configuration Manager and PowerShell.
- Provides deep insight into the install base by connecting you to customers with an automated update and patch solution.

InstallShield is available in the following editions:
- InstallShield Express Edition: Enables software developers and setup authors to create reliable Windows Installer (MSI) installations. It comes with a Project Assistant, which is a wizard that guides developers through the MSI installation creation process, step by step.
- InstallShield Professional Edition: Ideal for both novice and seasoned software installation developers. It has all the functionality of the Express Edition, plus installation customisation, support for 64-bit installations, InstallScript custom actions for MSI projects, support for .NET Framework 4.0, IIS 7.0, Windows Server 2008 R2, DirectX 9.0c and Windows Mobile platforms, Windows 7 installation support, and one free Standalone Build licence. A Virtualisation Pack is available as an option to support the building of Microsoft App-V virtual packages.
- InstallShield Premier Edition: All the capabilities of the Professional Edition, plus Best Practices Validation Suite alerts, InstallShield Repackager, multi-lingual support, network repositories, trial version creation, five free Standalone Build licences and Build Events. InstallShield Premier Edition is available with a node-locked or concurrent licence. A Virtualisation Pack is available as an option to support the building of Microsoft App-V virtual packages.

InstallShield - Features

InstallShield Express Edition Features:
- Support for Microsoft Technologies: Support for Windows 8, Windows Server 2012 and Visual Studio 2012.
- ISO 19770-2: Creates ISO 19770-2 software identification tags as part of the installation development process.
- Software Developers and Setup Authors: Cost-effective solution for creating reliable Windows Installer (MSI) installations.
- Available in Different Languages: Offers developers the flexibility to build in their own language.

InstallShield Professional Edition Features: includes all of the above and the following:
- Windows 7 & 8 Validation Testing: Validate installations against Windows 7, 8, and Windows Server 2012 best practices from Microsoft.
- Advanced UI Designer: Create contemporary install experiences using new Wizard design capabilities.
- Create Pure 64-bit Installations: Deploy your 64-bit applications using 64-bit installations that support server configurations where WoW64 has been disabled.
- Deploy to Hybrid Cloud Databases: Windows Azure SQL Database scripting capabilities enable hybrid cloud SQL deployments.
- Provide Deployment Metadata: Microsoft System Center 2012 Configuration Manager support enables software producers to provide required deployment metadata to their Enterprise customers, reducing the burden of managing their application.
- Latest Updates and Patches: At installer runtime, if an update is found, it will automatically be downloaded and run in place of the old installer.
- Streamline Installation Scripting: PowerShell support offers software producers the ability to streamline installation scripting requirements and support best practices of their Enterprise customers.
- Maintain a Clean Build System: Each licence of InstallShield Professional Edition includes one free Standalone Build licence.
- Reduce Development Time: Quickly and easily build your installations by moving pieces of an existing project (dialogs, custom actions, or features) to another installation project; create and manage re-usable project outlines so you don't have to start your installations from scratch.
- Customise Your Installations with InstallScript: Add InstallScript custom actions to your MSI projects or create InstallScript projects that control your entire installation.

InstallShield Premier Edition Features: includes all of the above and the following:
- Use Multi-tier Installation Templates: Deploy web/server applications as a single cloud-ready package.
- Automate Installing Windows Roles and Features: Avoid the risk of manual tasks by automatically installing Windows roles and features with an application's installation.
- Run PowerShell Scripts from Suite / Advanced UI Installations: Streamline server configuration tasks by running PowerShell scripts, the enterprise scripting language of choice.
- Create Microsoft App-V Installations: Build both physical and virtual application packages from the same build process.
- Virtualisation Suitability Testing: Ensure applications are compatible with enterprise application virtualisation technologies, such as Microsoft App-V, VMware ThinApp and Citrix XenApp.

InstallShield - System Requirements
- Processor: Pentium III-class PC processor (500 MHz or higher recommended)
- RAM: 256 MB (512 MB preferred)
- Hard Disk Space: 500 MB available
- Display: XGA resolution display at 1024 x 768 or higher recommended
- Operating System: Microsoft Windows XP, Server 2003, Vista, Server 2008, 7, Server 2008 R2, 8 or Server 2012
OPCFW_CODE
Feature-request: Better insert-scripts

I see scripting objects gets added in the insider builds. I figured data scripting might be next. Thus a feature request:

Until now, INSERT-scripts in SSMS look like:

```sql
INSERT INTO T_Whatever (id, txt) VALUES(1, N'hello world');
INSERT INTO T_Whatever (id, txt) VALUES(2, N'hello new data');
INSERT INTO T_Whatever (id, txt) VALUES(3, N'i am an update for existing data');
```

This is problematic. Wouldn't it make more sense if it generated an update script like this:

```sql
DECLARE @whatever table (id int not null, txt nvarchar(256) NULL);

INSERT INTO @whatever (id, txt) VALUES(1, N'hello world');
INSERT INTO @whatever (id, txt) VALUES(2, N'hello new data');
INSERT INTO @whatever (id, txt) VALUES(3, N'i am an update for existing data');

INSERT INTO T_Whatever(id, txt)
SELECT id, txt
FROM @whatever AS t
WHERE NOT EXISTS(SELECT * FROM T_Whatever WHERE T_Whatever.id = t.id);

UPDATE T_Whatever
SET T_Whatever.txt = t.txt
FROM @whatever AS t
WHERE t.id = T_Whatever.id;
```

Anybody who didn't like the update part could delete it in a second. Or with the new MERGE syntax:

```sql
;WITH CTE AS
(
    SELECT 1 AS id, N'hello world' AS txt
    UNION ALL
    SELECT 2 AS id, N'hello new data' AS txt
    UNION ALL
    SELECT 3 AS id, N'i am an update for existing data' AS txt
)
-- SELECT * FROM CTE
MERGE INTO T_Whatever AS A
USING CTE ON CTE.id = A.id
WHEN MATCHED THEN
    UPDATE SET A.txt = CTE.txt
WHEN NOT MATCHED THEN
    INSERT (id, txt)
    VALUES (CTE.id, CTE.txt);

-- DELETE FROM T_Whatever WHERE id = 1;
-- SELECT * FROM T_Whatever;
SET NOCOUNT OFF;
```

Or the MERGE syntax with the table-variable from the first example instead of the CTE. If the table doesn't have a primary key, it could just insert from the table-variable.
(or use EXCEPT and INTERSECT to match on all fields, though INTERSECT/EXCEPT would be problematic with tables with xml and varbinary fields)

Also, if it creates an insert for datetime/datetime2 fields, it should use ISO syntax (you know, international standards organization, aka "yyyy-MM-ddTHH:mm:ss.fff"). Note the T delimiting time and date. Until now, the insert scripts omit the T and put a whitespace instead, which apparently works for people who have an en-US-cultured Windows account at work, but not for me and others, because we have a culture of de-CH or de-DE, or something else (which we by the way can't even choose, otherwise I'd use en-US because of boatloads of things like that), and there, that insert script with a whitespace doesn't work. (It is also a problem when a script is run at an on-premises location, where you can't control under what account language a script will be executed, which is especially problematic in Switzerland, as this can be 4 different languages.) It's not funny if you have to search for the problem in a boatload of SQL scripts, and then still need to change each and every value, because there could potentially be thousands of them. And just like that, precious (expensive) hours pass by.

You can actually do this today if you install the Database Administration Tool Extensions for Windows extension (Windows-only at the moment, though, unfortunately). You can find that in the Marketplace view. Once you've installed that extension, right-click on the database containing the table you'd like to script and select Generate Scripts. Follow the dialog to choose the objects to script, and on the Set Scripting Options page click the Advanced button. From there go down to Types of data to script and choose Schema and Data. Finish going through the wizard and your script should be generated with the included INSERT statements as appropriate!
If you do this then the datetime fields should be generated with the ISO syntax - if yours aren't then file a separate issue and we can look into it. @Charles-Gagnon: I've downloaded the latest insider build from AzureDataStudio today (25.07.2019), installed Database Administration Tool Extensions for Windows, connected to db, right-clicked "generate scripts", chose one table, "Save to new query window" + advanced "schema and data", next, next => and all it does is it opens https://docs.microsoft.com/en-us/sql/azure-data-studio/download?view=sql-server-2017 Thanks for your feedback! The issue with opening a new window not working for insiders is a known issue #6276 so follow that for updates. You can also install the stable build of ADS and use that - the feature should work as expected there. For your other requests please open up separate issues for them - it's better for tracking if we have a separate issue opened for each problem rather than bunching them all together into a single issue like this.
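The culture-independent literal the reporter asks for is just ISO 8601 with the T separator. As a quick illustration of why a generator should emit it mechanically rather than via locale-sensitive formatting (Python here, since the point is the format itself, not any SSMS or ADS API):

```python
from datetime import datetime

dt = datetime(2019, 7, 25, 14, 30, 0, 123000)

# isoformat() always emits the 'T' separator and never consults the
# current locale, so the literal parses the same under en-US or de-CH.
literal = dt.isoformat(timespec="milliseconds")
print(literal)  # → 2019-07-25T14:30:00.123
```

A scripted `INSERT` built from such a literal ('yyyy-MM-ddTHH:mm:ss.fff') is unambiguous to SQL Server regardless of the session's language or dateformat settings, which is exactly the portability problem described above.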
GITHUB_ARCHIVE
Although this section does not contain a true script per se, it's a good place to spend a few pages talking about some of the basics of debugging and developing shell scripts, because it's a sure bet that bugs are going to creep in! The best debugging strategy I have found is to build scripts incrementally. Some script programmers have a high degree of optimism that everything will work right the first time, but I find that starting small, on a modest scale, can really help move things along. Additionally, liberal use of echo statements to track variables, and using the -x flag to the shell for displaying debugging output, are quite useful. To see these in action, let's debug a simple number-guessing game.

    #!/bin/sh
    # hilow -- A simple number-guessing game

    biggest=100                 # maximum number possible
    guess=0                     # guessed by player
    guesses=0                   # number of guesses made
    number=$(($$ % $biggest)    # random number, between 1 and $biggest

    while [ $guess -ne $number ] ; do
      echo -n "Guess? " ; read answer
      if [ "$guess" -lt $number ] ; then
        echo "... bigger!"
      elif [ "$guess" -gt $number ] ; then
        echo "... smaller!
      fi
      guesses=$(($guesses + 1))
    done

    echo "Right!! Guessed $number in $guesses guesses."
    exit 0

The first step in debugging this game is to test and ensure that the number generated will be sufficiently random. To do this, we take the process ID of the shell in which the script is run, using the $$ notation, and reduce it to a usable range using the % mod function. To test the function, enter the commands into the shell directly:

    $ echo $(($$ % 100))
    5
    $ echo $(($$ % 100))
    5
    $ echo $(($$ % 100))
    5

It worked, but it's not very random. A moment's thought reveals why that is: when the command is run directly on the command line, the PID is always the same. When run in a script, the command is in a different subshell each time, so the PID varies. The next step is to add the basic logic of the game.
A random number between 1 and 100 is generated, the player makes guesses at the number, and after each guess the player is told whether the guess is too high or too low until he or she figures out what number it is. After entering all the basic code, it's time to run the script and see how it goes, using exactly the code just shown, warts and all:

    $ hilow
    ./013-hilow.sh: line 19: unexpected EOF while looking for matching `"'
    ./013-hilow.sh: line 22: syntax error: unexpected end of file

Ugh; the bane of shell script developers: an unexpected EOF. To understand what this message means, recall that quoted passages can contain newlines, so just because the error is flagged on line 19 doesn't mean that it's actually there. It simply means that the shell read merrily along, matching quotes (incorrectly) until it hit the very last quote, at which point it realized something was amiss. In fact, line 19 is perfectly fine:

    $ sed -n 19p hilow
    echo "Right!! Guessed $number in $guesses guesses."

The problem, therefore, must be earlier in the script. The only really good thing about the error message from the shell is that it tells you which character is mismatched, so I'll use grep to try to extract all lines that have a quote and then screen out those that have two quotes:

    $ grep '"' 013-hilow.sh | egrep -v '.*".*".*'
    echo "... smaller!

That's it: The close quote is missing. It's easily fixed, and we're ready to go:

    $ hilow
    ./013-hilow.sh: line 7: unexpected EOF while looking for matching `)'
    ./013-hilow.sh: line 22: syntax error: unexpected end of file

Nope. Another problem. Because there are so few parenthesized expressions in the script, I can eyeball this problem and ascertain that somehow the closing parenthesis of the instantiation of the random number was mistakenly truncated, as the following line shows:

    number=$(( $$ % $biggest )    # random number, between 1 and $biggest

This is fixed by adding the closing parenthesis. Now are we ready to try this game?
Let's find out:

    $ hilow
    Guess? 33
    ... bigger!
    Guess? 66
    ... bigger!
    Guess? 99
    ... bigger!
    Guess? 100
    ... bigger!
    Guess? ^C

Because 100 is the maximum possible value, there seems to be a logic error in the code. These errors are particularly tricky because there's no fancy grep or sed invocation to identify the problem. Look back at the code and see if you can identify what's going wrong. To try to debug this, I'm going to add a few echo statements in the code to output the number chosen and verify that what I entered is what's being tested. The relevant section of the code is

    echo -n "Guess? " ; read answer
    if [ "$guess" -lt $number ] ; then

In fact, as I modified the echo statement and looked at these two lines, I realized the error: The variable being read is answer, but the variable being tested is called guess. A bonehead error, but not an uncommon one (particularly if you have oddly spelled variable names). To fix this, I change read answer to read guess. Finally, it works as expected.

    $ hilow
    Guess? 50
    ... bigger!
    Guess? 75
    ... bigger!
    Guess? 88
    ... smaller!
    Guess? 83
    ... smaller!
    Guess? 80
    ... smaller!
    Guess? 77
    ... bigger!
    Guess? 79
    Right!! Guessed 79 in 7 guesses.

The most grievous bug lurking in this little script is that there's no checking of input. Enter anything at all other than an integer and the script spews up bits and fails. Including a rudimentary test could be as easy as adding the following lines of code:

    if [ -z "$guess" ] ; then
      echo "Please enter a number. Use ^C to quit"; continue;
    fi

However, a call to the validint function shown in Script #5 is what's really needed.
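For reference, here is the game with all three fixes from the walkthrough applied (closing quote restored, closing parenthesis restored, and read guess instead of read answer). The non-interactive harness at the end is mine, not the book's: it pipes every value from 0 to 100 into the game, so one guess must match and the script finishes on its own.

```shell
#!/bin/sh
# Write the corrected hilow script to a temp file, then exercise it.
cat > /tmp/hilow.sh <<'EOF'
#!/bin/sh
# hilow -- A simple number-guessing game (all three bugs fixed)
biggest=100                    # maximum number possible
guess=0                        # guessed by player
guesses=0                      # number of guesses made
number=$(( $$ % $biggest ))    # fix: closing parenthesis restored
while [ "$guess" -ne $number ] ; do
  echo -n "Guess? " ; read guess    # fix: read into guess, not answer
  if [ "$guess" -lt $number ] ; then
    echo "... bigger!"
  elif [ "$guess" -gt $number ] ; then
    echo "... smaller!"             # fix: closing quote restored
  fi
  guesses=$(( $guesses + 1 ))
done
echo "Right!! Guessed $number in $guesses guesses."
exit 0
EOF
# Non-interactive demo: feed guesses 0..100; one of them must match.
seq 0 100 | sh /tmp/hilow.sh | tail -1
```

The demo prints the final "Right!! Guessed ..." line, confirming that the debugged script terminates correctly.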
OPCFW_CODE
CMake: can't find header files

I have a directory main, which has the following subdirectories: A, B, C, D, Test. In Test, I have a CMakeLists file with the following content:

    cmake_minimum_required(VERSION 2.8)
    enable_testing()
    set(TEST_EXE_NAME test)
    add_executable(${TEST_EXE_NAME} test.cpp)
    add_test(NAME "testDatabase" COMMAND ${TEST_EXE_NAME})
    target_include_directories(Test PUBLIC ./)
    target_include_directories(Test A B C D)
    target_link_libraries(Test A B C D)

In Test, I have an executable which #includes several header files from A, B, C, and D. However, after doing make, I get the message that cmake cannot find these header files from A, B, C, and D. How can I make this go away?

A, B, C, & D appear in your question as directory names and library names, but your question asks about headers. I don't see anywhere where you include the headers for A, B, C, or D.

@JoelCornett Sorry, I meant files within A, like A.h, B.h, etc.

From your question, it is hard to see exactly what is going wrong. This is why I am going to describe how I would tackle the whole problem.

In your "directory main"

It is necessary to have a CMakeLists.txt here to be able to use the CMake targets for A-D in Test. It would look like this:

    cmake_minimum_required(VERSION 2.8)
    enable_testing()
    add_subdirectory(A)
    add_subdirectory(B)
    add_subdirectory(C)
    add_subdirectory(D)
    add_subdirectory(Test)

Note that we call enable_testing() here. This will enable you to call make test in your root build directory directly later on.

In the folders A-D

There, you create libraries for A-D. For A, for instance, you would write:

    add_library(A STATIC [... source files for A ...]) # or SHARED instead of STATIC
    target_include_directories(A PUBLIC ./)

Note that by using target_include_directories, you tell CMake to include the directories for the libraries automatically later on. This will be useful below.
In the folder Test

Now this becomes quite easy:

    set(TEST_EXE_NAME test)
    add_executable(${TEST_EXE_NAME} test.cpp)
    target_include_directories(${TEST_EXE_NAME} PUBLIC ./)
    target_link_libraries(${TEST_EXE_NAME} A B C D)
    add_test(NAME "testDatabase" COMMAND ${TEST_EXE_NAME})

Note that there is no need to set the include directories for A-D here, since CMake already knows from before that they are needed!

The first argument of target_include_directories is a CMake target, not a directory, so you should use the following code (with the assumption that ${TEST_EXE_NAME} is the target that requires headers from A, B, C, D):

    target_include_directories(${TEST_EXE_NAME} PUBLIC ./)
    target_include_directories(${TEST_EXE_NAME} PUBLIC A B C D)

Thanks for your response! When I do that, I get the message that "target_include_directories called with invalid arguments." Do you have any suggestions for that?

Try adding PUBLIC before A in the second line.

Try including A, B, C, and D in the target_include_directories() line. As was mentioned, the target_link_libraries() line should be specifying the libraries, not the headers involved.

I just tried that and updated the question to reflect that, but got the message that "target_include_directories called with invalid arguments". Do you have any other suggestions? Thanks!
STACK_EXCHANGE
There are several options that may help you to speed up your site. Try them out first.

1. Moving images to the file system

For v.3.5.x and 4.0.x: move images from the database to the file system: admin zone -> 'Images location' ('Store images in', choose 'File system'). After that modify the original .htaccess file (in the /files directory): change the code to

    <FilesMatch "\.(gif|jpe?g|png|GIF|JPE?G|PNG)$">
    Allow from all
    </FilesMatch>
    Deny from all

2. Toggling off tracking statistics

In older versions this was done by modifying a line in the customer/auth.php file.
For v.3.5.x and 4.0.x: the statistics is disabled via admin zone, 'General settings': uncheck the 'Enable tracking statistics gathering' field.
For v.4.1.x: the statistics is disabled via admin zone, 'General settings/Advanced Statistics options': uncheck the 'Enable tracking statistics gathering' field.

3. Cleaning statistics tables

For v.3.5.x and 4.0.x: in the admin back-end of x-cart: 'Summary' page -> 'Statistics clearing'.
For v.4.1.x: statistics can be cleared via admin area: 'Summary' page, section 'Tools/Statistics clearing'.

4. Optimizing sql tables

For v.3.5.x and 4.0.x: execute the following sql query for each x-cart table:

    OPTIMIZE TABLE <table_name>;

where you replace <table_name> with the name of a table. You can get the list of x-cart tables with the SHOW TABLES query.
For v.4.1.x: tables are optimized via admin area: page 'Summary', section 'Tools/Optimize tables'.

5. Generating HTML catalog

Additionally, it is advisable to create an HTML catalog. After generation of the HTML catalog, your customer zone is presented as a set of static pages linked together without actual PHP scripts execution and database queries. It may significantly lower the loading of your server.

6. Compression of html source

You may try to put the following lines in the php.ini file:

    output_handler =
    zlib.output_compression = On
    zlib.output_compression_level = 1

You can put any figure in the compression level between -1 and 9; just muck around and see what works best for your particular store.
Or, if you are using an Apache server and .htaccess files are enabled, you may try to add

    php_flag zlib.output_compression On

to the .htaccess in the store root.

7. Installing Zend Optimizer

You can install Zend Optimizer (http://zend.com/store/products/zend-optimizer.php).

8. Installing mod_deflate

You may ask your hosting administrators to install mod_deflate (Apache 2.0.x) or mod_gzip (Apache 1.3.x).

9. Slow down SE crawlers

Search engines may crawl your site and cause a performance degradation. You may slow down the robots crawling your site by adding the following lines to the robots.txt file (in the root of your web site):

    User-Agent: *
    Crawl-Delay: 10
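Step 4 above (optimizing the sql tables) can be tedious by hand. Here is a hypothetical helper, not part of X-Cart, that turns a list of table names (as returned by SHOW TABLES) into the OPTIMIZE TABLE statements described above; the table names below are illustrative.

```python
# Hypothetical helper: build one OPTIMIZE TABLE statement per x-cart table.
def optimize_statements(tables):
    """Return an OPTIMIZE TABLE statement for each table name given."""
    return ["OPTIMIZE TABLE `%s`;" % name for name in tables]

# Illustrative table names; feed it the real output of SHOW TABLES.
for stmt in optimize_statements(["xcart_products", "xcart_orders", "xcart_customers"]):
    print(stmt)
```

The generated statements can then be pasted into your SQL client or the admin area's query tool.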
OPCFW_CODE
In my earlier blog, I explored why enterprises are using Hadoop. In summary, scalable data platforms such as Hadoop offer unparalleled cost benefits and analytical opportunities (including content analytics) to enterprises. In this blog, I will mention some of the enhancements in IBM's InfoSphere Information Server 11.5 that help leverage the scale and promise of Hadoop. Data integration in Hadoop: In this release, Information Server can execute directly inside a Hadoop cluster. This means that all of the data connectivity, transformation, cleansing, enhancement, and data delivery features that thousands of enterprises have relied on for years can be immediately available to run within the Hadoop platform! Information Server is a market-leading product in terms of its data integration and governance capability. Now the same product can be used to solve some of the industry's most complex data challenges inside a Hadoop cluster directly. Imagine the time saved in moving the data back and forth from HDFS! Even more, these new features for Hadoop use the same simple graphical design environment that IBM clients have previously been accustomed to building integration applications with. In other words, organizations can build new Hadoop-based information-intensive applications without the need to retrain their development team on newly emerging languages that require manual hand coding and lack governance support. How is this accomplished? YARN! Apache Hadoop YARN is the framework for job scheduling and cluster resource management. Information Server can communicate with YARN to run a job on the data nodes of a Hadoop cluster using the following steps. - Step 1 - The conductor process manages the section leader and player processes that run on the InfoSphere Information Server engine. The conductor process on the engine tier receives a job run request for an InfoSphere DataStage or InfoSphere QualityStage job.
This job might be generated from an InfoSphere Information Analyzer analysis. - Step 2 - The conductor connects to the YARN client, which assigns an Application Master to the job from the available pool of Application Masters it maintains. If an Application Master is not available in the pool, the client will start a new one for this job. The conductor connects to the Application Master and sends the details about the resources that are required for running the job. - Step 3 - The Application Master requests resources from the YARN resource manager. The job's processes run in a YARN container, with each container running a section leader and players. The YARN container designates resource requirements such as CPU and memory. When the resources are allocated, the conductor sends the process commands for the section leader to the Application Master, which starts those commands on the allocated resources. More details can be found here. A few other capabilities offered by InfoSphere Information Server on Hadoop include: - InfoSphere Information Analyzer features are now supported, executing directly inside a Hadoop cluster. - Hadoop metadata management is made easy - HDFS file connectivity, including new data formats, additional character sets, and additional data types - Support for Kerberos-enabled clusters - YARN job browser
OPCFW_CODE
Image Copy Failing

If the issue is to do with Azure CLI 2.0 in particular, create an issue here at Azure/azure-cli

Extension name (the extension in question): image-copy-extension

I grab the latest version of the extension every time I run this process below.

Description of issue (in as much detail as possible)

I've been using this extension for a few months and it worked great, but now it is failing consistently. I'm moving an image from one RG to another which will be used as my base VM for all future VMs. The image-copy-rg group is created along with the storage account. Inside the storage account a blob container is created called snapshots, but it is empty. Log listed below; I've scrubbed the details a bit, but you should get the gist of the issue.

    az image copy --source-resource-group app-rg --source-object-name linux-app-image-20180925-182815 --target-location southcentralus --target-resource-group "base-vms"
    Getting os disk id of the source vm/image
    Creating source snapshot
    Getting sas url for the source snapshot
    Creating resource group: image-copy-rg
    Target location count: 1
    Starting async process for all locations
    southcentralus - Creating target storage account (can be slow sometimes)
    southcentralus - Creating container in the target storage account
    southcentralus - Copying blob to target storage account
    command ended with an error: ['/usr/lib64/az/bin/python', '-m', 'azure.cli', 'storage', 'blob', 'copy', 'start', '--source-uri', None, '--destination-blob', 'linux-app-image-20180925-182815_os_disk_snapshot.vhd', '--destination-container', 'snapshots', '--account-name', 'southcentralus#########', '--sas-token', 'ss=b&spr=https&sp=rwlacup&sv=2018-03-28&sig=Wc###0YC/Y8qX9FNa%2B###################&srt=sco&se=2018-09-25T19%3A48Z', '--output', 'json']

Hi, unfortunately the error itself isn't listed. Can I ask you to try and run the last command yourself? You need to join the array elements inside with a space between them, like so: az storage blob copy...
Also, what version of the extension are you using?

Hey Tamirkamara,

First off, thank you for the reply. I should be using the latest; every time I run my process, I update to the latest version.

    azure-cli (2.0.46)
    Extensions: image-copy-extension (0.0.8)
    Python (Linux) 2.7.5 (default, Jul 13 2018, 13:06:57) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

I'm wondering if this is an Azure CLI issue because you see "--source-uri None". When I find the disk SAS URL myself, I get a different error, which makes me think that None is incorrect. I've reported this to Azure because it might be an issue with their CLI update which they made recently.

    You do not have the required permissions needed to perform this operation.
    Depending on your operation, you may need to be assigned one of the following roles:
        "Storage Blob Data Contributor (Preview)"
        "Storage Blob Data Reader (Preview)"
        "Storage Queue Data Contributor (Preview)"
        "Storage Queue Data Reader (Preview)"
    If you want to use the old authentication method and allow querying for the right account key, please use the "--auth-mode" parameter and "key" value.

All, this issue is resolved. Azure CLI has an issue. I downgraded from azure-cli.x86_64 0:2.0.46-1.el7 to azure-cli-2.0.45-1.el7.x86_64 and the extension is now working as expected again. If you are facing this issue, Azure is suggesting a downgrade, as I've reported it to their development team.
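As suggested earlier in the thread, the logged argv list has to be joined back into a single command line before it can be re-run by hand. A quick sketch of doing that safely with Python's shlex; the list below is a shortened, illustrative version of the logged one, not the real command.

```python
import shlex

# Shortened, illustrative version of the argv list from the log above.
argv = ['az', 'storage', 'blob', 'copy', 'start',
        '--destination-blob', 'linux-app-image_os_disk_snapshot.vhd',
        '--destination-container', 'snapshots']

# shlex.quote() protects arguments containing &, ?, % or spaces
# (e.g. a SAS token), so the joined line is safe to paste into a shell.
command = ' '.join(shlex.quote(a) for a in argv)
print(command)
```

Running the joined command interactively surfaces the actual error message that the extension's log swallows.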
GITHUB_ARCHIVE
26.4. Error Handling

All of the popt functions that can return errors return integers. When an error occurs, a negative error code is returned. Table 26.2 summarizes the error codes that occur. Here is a more detailed discussion of each error:

POPT_ERROR_NOARG -- An option that requires an argument was specified on the command line, but no argument was given. This can be returned only by poptGetNextOpt().

POPT_ERROR_BADOPT -- An option was specified in argv but is not in the option table. This error can be returned only from poptGetNextOpt().

POPT_ERROR_OPTSTOODEEP -- A set of option aliases is nested too deeply. Currently, popt follows options only 10 levels deep to prevent infinite recursion. Only poptGetNextOpt() can return this error.

POPT_ERROR_BADQUOTE -- A parsed string has a quotation mismatch (such as a single quotation mark). poptParseArgvString(), poptReadConfigFile(), or poptReadDefaultConfig() can return this error.

POPT_ERROR_BADNUMBER -- A conversion from a string to a number (int or long) failed due to the string's containing nonnumeric characters. This occurs when poptGetNextOpt() processes an argument of type POPT_ARG_INT or POPT_ARG_LONG.

POPT_ERROR_OVERFLOW -- A string-to-number conversion failed because the number was too large or too small. Like POPT_ERROR_BADNUMBER, this error can occur only when poptGetNextOpt() processes an argument of type POPT_ARG_INT or POPT_ARG_LONG.

POPT_ERROR_ERRNO -- A system call returned with an error, and errno still contains the error from the system call. Both poptReadConfigFile() and poptReadDefaultConfig() can return this error.

Table 26.2. popt Errors

    POPT_ERROR_NOARG        An argument is missing for an option.
    POPT_ERROR_BADOPT       An option's argument could not be parsed.
    POPT_ERROR_OPTSTOODEEP  Option aliasing is nested too deeply.
    POPT_ERROR_BADQUOTE     Quotations do not match.
    POPT_ERROR_BADNUMBER    An option could not be converted to a number.
    POPT_ERROR_OVERFLOW     A given number was too big or too small.

Two functions are available to make it easy for applications to provide good error messages.

    const char * poptStrerror(const int error);

This function takes a popt error code and returns a string describing the error, just as with the standard strerror() function.
    char * poptBadOption(poptContext con, int flags);

If an error occurred during poptGetNextOpt(), this function returns the option that caused the error. If the flags argument is set to POPT_BADOPTION_NOALIAS, the outermost option is returned. Otherwise, flags should be zero, and the option that is returned may have been specified through an alias. These two functions make popt error handling trivial for most applications. When an error is detected from most of the functions, an error message is printed along with the error string from poptStrerror(). When an error occurs during argument parsing, code similar to the following displays a useful error message:

    fprintf(stderr, "%s: %s\n",
            poptBadOption(optCon, POPT_BADOPTION_NOALIAS),
            poptStrerror(rc));
OPCFW_CODE
Apple yesterday released the updated App Store Review Guidelines. Developers must follow the App Store Review Guidelines in order to get their apps published in the App Store; as the App Store is the only way through which consumers can download apps to their iOS devices, developers have to follow the rules set by Apple. Among the updated guidelines, section 4.2.7 caught my attention. Apple now allows remote desktop clients for game consoles owned by the user, provided the software appearing in the client is fully executed on the host device. The Steam Link app is a great example: games are streamed from your Steam host machine to your iOS device. While Steam Link is now allowed in the App Store, what about cloud-based game streaming services like Microsoft's Project xCloud or Google's Stadia? They are likely to get rejected if they fail to follow any one of the rules below.
- (a) The app must only connect to a user-owned host device that is a personal computer or dedicated game console owned by the user, and both the host device and client must be connected on a local and LAN-based network.
- (b) Any software or services appearing in the client are fully executed on the host device, rendered on the screen of the host device, and may not use APIs or platform features beyond what is required to stream the Remote Desktop.
- (c) All account creation and management must be initiated from the host device.
- (d) The UI appearing on the client does not resemble an iOS or App Store view, does not provide a store-like interface, or include the ability to browse, select, or purchase software not already owned or licensed by the user. For the sake of clarity, transactions taking place within mirrored software do not need to use in-app purchase, provided the transactions are processed on the host device.
Basically, Apple won't even allow xCloud or Stadia services to display a list of games that can be purchased for streaming. My guess is that Apple will bend the rules for Microsoft and Google, as it can't afford to lose a huge amount of subscription money. Imagine millions of subscribers paying $20 per month to Microsoft and Google; Apple can potentially earn $6 per month per user just for allowing these apps in the App Store. You can read about the other changes to the App Store Review Guidelines below.
- Guidelines 1.3 and 5.1.4. In order to help keep kids' data private, apps in the kids category and apps intended for kids cannot include third-party advertising or analytics software and may not transmit data to third parties. This guideline is now enforced for new apps. Existing apps must follow this guideline by September 3, 2019.
- Guideline 4.7. HTML5 games distributed in apps may not provide access to real money gaming, lotteries, or charitable donations, and may not support digital commerce. This functionality is only appropriate for code that's embedded in the binary and can be reviewed by Apple. This guideline is now enforced for new apps. Existing apps must follow this guideline by September 3, 2019.
- Guideline 5.1.3(i). Apps may use a user's health or fitness data to provide a benefit directly to that user, such as a reduced insurance premium, if the app is submitted by the entity providing the benefit and the data is not shared with a third party. The developer must also disclose to the user the specific health data collected from the device.
- Guideline 5.1.1(vii) (New). Apps that compile information from any source that is not directly from the user or without the user's explicit consent, even public databases, are not permitted on the App Store.
- Guideline 5.1.1(i). Apps must get consent for data collection, even if the data is considered anonymous at the time of or immediately following collection.
- Guideline 1.1.3.
Apps may not facilitate purchase of ammunition. - Guideline 4.2.7. Remote desktop clients now include game consoles owned by the user. Software appearing in the client must be fully executed on the host device. - Demo videos of app functionality that is geo-locked or otherwise restricted are not accepted. Developers must provide a fully functional app for review. - Sign In with Apple will be available for beta testing this summer. It will be required as an option for users in apps that support third-party sign-in when it is commercially available later this year.
OPCFW_CODE
Views - Exposed Filter for 2 Field Collections with Same Fields

I have a content type with 2 Field Collections (both set for unlimited values): Author and Editor. Both field collections use the same fields (first_name and last_name). A view that I created displays both field collections, and I would like to expose the First and Last Name filters, but they are not searchable without a relationship. That means I have to expose 2 First Name and 2 Last Name filters, each with a different relationship. How can I make a single first name filter that is exposed and searchable for both Author and Editor?

PS: I have built many other searchable Views pages, but this one is different (2 field collections with the same fields).

I agree with everything @Aaron said in his answer; however, if you don't want to restructure your view, you can search both fields with an exposed Global: Combine Fields Filter. You'll need to have both fields listed in the Fields section (you can mark them as Exclude from Display if you want). Also, if you remove one of those fields while the filter is still in place, your view will throw an error.

Note: Remember, if you want to use Global: Combine Fields Filter for taxonomy terms, referenced nodes, or field collections, you need to include a relationship to that entity and then add the search field to the field list.

Reference: How to filter by two fields using a single exposed filter in Views?

I'm using Views 7.x-3.3 and didn't see the "Global: Combine Fields Filter". Does it require an additional module, setting, or plugin? I believe my Views version is a bit too old.

I believe Views 7.x-3.5 includes the filter - but you might as well just update to the latest version.

Great answer; didn't know that existed.

You need to make the base table for your View the Field Collection table rather than the Node (content) table. You'll need to add a filter to only show the type(s) of Field Collections you wish to query.
Now you'll be querying all Field Collections of the selected type(s), regardless of whether they're considered authors or editors. Simply add exposed filters to query on the first_name and last_name fields.

The real question is: what do you want actually displayed? If you only want data attached to the Field Collections displayed, then that's easy enough. If you want to pull data from the host nodes beyond merely the NID, then you're going to end up with 2 separate relationships to the nodes (one for each field), and you're probably not going to be able to get the UI you want out of that.

A much easier approach to all of this would be to put all of the "people" related to a node in a single Field Collection field and differentiate between Author and Editor by way of an additional "type" field in the Field Collection. You could then construct very straightforward views to query across all types of related people.

Thank you for your response. I'm displaying first name and last name grouped by author and editor. Besides that, I also need to display all the other fields, including node title, body, and many others. It does require a relationship to display title and body.
STACK_EXCHANGE
The player can start building effigies by selecting one of the four types from the bloody effigies tab in the survival guide, but only if the player's sanity level drops below 90%. They consist of body parts from the cannibals (heads, torsos, arms, and legs) as well as some sticks and rocks. No matter which cannibal is dismembered and used, the effigy will always sport tribal male cannibal body parts. Their main purpose is to cause fear in cannibals and keep them away from the player, similar to how red paint works. How cannibals react to a player with red paint on is almost exactly the same as how they react to effigies. Effigies will cause most cannibals to fear the player and avoid them, though a few will still attack. This is actually deliberate on the developers' part: they built 'fuzziness' into the game's AI code, which causes some cannibals to ignore the standard AI rules. The reason for doing this is to prevent players from learning and figuring out cannibal behavior; the developers want to keep that behavior a secret to add immersion to the game. It is up to the player to decide whether effigies are worth it or not, though most players find them too inconsistent. Building an effigy won't stop the cannibals from attacking: they may approach a burning effigy with caution, run away from it, or even ignore it. If the player approaches the cannibals, they will often attack the player with complete disregard for the effigy, as approaching cannibals is deemed an act of aggression. When the developers were asked about the aggression of the AI, they stated: We don't want to spell out exactly how the ai works, but for aggression there are lots of different triggers that can affect how they behave. Getting too close, killing females in front of male members and being in certain parts of the map can all change how aggressive they are.
And there is always some 'fuzziness' built into the AI, so they don't always do the same thing each time. As the developers have added fuzziness to the AI, this would explain why cannibals sometimes ignore effigies. Here are the comments by the developers on the effects of effigies:

There are four effigy types that the player can build.

The small effigy consists of a K-shaped structure of sticks with three heads impaled on one of the upper sticks, three arms on the other, and two legs in the K center. The effigy is attached to the floor with three rocks. This K-shaped structure is also used in arm effigies. It burns for 10 minutes and breaks into 5 bones, 3 skulls, and 3 rocks.

Large effigies, also known as Big Effigies, are higher-costing effigies that were added in update v0.01 of The Forest. A large effigy can be used as a decoration. Large effigies are the most expensive effigies a player can build. One burns for 20 minutes and breaks into 14 bones, 9 skulls, and 6 rocks.

Arm effigies are effigies consisting mostly of arms; they were added in update v0.03 of The Forest. An arm effigy consists of a K-shaped structure of sticks with seven arms impaled on the top. The effigy is attached to the floor with three rocks. This K-shaped structure is also used in small effigies. It burns for 10 minutes and breaks into 7 bones and 3 rocks.

The Custom Effigy is a custom variant of effigies that the player can customize; they were added in update v0.25 of The Forest. A Custom Effigy can be the most simple or advanced effigy in the game. Once crafted, you can add to it, making it more effective; the more that is attached to the custom effigy, the more effective it can be. Custom effigies can be used as decorations, as they have a lot of customization features. Custom effigies replaced simple effigies in update v0.25; simple effigies were essentially just a head on a stick. Custom Effigy burning time is determined by the number and types of items attached to it.
It burns for 10 minutes, but each additional Arm, Leg, Head, Skull, or Bone increases its burning time by 65 seconds and each Torso increases burning time by 50 seconds, while additional Sticks don't increase burning time. The custom effigy is the most cost-effective effigy: while the Arm Effigy burns for only 10 minutes, a custom effigy with the same amount of materials burns for 17:35 minutes. The maximum burning time for an effigy is 20 minutes, so adding a large amount of materials is unnecessary. The "cheapest" effigy can be built with 10 bones and/or skulls and will burn for 20 minutes (the same as the expensive Large effigy). After burning, it only breaks into as many bones and skulls as limbs and heads, respectively, have been used for building.

Version history:

v1.0 - Effigy tab can only be accessed with sanity less than 90%. Setup dynamic signals / effigies / family effigies book pages. Fixed can see through heads on effigies small and large.
v0.62 - Fixed destroying unlit effigy while targeting the light trigger leaving the icon active.
v0.54 - Fixed custom effigy costing 2 rocks instead of 3.
v0.46 - Added second skull to two headed effigy when broken apart.
v0.40 - Custom effigy parts now properly collapse along the effigy.
v0.38 - Fixed custom effigy not working after being damaged and repaired. Fixed ghost custom effigy rendering issues.
v0.37 - Enemy skull bag effigy added more pick up skulls. Added moss to enemy placed effigies.
v0.35 - Custom effigy can now always be lit by everyone when there's no limb attached but will have an effect area of 0.
v0.31 - Added bones & skulls to custom effigy materials.
v0.27 - Fixed issue with custom effigy automatically lighting up.
v0.26d - Increased custom effigy base range and per limb and torso bonus range. Fixed some glitches in enemy effigy effect reactions.
v0.25 - Custom Effigy added to the game. Simple Effigy removed from the game. New player-made effigy: Custom Effigy replaces 'Simple Effigy' in book (once built, use sticks to add to the base shape and/or limbs and torsos to create effigies of your own; range and duration of the repulsive effect when lit depends on the amount of added limbs). Effigies now require holding the "Take" button for 0.5s to set alight. Lowered big effigy burn duration to 20 minutes (was 40).
v0.20 - Enemies now react properly to effigies.
v0.15 - Fixed bug where arm effigy wouldn't be lightable.
v0.08D - Effigies can burn the player and animals. Cannibal effigies can be destroyed by the player.
v0.04 - Fixed all effigies turning into arm effigy on load.
v0.03 - Arm Effigy added to the game. Simple Effigy added to the game. Fixed small effigy construction shader being too faint. Two new effigies buildable, simple head on stick and arms pointing to sky.
v0.01 - Large Effigies added to the game. Small Effigies added to the game. Cannibal and some player effigies added to the game.
Why does my 'if' code with != '' return true when I input 0, which is supposed to be a falsey value?

Newbie learning Python here. Doing a course online and ran into some trouble: why does my 'if' code with != '' return True when I input 0, which is supposed to be a falsey value? Here is the code:

    print('Enter a name')
    name = input()
    if name != '':
        print('Thank you for entering a name, ' + name)
    else:
        print('You didn\'t enter a name')

The output is:

    Enter a name
    0
    Thank you for entering a name, 0

How would this be false?

> 0, which is supposed to be a falsey value

You aren't testing the general truthiness of name; you're testing whether it is a blank string. To put it another way, False isn't equal to '' either. Whether something is falsey is only pertinent if you're casting it to a boolean, and no such cast happens here. bool(int(name)) would become False with name = '0', as would bool(''). What are valid names?

You don't have 0, you have '0', a string of length 1. Any non-empty string is considered true:

    >>> name = input()
    0
    >>> name
    '0'
    >>> len(name)
    1
    >>> type(name)
    <class 'str'>
    >>> bool(name)  # truth value
    True

Because it is a string with a single character in it, it is also not equal to '', the empty string, and your if name != '': test passes. You have a string because in Python 3, input() always returns strings. If you had an int object with the value 0, then yes, it would be false-y:

    >>> falsey = 0
    >>> type(falsey)
    <class 'int'>
    >>> bool(falsey)
    False

You'd have to convert your string to an integer:

    >>> int(name)
    0
    >>> type(int(name))
    <class 'int'>
    >>> bool(int(name))
    False

However, converting to an integer doesn't make much sense when you are asking for someone's name. A name is text, not a number. So, unless you have a clear, specific reason to further validate what the user entered to check that the value is an actual name, there is no point in testing for the number 0 here. Just stick to what you have, perhaps by just using:

    if name:
        # name is not empty

This is all true, but even if they had 0, it wouldn't be equal to ''.

What happens if I input int(0) instead of 0?

Yes, which is why their actual code outputs `Thank you..., 0`.

How do I make it return false?

@LseLibrary: Please be clearer about what you expect to happen. You could just test for name != '0', but that's not a very sensible test, for example.

So you are saying there isn't a way to make it return false, the way the code is now?

@LseLibrary: Put differently: why do you need to test for '0'?

@LseLibrary: name != '0' would return false when you enter 0. But again, that's not a very good test if all you want to know is whether the user has entered a name. And who's to say that '0' is not a name?

The input() function takes in your 0 as a name, and then your name is '0', not ''. A more common way of doing this would be:

    if len(name) > 0:
        print('Thank you for entering a name, ' + name)
    else:
        print('You didn\'t enter a name')

or just if name:.

Backticks are meant for code segments of less than a line. Four-space indents should be used for full-line or multi-line code blocks -- see my edit. That way you get syntax highlighting, and the colored background extends all the way to the right edge.

@LseLibrary: if name:, if name != '': and if len(name) > 0: all test the same thing, if name is always a string. But if name: is the Pythonic form there.

Try this:

    import re
    name = input()
    if re.match('[a-zA-Z]+', name):  # returns None if no match, a match object otherwise
        print("Thank you for entering your name, {}.".format(name))
    else:
        print("Oops! You didn't enter a valid name.")
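To make the distinction in the answers concrete, here is a runnable sketch contrasting string truthiness with integer truthiness; the `is_blank` helper is just for illustration and is not part of the original question:

```python
def is_blank(name: str) -> bool:
    """True when the user entered nothing; input() always returns a str."""
    return name == ""  # for strings, equivalent to `not name`

# '0' is a non-empty string, so it is truthy and not equal to ''
assert bool("0") is True
assert is_blank("0") is False

# the *integer* 0 is falsey, but input() never hands you an int
assert bool(int("0")) is False

# the empty string is the only falsey str value
assert bool("") is False
assert is_blank("") is True
```

All three tests from the thread (`if name:`, `if name != '':`, `if len(name) > 0:`) agree on every string, which is why the answers treat them as interchangeable.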
Release notes discussed: https://trino.io/docs/current/release/release-352.html No new release to discuss yet, except that 353 will be around the corner to fix a low-impact correctness issue that came out in 352: https://github.com/trinodb/trino/pull/6895. So we’ve covered a lot on the Trino Community Broadcast to build our way up to tackling this pretty big subject in the space called dynamic filtering. If you haven’t seen episodes five through nine, you may want to go back and watch those for some context for this episode. Episode eight actually diverted to the Trino rebrand, so we won’t discuss that one. For the recap: in episode five, we spoke about Hive partitions. In order to save you time when you run a query, Hive stores data under directories named by the values of the data written underneath that directory. Take this directory structure for the orders table partitioned by the orderdate column:

orders
├── orderdate=1992-01-01
│   ├── orders_1992-01-01_1.orc
│   ├── orders_1992-01-01_2.orc
│   ├── orders_1992-01-01_3.orc
│   └── ...
├── orderdate=1992-01-02
│   └── ...
└── ...

When querying for data under January 1st, 1992, according to the Hive model, query engines like Hive and Trino will only scan ORC files under the orders/orderdate=1992-01-01 directory. The idea is to avoid scanning unnecessary data by grouping rows based on a field commonly used in a query. In episodes six and seven, we discussed a bit about how a query gets represented internally in Trino once you submit your SQL query. First, the parser converts SQL to an abstract syntax tree (AST) format. Then the planner generates a different tree structure called the intermediate representation (IR) that contains nodes representing the steps that need to be performed in order to answer the query. The leaves of the tree get executed first, and each parent node depends on the actions of its children completing before it can start. Finally, the planner and cost-based optimizer (CBO) run various updates on the IR to optimize the query plan until it is ready to be executed.
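The partition-directory layout described for episode five can be mimicked in a few lines of Python to show why partitioning saves work; the file names and the `files_to_scan` helper are illustrative, not Trino internals:

```python
# Hypothetical partition listing for the orders table: one directory per
# orderdate value, each holding ORC files.
partitions = {
    "orderdate=1992-01-01": ["orders_1992-01-01_1.orc", "orders_1992-01-01_2.orc"],
    "orderdate=1992-01-02": ["orders_1992-01-02_1.orc"],
    "orderdate=1992-01-03": ["orders_1992-01-03_1.orc"],
}

def files_to_scan(partitions, orderdate):
    """Only files under the matching partition directory are scanned."""
    key = f"orderdate={orderdate}"
    return partitions.get(key, [])

# A query filtering on orderdate = '1992-01-01' touches one directory,
# not the whole table.
assert files_to_scan(partitions, "1992-01-01") == [
    "orders_1992-01-01_1.orc",
    "orders_1992-01-01_2.orc",
]
```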
To sum it all up, the planner and CBO generate and optimize the plan by running optimization rules. Refer to chapter four of Trino: The Definitive Guide, pg. 50, for more information. In episode nine, we discussed how hash joins work, first drawing an analogy to nested-loop joins. We then discussed how it is advantageous to read the inner loop into memory to avoid a lot of extra disk calls. Since it is ideal to read an entire table into memory, you likely want to make sure the table that is built in memory is the smaller of the two tables. This smaller table is called the build table; the table that gets streamed is called the probe table. We discussed a bit how hash joins work, which is a common mechanism to execute joins in a distributed and parallel fashion. Another nomenclature akin to build and probe tables is dimension and fact tables, respectively. This nomenclature comes from the star schema in data warehousing. Typically, large tables called fact tables live at the center of the schema. These tables typically have many foreign keys and a number of quantitative or measurable columns describing an event or instance. The foreign keys connect these big fact tables to smaller dimension tables that, when joined, provide human-readable context to enrich the recordings in the fact table. The schema ends up looking like a star with the fact table at the center. In essence, you just need to remember that when someone describes a fact table they are saying it is a bigger table that is likely going to end up on the probe side of a join, while a dimension table is more likely a candidate to fit into memory on the build side of a join. So let’s get on to dynamic filtering, shall we? First, let’s cover a few concepts about dynamic filtering, then compare some variations of this concept. Dynamic filtering takes advantage of joins between big fact tables and smaller dimension tables.
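The build/probe mechanics recapped above can be sketched as a toy in-memory hash join; the table names and rows are made up for illustration:

```python
def hash_join(build_rows, probe_rows, build_key, probe_key):
    """Toy hash join: load the small (build/dimension) table into a hash
    table, then stream the large (probe/fact) table past it."""
    table = {}
    for row in build_rows:                       # build phase: fits in memory
        table.setdefault(row[build_key], []).append(row)
    out = []
    for row in probe_rows:                       # probe phase: streamed
        for match in table.get(row[probe_key], []):
            out.append({**match, **row})
    return out

# dimension (build) side and fact (probe) side
items = [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]
sales = [{"item_id": 1, "qty": 3}, {"item_id": 1, "qty": 5}, {"item_id": 9, "qty": 1}]

joined = hash_join(items, sales, "id", "item_id")
assert len(joined) == 2                          # item_id 9 has no match
assert all(row["name"] == "widget" for row in joined)
```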
What makes this filtering different from other types of filtering is that you use the smaller build table, loaded at query time, to generate a list of values that exist in the join column between the build table and the probe table. We know that only values matching these criteria are going to be returned from the probe side, so we can use this dynamically generated list as a pushdown predicate on the join column of the probe side. This means we are still scanning this data, but only sending the subset that answers the query. We can look at the blog written for the original local dynamic filtering implementation by Roman Zeyde for more insights on the original implementation of dynamic filtering before Raunaq’s changes. Local dynamic filtering is definitely beneficial, as it allows skipping unnecessary stripes or row groups in the ORC or Parquet reader. However, it works only for broadcast joins, and its effectiveness depends upon the selectivity of the min and max indices maintained in ORC or Parquet files. What if we could prune entire partitions from the query execution based on dynamic filters? In the next iteration of dynamic filtering, called dynamic partition pruning, we do just that. We take advantage of the partitioned layout of Hive tables to avoid generating splits for partitions that won’t exist in the final query result. The coordinator can identify partitions for pruning based on the dynamic filters sent to it from the workers processing the build side of the join. This only works if the query contains a join condition on a column that is partitioned. With that basic understanding, let’s move on to the PR that implements dynamic partition pruning! In this week’s pull request, https://github.com/trinodb/trino/pull/1072, we return with Raunaq Morarka and Karol Sobczak.
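Before diving into the PR, the core idea can be sketched in a few lines: collect the surviving join-key values from the build side, then use that set to prune probe-side partitions. The helper names and data below are hypothetical illustrations, not Trino code:

```python
def build_dynamic_filter(build_rows, key, predicate):
    """Collect the distinct join-key values that survive the build-side
    filter; only these values can appear in the join result."""
    return {row[key] for row in build_rows if predicate(row)}

def prune_partitions(partitions, dynamic_filter):
    """Drop probe-side partitions whose partition value cannot match the
    dynamic filter (no splits are even generated for them)."""
    return {v: files for v, files in partitions.items() if v in dynamic_filter}

# build side: only items priced over 1000 survive
items = [{"id": 1, "price": 1500}, {"id": 2, "price": 200}, {"id": 3, "price": 9000}]
df = build_dynamic_filter(items, "id", lambda r: r["price"] > 1000)
assert df == {1, 3}

# probe side: sales table partitioned on item_id, one split list per value
sales_partitions = {1: ["s1.orc"], 2: ["s2.orc"], 3: ["s3.orc"], 4: ["s4.orc"]}
assert prune_partitions(sales_partitions, df) == {1: ["s1.orc"], 3: ["s3.orc"]}
```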
This PR effectively brings in the second iteration of dynamic filtering, dynamic partition pruning, where instead of relying on local dynamic filtering we collect dynamic filters from the workers in the coordinator and prune out extra splits that aren’t needed, using the partition layout of the probe-side table. A query like the one below, seen in Raunaq’s blog about dynamic partition pruning, shows that if we partition on the join column (ss_sold_date_sk in the blog's TPC-DS example) we can take advantage of this information by sending it to the coordinator.

SELECT COUNT(*)
FROM sales
JOIN items ON sales.item_id = items.id
WHERE items.price > 1000;

Below we show how the execution of this would look in a distributed manner if you partitioned the sales table on item_id. This is a visual reference for those listening in on the podcast:

1. The query is sent to the coordinator to be parsed, analyzed, and planned.
2. All workers get a subset of the items (build) table and each worker applies the filter items.price > 1000.
3. All workers create a dynamic filter for their item subset and send it to the coordinator.
4. The coordinator uses the dynamic filter list to prune out splits and partitions that do not overlap with the DF, and submits splits to run on workers.
5. Workers run splits over the sales (probe) table.
6. Workers return final rows to be assembled into the final result on the coordinator.

For this PR demo, we have set up one r5.4xlarge coordinator and four r5.4xlarge workers in a cluster, with an sf100-size TPC-DS dataset. We will run some of the TPC-DS queries and perhaps a few others. The first query we run through is TPC-DS query 54. With this query, we are using the hive catalog pointing to AWS S3, with AWS Glue as our metastore. We initially disable dynamic filtering, then compare it to the times when dynamic filtering is enabled. Without dynamic filtering we find the query runs in about 92 seconds, while with dynamic filtering it runs in 42 seconds.
We see similar findings for the semijoin we execute below, and discuss some implications of how the planner actually optimizes the semijoin:

/* turn dynamic filtering on or off to compare */
SET SESSION enable_dynamic_filtering=false;

SELECT ss_sold_date_sk, COUNT(*)
FROM store_sales
WHERE ss_sold_date_sk IN (
    SELECT ws_sold_date_sk
    FROM (
        SELECT ws_sold_date_sk, COUNT(*)
        FROM web_sales
        GROUP BY 1
        ORDER BY 2
        LIMIT 100
    )
)
GROUP BY 1;

Latest training from David, Dain, and Martin (now with timestamps!): If you want to learn more about Trino, check out the definitive guide from O'Reilly. You can download the free PDF or buy the book online. Music for the show is from the Megaman 6 Game Play album by Krzysztof Słowikowski.
I want to create a snowboard/ski simulation that is as realistic as practical. I get the basics of the game engine, but I don’t have much experience with it. I’m probably going to have to write some Python for all this. I’ve written a couple of programs with Python long ago, but nothing related to Blender. I’ve found a few resources about the math behind skiing and carving through snow. As I do more research, I’ll keep this thread updated. So my first question is, how do I find out what direction is downhill? I assume it is not very difficult to get the normal of the mesh at a given point. So if I place a ball anywhere on a mesh, how do I find out which way it will start rolling? Knowing which way is downhill, or the direction of the forces of gravity and the resistance of the ground, is very important. My first idea is to have a tiny sphere that can roll around on a mesh, parent my model to that, keep the plane of the snowboard parallel to the ground, and keep the center of mass of the snowboarder directly above the center of the ball. From there I can add more detail. Any help with this simulation is appreciated, and feel free to talk about skiing and snowboarding. (I’m hoping to get out this week and do some cross-country.)

Use the object’s rayCast function to test a direction: check 22 degrees (or a smaller grade) in two directions (assuming all mountains go down on the X-axis). If one direction returns a collision and the other doesn’t, then the other direction is downhill (and then you’ll know).

That idea with the tiny sphere should work. One ray already provides the necessary information.

Awesome, thanks for the info on rays. That will come in handy. I just read an article, “Physics of Skiing: The Ideal-Carving Equation and Its Applications” by U.D. Jentshura and F. Fahrbach from the U. of Freiburg. If you understand physics and you like skiing, it might be a good read on its own and could even improve your performance.
I’ve also ordered some books from the library on ski physics and should be getting them soon. I think I’ll write up a paper on the whole subject. Back to rays: does the raySource have to be a lamp or other emitter? Am I correct that area lamps send all parallel rays? Would using a mesh emitter be better? Is this how you would trace a bullet in a shooter?

Lamps don’t inherently send rays; light calculation and rays are entirely unrelated. Any object can send rays, and if you use Python they can cast the ray from any point to any point, or in any direction. If you use the logic brick it will cast the ray along the axis of your choice. If you’re using a character, I recommend casting the ray from the parent object (i.e. the physics bounds), in the character’s local -Z direction. You can also use the same ray to align the character to the surface, so the skis/board are flat to the ground.
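As an alternative to probing with two rays, the downhill direction can be computed directly from the surface normal (which a single ray hit already gives you) by projecting gravity onto the tangent plane. This is a plain-Python sketch of the math, not Blender/BGE API code:

```python
import math

def downhill(normal, gravity=(0.0, 0.0, -9.81)):
    """Project gravity onto the plane defined by the surface normal:
    d = g - (g . n) n. The result points in the steepest-descent
    direction along the surface, and is zero on flat ground."""
    nx, ny, nz = normal
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / mag, ny / mag, nz / mag      # normalize the normal
    gx, gy, gz = gravity
    dot = gx * nx + gy * ny + gz * nz              # g . n
    return (gx - dot * nx, gy - dot * ny, gz - dot * nz)

# Flat ground: no downhill component at all.
flat = downhill((0.0, 0.0, 1.0))
assert all(abs(c) < 1e-9 for c in flat)

# A 45-degree slope whose normal leans toward +X: the ball rolls
# toward +X and down in -Z.
s = math.sqrt(0.5)
d = downhill((s, 0.0, s))
assert d[0] > 0 and abs(d[1]) < 1e-9 and d[2] < 0
```

In the game engine you would feed the hit normal from the ray into this and get the roll direction for the sphere without any extra ray tests.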
Backfiring pattern? Imagine this scenario:

    public class LauncherPresenter extends RxPresenter<LauncherActivity> {
        protected void onCreate(Bundle savedState) {
            super.onCreate(savedState);
            SettingsModel settings = Settings.getInstance(view).readSavedSelectedUsageMode();
            if (settings != null && settings.getMode() != SettingsModel.NONE) {
                switch (settings.getMode()) {
                    case SettingsModel.ONE:
                        getView().ShowOnething();
                        break;
                    case SettingsModel.TWO:
                        getView().ShowAnotherThing();
                        break;
                }
            } else {
                getView().ShowSomethingCompletlyDifferent();
            }
        }
    }

Using nucleus I'd imagine you want to handle this from the Presenter. You load a saved setting from your chosen persistence library, check what setting was loaded, and then tell the activity it should show something specific. Now the problem here is that this can only take place in the onTakeView() method of nucleus, because before that the Activity has not yet been created. Would it be correct to implement this code in onTakeView? Or would you rather just implement it in the Activity altogether? I'd rather keep being consistent in using presenters in all activities.

I think that the Presenter should not tell the view to show something. The Presenter is not a controller. MVC does not work on Android, see the "best practices" section. I don't know what your Settings object is doing, but it seems to me that you can put all this code into your activity. Yes, you can use onTakeView to push data from presenter to view if that data is needed every time a view is attached.
If code such as the one above is supposed to be placed in the activity, it means I should decouple the presenter from that specific activity, which would break the MVP pattern. Doesn't the MVP pattern say you should couple model->presenter->view to display something and, the other way around, view->presenter->model to store something? You probably know more than me, but I know this for sure: there will be cases when the presenters will not be used for anything, which strikes me as odd :)

Reading preferences is not a task that deserves a presenter. This is a one-line main-thread action. Yes, I use a presenter only if I have background tasks to handle or if my database should be split off from the view. Overengineering is a real problem on big projects, which is why not every view should have one.

I'll try to stick with that reference :)

You can create an observable from preferences if you wish and use deliver() to send data to the view.

If one is to be able to make a base class that allows easy interaction with both activities not using presenters and activities using presenters, I think a new annotation would have to be added in that case :) If the RequiresPresenter annotation is used to tell that an activity needs a presenter, then it should be possible to also tell with an annotation that it does not need one; else you'll have to use two base classes for your activities, one that extends NucleusActivity and one that does not :)

If you make a project-specific base activity then just don't say that it requires a presenter. :-D

Then how will I make an activity, extended from that base activity, that DOES require a presenter?
:D I tried the following:

    public class BaseActivity<PresenterType extends nucleus.presenter.Presenter>
            extends NucleusActionBarActivity<PresenterType> {
    }

But when I removed the RequiresPresenter annotation from the activities extending from it, I just got an error that I needed to declare a Presenter for the activities using that base activity :) So that didn't seem to work :)

Ok, I will check this. Thank you for the report.

Done! :) 1.1.0
So I’ve been trying to train to become a Linux sysadmin for a while, and while I have some idea of what that training would entail, I don’t have all the pieces, so to speak. I started out building a fairly powerful i5 6500 machine with 16 GB of RAM, a 1 TB HDD and two 128 GB SSDs. I installed Ubuntu 16.04 onto the 1 TB HDD, and as I wasn’t comfortable modifying my bare-metal OS, I installed VirtualBox and started playing around with VMs. As I got more acquainted with console-based applications like Vim, I eventually moved on to doing whole minimal installs of Ubuntu 16.04. I stopped using display managers like GDM and started X solely with startx. I got it to where the only applications I had installed were i3, rxvt-unicode, vim, htop and feh. And now I’m at this point where I’m not sure where/how to keep progressing in terms of sysadmin training. I’m still not a sysadmin, but from what I understand, building skills like using the command line and vim at every opportunity is important for a sysadmin (correct me if I’m wrong). I can use console commands and vim without thinking, basically; I’m extremely comfortable with them. The only places I can think of left to go are to keep training with the command line and vim, learn bash scripting and learn Python. Although I’m really starting to crave some of the more juicy tasks, like virtualization for instance. I’d like to start doing small, basic projects that are hard to screw up, in addition to further training, even though I honestly don’t know much about virtualization or what I’d want to do with it. I’d like a bit of a guiding hand here. I’ve done a lot on my own and there is still plenty of stuff for me to learn, but I do fear that eventually I’m gonna come up on a dead end and not know what to do with all the training I’ve accrued. What can I do with the machine that I have?
It’s the only machine I have and I can’t afford to get another one as a server, which is why I’m looking into virtualization, but I don’t know what I want to do with that virtualization, if that makes sense. I’d like to reiterate that my machine has an i5 6500 CPU with 16 GB of RAM, a 1 TB HDD and two 128 GB SSDs. Anyway, I hope that this clarifies what my problem is. Mostly I’m just looking for more experienced people to talk to.

I agree with what @SgtAwesomesauce said. You have a lot to work with. If you’re in the mood for learning some virtualization on Linux, I think KVM is a great start. You can do some simple networking, build a DHCP server for your home using Ubuntu, Debian, or CentOS. Make a DNS server or two, too. Setting up a LAMP or LEMP stack is great advice as well. Spend a week doing one of each, and do it 20+ times. Make sure that you don’t have to reference the wiki or documentation anymore. Go for the extra credit too: obscure the Apache/Nginx version info, so if someone attempts to see the root path they can’t see what version you’re running. Have multiple site domains. Use Node instead of PHP. Set up Let’s Encrypt. Force the site to go HTTPS. You can do a LOT with LAMP/LEMP. Set up Wordpress, Ghost, NextCloud, Roundcube, file servers, VPN… The possibilities are endless. Get an RHCSA/RHCE book. They go over a lot of Linux server projects, because that’s essentially what the test is. You make changes to the server and reboot. If the tasks are complete and the server boots, you get a (passing) score. Learning about KVM, libvirt and a bit of OpenStack (devstack is your friend) is always good. OpenStack is very similar to AWS in architecture and is a good free way to get exposure to it. Nginx is really the new industry standard. It’s important to know it. It’s pretty easy to configure though. You should know ufw, firewalld and iptables from a security perspective.
You should know how to use SSH keys and how to do some more advanced stuff (SSH jump hosts, bastion servers, SFTP config) with SSH. Knowing different filesystems (and relevant tuning parameters) is a plus. A bit of understanding of Java (JRE) gotchas and tuning parameters is helpful. A lot of companies use proprietary programs written in Java, and knowing how to optimize the runtime environment around them can go a long way to making users happy! Even if you don’t get hired for networking, you’ll need a knowledge of it. Understand how VLANs and subnets work at the bare minimum. @anon79053375, it might be fun to make a wiki page for this sort of stuff. A “so you want to be a sysadmin, eh?” sort of page.

There’s tons of stuff you can do! Start off with setting up various services (email, HTTP, FTP, SSH, DNS, etc.) and look into how to configure those to be as secure as possible. I’m a big advocate of setting up a service and seeing if there’s any way you can exploit the service you just set up. There are lots of great resources on setting up your own pentesting lab. Learning containerization software such as LXC or Docker can go a long way too. I’d definitely look into virtualization as well: ESXi if you have the money and Proxmox if you don’t. Another thing to do is to download images for intentionally exploitable OSes and figure out a) how to exploit those vulnerabilities, and b) how to fix those vulnerabilities. https://www.vulnhub.com/ is a good start for this. Also, get involved with the security community in your area. Dunno where you’re located, but I’m sure there’s at least some kind of security meet-up group in your area. Look into attending security conferences that are either cheap or free. BSides is a good one if you have a conference near you.

Thanks for all the great suggestions everyone, so I think I have an idea of what I want to do. I’m gonna do a clean install on my 1 TB HDD (not sure what distro, probably Ubuntu), install KVM and set up a couple of servers.
I’m getting into KVM as a kind of necessity because I’m stuck with my one machine. Regarding pen testing, is there anything stopping me from, say, setting up a working DHCP server in KVM, finding some kind of security risk in it, exploiting it, trying to bring it down, and then trying to fix the security risk? I know that isn’t sysadmining specifically, but if I can get pentesting skills from exploiting the server I just built as well as sysadmin skills from setting it up, I figured I might as well. I’ve been curious about pen testing and I like killing two birds with one stone.

But it’s the cloud though! So vague and mysterious. It’s all-knowing and all-powerful. If you run your code in the cloud, factorial problems become linear ones, all thanks to the magical transistors the cloud consists of. At least that’s what their brochure says.

So I was thinking about reinstalling Ubuntu, but Fedora’s really speaking to me, as its repository is considerably more up to date than Ubuntu’s and it’s stable, and that’s honestly what I want out of a daily driver: a balance between stability and bleeding edge. Given my experience with CentOS, and since Fedora’s documentation looks outstanding, I’m gonna give a Fedora minimal install a go.
Recent advancements in Internet, web and communication technologies cut across many areas of modern-day living and have enabled the interconnection of every physical object, including sensors and actuators. Web-enabled smart objects empower innovative services and applications for different domains and improve the utilization of resources. In this paper, we propose an interoperable Internet-of-Things (IoT) platform for a smart home system using a Web-of-Objects (WoO) and cloud architecture. The proposed platform controls home appliances from anywhere and also provides the home's data in the cloud for various service providers' applications and analysis. Firstly, we propose a Raspberry Pi based gateway for interoperability among various legacy home appliances and different communication technologies and protocols. Secondly, we bring the smart home appliances to the web and make them accessible through the Representational State Transfer (REST/RESTful) framework. Thirdly, we provide a cloud server for smart homes to store the homes' data, given the low storage capabilities of a gateway, and make the data available to various application service providers and for analysis. In the proposed smart home platform, we implement water-tank control using Zigbee communication, automatic door security using a normal camera as an IP camera, and web connectivity for different home devices for web-based control. We aim to reduce human intervention, secure access control to home devices from anywhere, provide smart home data for application services as well as for analysis, and improve the utilization of resources.

5. Conclusion and Future Work

In this paper, we presented an interoperable Internet-of-Things platform for a smart home system using WoO and cloud.
The proposed architecture provides interoperability among legacy devices and communication protocols, and also provides an access interface for users to access home devices from anywhere. We provided a RESTful smart home system that assigns a unique URI to each sensor reading and actuator event to reduce the processing at the web server; it also provides interoperability among devices. In the proposed architecture, we provide web access to legacy home devices through the smart home gateway. The gateway provides interoperability among legacy devices such as the water pump and tank control, lights and fan control, and the door security control. Moreover, the gateway aggregates sensor and actuator data and stores it in the cloud for application services and for the user's history. Using HTTP communication, the web application serves as a web client that provides a user interface to check and alter the status of the user's home appliances. This new idea has been developed and tested for different functionalities of smart home services. Furthermore, the architecture can be extended to various smart building scenarios such as factories, offices, and smart grids.
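The one-URI-per-resource design described above can be sketched as a tiny in-memory dispatcher; the URIs, payloads, and the `handle` function are all hypothetical illustrations, not the paper's implementation:

```python
# Hypothetical registry: one unique URI per sensor reading and per
# actuator, as in the RESTful design described above.
devices = {
    "/home/sensors/water-tank/level": {"value": 72, "unit": "%"},
    "/home/actuators/door/lock": {"state": "locked"},
}

def handle(method, uri, body=None):
    """Minimal REST-style dispatcher: GET reads a resource, PUT alters
    an actuator's state. Returns (status code, resource)."""
    if uri not in devices:
        return 404, None
    if method == "GET":
        return 200, devices[uri]
    if method == "PUT":
        devices[uri].update(body or {})
        return 200, devices[uri]
    return 405, None

# Read the water-tank level, then remotely unlock the door.
assert handle("GET", "/home/sensors/water-tank/level") == (200, {"value": 72, "unit": "%"})
status, resource = handle("PUT", "/home/actuators/door/lock", {"state": "unlocked"})
assert status == 200 and resource["state"] == "unlocked"
assert handle("GET", "/home/sensors/unknown") == (404, None)
```

Because every reading and event has its own URI, a gateway or cloud front end can route requests with a plain lookup instead of parsing device-specific protocols.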
Work Package 2: Global dynamics of climate variability and change

This work package aims to further our understanding of global climate dynamics with the overall aim of improving regional climate predictions. Many dynamical patterns of variability - such as the El Niño Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO), and the Pacific Decadal Oscillation (PDO) - have near-hemispheric or global impacts on regional climate. This work package aims to critically examine the performance of the latest climate models in reproducing these dynamical patterns, and the fidelity of their remote teleconnections to surface climate and the extreme events which impact society. With a focus on East Asia and Europe, we aim to develop dynamical analysis tools and diagnostics to evaluate how well the latest generation of climate models simulates observed climate variability. In parallel with this, we will also assess the fidelity and predictability of these patterns in initialised monthly to decadal climate predictions. The ultimate aim of this work package is to improve climate predictions of extreme events in Europe and Asia.

Summary of ongoing work on WP2 at the Met Office and with Chinese partners

Cooperative analysis of climate model variability and teleconnections

In association with Chinese partners (CMA and IAP), we are jointly analysing our respective climate models, in free-running mode, to assess the fidelity of the large-scale patterns of climate variability. In addition to the simulation of the patterns themselves, we are also evaluating the associated teleconnections to surface climate impacts, with a particular focus on East Asia and Europe. In Year 1 we are particularly focussing on identifying key common model errors in patterns of variability and surface climate teleconnections. In Year 2 we will be examining the mechanisms in the ocean and atmosphere that lead to these key model errors, and how model resolution affects these errors.
One of the key tools to achieve this is the Met Office (and CMA) initialised seasonal and decadal predictions. By examining these initialised forecasts it will be possible to determine the impact of model biases on simulated patterns of variability and their remote teleconnections. We will also examine how quickly these model errors develop and whether there is any overshoot or damped oscillatory behaviour. Predictability of regional climate and patterns of variability: running parallel to the analysis of the fidelity of model patterns of variability and teleconnections, we are also examining their predictability using initialised hindcasts from the Met Office seasonal and decadal systems. In this work we are also interested in the optimal design of forecast systems, such as ensemble size and the inherent signal-to-noise ratio in the forecast systems versus reality. For example, an exploratory piece of work on the predictability of East Asian summer rainfall was carried out in Year 1. Unprecedented extreme events - probing what is dynamically possible: starting in Year 2, the Met Office will also be exploring how unusual dynamical situations could give rise to extreme events that we are yet to observe. Each ensemble member, from each start date (in both our seasonal and decadal prediction systems), is a potential realization of reality (limited, of course, by the fidelity of the model simulation). We therefore have a very large dataset to mine for extreme regional climate impacts. If unprecedented events (those beyond the observed range of variability) are found, then we can examine the dynamical situation that generated them and hence learn what is possible in a 'perfect storm' situation.
List of Year 1 academic partner projects: - University of Exeter (Mat Collins/Sam Ferrett with CMA collaborator Hong-Li Ren) WP2.1 - Regional climate and modes of variability: El Niño Southern Oscillation (ENSO) - University of Edinburgh (Massimo Bollasina) WP2.2 - Teleconnections: TEACliM - Teleconnections over EurAsia in Climate models - University of Reading (Andy Turner) WP2.3 - Predictability of regional climate and modes of variability: subseasonal to decadal predictability in East Asia - University of Southampton (Aurelie Duchez) WP2.4 Dynamics of regional extremes. Science highlights from Year 1: - Analysis of GloSea5 seasonal forecast skill for East Asian summer rainfall - Li et al 2015, in prep. Significant levels of skill (r=0.76) over the Yangtze River catchment are present in the GloSea5 hindcasts. This high skill, compared to previous forecast systems, appears to be due to improved modelling of the SST variability over the tropical western Pacific and its associated teleconnection to East Asian climate. A paper is in the final stages of preparation for submission (Li et al., 2015). - A coordinated analysis of climate variability in UK and Chinese climate models, carried out jointly by UK and Chinese scientists.
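The skill values quoted above (e.g. r = 0.76 over the Yangtze River catchment) are correlation coefficients between hindcasts and observations. As an illustrative sketch (toy numbers, not GloSea5 data), the measure can be computed as:

```python
from math import sqrt

def pearson_r(forecast, observed):
    """Pearson correlation between hindcast and observed series -
    the 'r' behind skill statements like r = 0.76."""
    n = len(forecast)
    mf = sum(forecast) / n
    mo = sum(observed) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(forecast, observed))
    sf = sqrt(sum((f - mf) ** 2 for f in forecast))
    so = sqrt(sum((o - mo) ** 2 for o in observed))
    return cov / (sf * so)

# Toy rainfall anomalies (arbitrary numbers, not real hindcast data):
obs = [1.0, -0.5, 0.3, 0.8, -1.2]
fcst = [0.8, -0.3, 0.5, 0.6, -0.9]
print(round(pearson_r(fcst, obs), 2))
```

In practice such scores are computed on area-averaged seasonal anomalies across all hindcast years, but the statistic itself is just this correlation.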
Part 1 on The Plausibility of Life. Darwin is famous for convincingly arguing that natural selection can explain why living things have features that are well-matched to the environment they live in. In the popular consciousness, evolution is often thought of as natural selection acting on random mutations to produce the amazing tricks and traits found in the living world. But "random mutation" isn't quite right - when we describe evolution like this, we pass over a key problem that Darwin was unable to solve, a problem which today is one of the most important questions in biology. This key problem is the issue of variation, which is what biologists really mean when they talk about natural selection acting on random mutations. Variation and mutation are not the same thing, but they are connected. How they are connected is the most important issue covered in Kirschner and Gerhart's The Plausibility of Life. It is an issue Darwin recognized, but couldn't solve in those days before genetics really took off as a science. Natural selection really works on organisms, not directly on mutations: a particular cheetah survives better than other cheetahs because it can run faster, not because it has a DNA base 'G' in a particular muscle gene. A domesticated yeast can survive in a wine barrel because of how it metabolizes sugar, not because of the DNA sequence of a metabolism gene. I know what you're thinking: this is just a semantic game over proximal causes. But this is not just semantics, it is a real scientific problem: what is the causal chain that leads from genotype to phenotype, that is, from an individual organism's DNA sequence, mutations included, to the actual physical or physiological traits of the complete organism? If you look around your office or your home, you're bound to see natural variation in phenotype in the form of your coworkers or even your family. We're all different, but what accounts for those differences? How much is genetic, and how much is environment?
Or how much is due to the environment acting on the genetics? For evolution to work, certain genes for success have to be preferentially passed on to the next generation, but for centuries, biologists were not able to look at genes directly. Ingenious biologists probed the properties of genes by looking for mutants: flies with white eyes, or bread mold that could not make certain amino acids. During the 20th century, brilliant geneticists worked out a great theory of heredity, explaining the patterns of genetic inheritance, without really knowing what genes were physically made of, or how mutations physically occurred. Now, in the era of torrents of cheap DNA sequence data, we can know better than ever what kinds of random mutations or sexual shuffling of genes take place inside cells. Identifying the genotype of an organism is now trivial, but we still don't really understand how that genotype - the combinations of many genes, with many mutations - comes together to produce a unique individual. There are several fields of biology focused on this problem. Two of the most important are quantitative genetics and systems biology. Some cheetahs run faster than others, but it's not just one gene that makes a difference; most likely several different genes are involved in producing different cheetah running speeds. The same is true of human height: we don't just have tall people and short people; we see a range of heights in the human population. How many genes are involved in this range of phenotypes? What kinds of mutations are involved? Those are the kinds of questions asked in quantitative genetics. One key idea to keep in mind is that we're only looking at genes that vary in a population, genes with "mutations" or (more technically) polymorphisms: where I have an 'A' in my DNA, you may have a 'G' (or some other type of mutation).
Some genes do not vary: in cheetahs, there may be an absolutely critical gene involved in running speed, but if it is 100% identical in all cheetahs, then it is not responsible for the differences in cheetah running speed. Quantitative geneticists are only interested in the genes that can be different in different individuals. So quantitative geneticists look for variation in nature, such as differences in running speed or height, or the ability to form spores in yeast, and then they use the tools of genetics, statistics, and DNA sequencing to find the genes. They may find, for example, that variants of six different genes in a cheetah population are responsible for almost all of the differences in running speed. Quantitative geneticists are gene finders: they find the genes and mutations involved in producing the physiological differences between individuals. Once you have the (currently hypothetical) six different genes responsible for the differences in cheetah running speed, the next problem is to understand how those genes actually work together inside of a cell. This has classically been the work of biochemists and molecular biologists, who studied what the various physical pieces of a cell do. But now we are running into some limitations of this classical approach. First, many biochemists and molecular biologists have only studied one gene or protein at a time. This is great for understanding how that one protein works, and it is absolutely necessary work. Yet if we have six varying genes working together to make cheetahs run, we want to know how those six genes work in concert, not as individuals. And second, even though molecular biologists and biochemists have often gone beyond single proteins, and studied chains of interacting proteins in an information processing pathway or a metabolic system, these pathways and systems are often so complex that verbal, intuitive reasoning isn't enough to understand how they work. We need mathematical models.
This is where systems biology comes in. Recently, a group at the Rockefeller University analyzed how a set of genes works together when a yeast cell commits to copying its DNA. It turns out that a positive feedback loop is involved, which drives the cell forward through the process of cell division, and prevents the cell from sliding back into its previous, non-DNA-copying state. Some aspects of this positive feedback loop can be understood by verbal reasoning, but a deeper understanding comes from the mathematical model. And in this case, the modeling produces new ideas about how the system should work, which researchers can then test. Coming back to The Plausibility of Life, we can now see the central issue: how genes (and the random mutations of them) produce the variation in nature that is directly responsible for how well an organism does. How genetic variation produces phenotypic variation in an organism is now one of the central problems in biology, one that we are at last well-equipped to tackle. Fields like quantitative genetics and systems biology rely heavily on technology: genetic technology in the lab, DNA-sequencing technology, and the number-crunching technology that makes desktop computers faster than the supercomputers of several decades ago. Darwin would have been envious. This is the somewhat delayed first installment of a series of posts on an interesting recent book by the accomplished biologists Marc Kirschner and John Gerhart. In this book, the authors lay out what they see as the most important research agenda for molecular biologists in the 21st century. The next installments are below:
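To see why a positive feedback loop creates a committed, switch-like decision, here is a toy model of my own devising (not the Rockefeller group's actual model): a regulator that stimulates its own production settles into either a stable "off" or a stable "on" state depending on where it starts, and cannot slide back.

```python
# Toy bistable switch: a hypothetical "commit" regulator x stimulates its own
# production (the Hill term) and decays linearly. All parameters are invented
# for illustration only.
def step(x, dt=0.01, basal=0.05, vmax=1.0, K=0.5, n=4, decay=1.0):
    dx = basal + vmax * x**n / (K**n + x**n) - decay * x
    return x + dt * dx  # simple forward-Euler integration step

def settle(x0, steps=20000):
    """Integrate long enough for x to reach a steady state."""
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

low = settle(0.0)   # starts below the threshold: stays in the "off" state
high = settle(1.0)  # starts above the threshold: locks into the "on" state
print(round(low, 3), round(high, 3))
```

The two stable states (one near zero, one near one) are what "commitment" means dynamically: once the loop pushes the regulator past the unstable middle point, decay can no longer pull the system back to the off state.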
All of this week, I've spent time brainstorming ideas about my application. I've tried to keep loose during this time and follow every train of thought as it happened. At the end of the week, I assembled all 8 A4 pages together to create a map that explored various aspects of my project. Some of the things I considered during the week were what my inspirations were for my app, how some of the screens might look and what interactions would be implemented, how the branding might look, and the "happy path" of one of my personas, as well as creating questions to ask myself based on design tips from InVision (part 1, part 2). Overall it was a successful week that helped me put my thoughts into perspective and visualise what I would need to consider when creating my screen states next week. Considering Avatar branding. Looking at the differences between non-linear and linear user journeys. While sketching these out I realised that my different personas would have different requirements. A free fledgling would opt for a non-linear user journey, whereas bright beginners and anxious amateurs would lean towards having a linear user journey. There were pros and cons to each, but I had to prioritise which of my users' needs were more important. Ultimately, I leant more towards structured, linear journeys as they seemed to be the norm for other learning experiences. Breaking down LittleBits and seeing how they could be applied to teaching beginners coding languages. I made sure to pay attention to LittleBits when researching potential ways of replicating things digitally for beginners. I broke down how they could represent different parts of code, i.e. batteries = the basics or the setup of code. This gave me the idea of working with "blocks of code" that I would eventually move on to play with, and take forward as one of my potential experiences. Early app icon brainstorming, originally drawing ideas of using the owl from the book Curiosity.
A study on the various gestures I could utilise with a smartphone app. Thinking about how to use search to help users reach their learning goals, or at least find their starting points towards working towards their learning goal. Considering how applications could be linked together, i.e. having a companion app on a tablet device which would act as the output, with the phone acting as a text editor. Exploring how to make user onboarding as simple and engaging as possible, so that learners can identify their interests when they first begin using the app, and how to make the app personal to the user. Making the app personal, I feel, is important to make the user feel more at home and ultimately works towards creating a better user experience. Planning the information architecture and the user journey, and also deciding which parts of the application to focus on prototyping. Rough sketches of what the overview page of the app would look like. Downloading content to phones seemed like an important issue to think about, since applications ultimately work better if they can function as normal with or without an internet connection; the ability to download content seemed like a solution to this.
Provide examples of how to restrict pairing as a device (TZ-222) When I remove the device from the network using the coordinator I get ESP_ZB_NWK_LEAVE_TYPE_RESET but then it immediately rejoins the network when I restart the device (and only if I restart the device). This is annoying during development because my coordinator/Home Assistant lets the device automatically reconnect even when not being requested to add new devices and it's hard to get it to clear out the old endpoint information. It's also bad for security because the device should never unexpectedly connect to someone else's network. Please provide an example of how to make the device: Identify if there has been a network configured on startup (without any RF communication) - this is necessary to implement "automatically pair only if there's no network configured" behaviour Rejoin the configured network on startup (and only that network - no fallback to new/other networks) Connect to a new network only when requested to do so (e.g. by a GPIO) Delete network information after ESP_ZB_NWK_LEAVE_TYPE_RESET occurs (without restarting) Connect to a network after ESP_ZB_NWK_LEAVE_TYPE_RESET occurs and the network information has been deleted - i.e. a new "pairing" request (currently I have to restart to do this) Hi @nomis , Identify if there has been a network configured on startup (without any RF communication) - this is necessary to implement "automatically pair only if there's no network configured" behaviour Rejoin the configured network on startup (and only that network - no fallback to new/other networks) Connect to a new network only when requested to do so (e.g. by a GPIO) The esp_zb_get_short_address() provides a quick way to identify the network configured on start-up. If there is no network configured, the return value will be 0xfffe. 
Delete network information after ESP_ZB_NWK_LEAVE_TYPE_RESET occurs (without restarting). Connect to a network after ESP_ZB_NWK_LEAVE_TYPE_RESET occurs and the network information has been deleted - i.e. a new "pairing" request (currently I have to restart to do this). For this issue, could you please consider referring to the following code for implementation in your end device example?

case ESP_ZB_ZDO_SIGNAL_LEAVE: {
    esp_zb_zdo_signal_leave_params_t *leave_params =
        (esp_zb_zdo_signal_leave_params_t *)esp_zb_app_signal_get_params(p_sg_p);
    if (leave_params && leave_params->leave_type == ESP_ZB_NWK_LEAVE_TYPE_RESET) {
        esp_zb_nvram_erase_at_start(true);  /* erase previous network information */
        esp_zb_bdb_start_top_level_commissioning(ESP_ZB_BDB_MODE_NETWORK_STEERING);  /* steer to a new network */
    }
    break;
}

When debugging this it's important to be aware that routers on the network may remain in pairing mode for up to 250 seconds, so it can look like the device is able to rejoin a network on its own (with previous configuration information) but the network is actually still in pairing mode.
* Identify if there has been a network configured on startup (without any RF communication) - this is necessary to implement "automatically pair only if there's no network configured" behaviour
Checking esp_zb_get_short_address() != 0xffff in ESP_ZB_ZDO_SIGNAL_SKIP_STARTUP works to determine if there is a network configured.
* Rejoin the configured network on startup (and only that network - no fallback to new/other networks)
I have not yet been able to test that it will only use the configured network because I'll need to set up a second coordinator.
* **Connect to a new network only when requested to do so (e.g. by a GPIO)**
I can do this by deciding when to call esp_zb_bdb_start_top_level_commissioning().
* Delete network information after `ESP_ZB_NWK_LEAVE_TYPE_RESET` occurs (without restarting)
This happens automatically but I can't do it manually without restarting.
I don't know why you're suggesting esp_zb_nvram_erase_at_start(true) when that's documented to only affect esp_zb_start() which has already been called. * Connect to a network after `ESP_ZB_NWK_LEAVE_TYPE_RESET` occurs and the network information has been deleted - i.e. a new "pairing" request (currently I have to restart to do this) I can do this by deciding when to call esp_zb_bdb_start_top_level_commissioning() again. I'd like to implement a "leave network" button, is there a way to do esp_zb_factory_reset() without a system reset? This would need to work even while attempting to connect to a new/existing network. I have not yet been able to test that it will only use the configured network because I'll need to set up a second coordinator. I've now verified that it won't change network (when the configured network is unavailable) even if there's another coordinator accepting new devices. I'd like to implement a "leave network" button, is there a way to do esp_zb_factory_reset() without a system reset? Instead of calling esp_zb_factory_reset() without a system reset, you can choose to use esp_zb_zdo_device_leave_req() to initiate the device's departure from the network. This action will result in erasing the network configuration information of device and the signal ESP_ZB_NWK_LEAVE_TYPE_RESET will be issued. This works properly if I'm connected but not if I'm in the process of connecting. If I do the following, I get a ESP_ZB_NWK_LEAVE_TYPE_RESET signal followed immediately by ESP_ZB_BDB_SIGNAL_STEERING that claims I'm on network 00:00:00:00:00:00:00:00 (0xffff) as device 0xfffe on the last channel that was used (or 255 if there hasn't been a connection since boot). The false network is not retained after a restart. 
esp_zb_bdb_start_top_level_commissioning(ESP_ZB_BDB_MODE_NETWORK_STEERING);
/* wait 2 seconds */
esp_zb_zdo_mgmt_leave_req_param_t param{};
esp_zb_get_long_address(param.device_address);
param.dst_nwk_addr = 0xffff;
esp_zb_zdo_device_leave_req(&param, nullptr, nullptr);

Identify if there has been a network configured on startup (without any RF communication) - this is necessary to implement "automatically pair only if there's no network configured" behaviour. If the device possesses network configurations, the stack will facilitate the device in rejoining the configured network. It will then exclusively issue the ESP_ZB_BDB_SIGNAL_DEVICE_FIRST_START signal to users. Conversely, if the device lacks network configurations, the stack will attempt to locate a network and subsequently dispatch the ESP_ZB_BDB_SIGNAL_STEERING signal to users. There are other side effects when combining this with esp_zb_zdo_device_leave_req(). If I have used esp_zb_zdo_device_leave_req() on a previous boot and then use esp_zb_start(true) on startup, then there is a long delay with no signals at all (9 to 40 seconds) before I finally receive ESP_ZB_ZDO_SIGNAL_LEAVE and then ESP_ZB_BDB_SIGNAL_DEVICE_FIRST_START with a status of -1. This time the device short address is 0xfffe. It's as if it's still trying to leave the previous network on startup before it'll do anything else. @nomis , Regarding the issue, do you have any further topics you'd like to discuss or address? Let me know if there's anything else you need assistance with regarding this matter. I've already made comments https://github.com/espressif/esp-zigbee-sdk/issues/66#issuecomment-1679527878 and https://github.com/espressif/esp-zigbee-sdk/issues/66#issuecomment-1681093679 that haven't been addressed: When leaving there must be no previous state left that will impact the next startup/join. It should be possible to leave at any time, including while joining. That's not possible without side effects.
Creating Code For Simplicity but Logic For Need. In my previous article we were making a news app. We completed adding a data source to our application. In this article we will work on the application UI. At the end of this article our application will be ready for publishing in the store. This article includes: In simple words, an Application Section is a page shown to the user. We can have a maximum of six Application Sections in our app. It is of the following two types: A Panorama is a list of sections that a user can browse by swiping left or right in the app. Sections are used for displaying various parts of an app. A Panorama Control is the first thing one sees when one opens an app. One of the exciting features of App Studio is data binding. In data binding we maintain our data in one data source and then we connect all our UI elements or application elements with that data source. Binding a data source to any section of an app is very easy. We will see this later in this article. Call Phone: It can be used for adding call functionality on any number. Email: This action sends an email to the address we mentioned. Nokia Music: Actions related to the Nokia Music library, like displaying artist info, search and play. HERE Maps - Directions: Opens a map showing directions from the current user location to the entered address. HERE Maps - Address: Locates an address entered on the HERE map. Styles involve the application's color combination settings. They include: Accent brush: The color of the application header. Background brush: The color of the application background. It can be an image or a color. Foreground brush: The same as the font color or application text color. Application bar brush: The color of the application bar. In this section we configure the tiles for our application. Windows Phone 8 supports three types of tile templates. Cycle template: A tile image cycles through a list of up to 9 images.
For this template to work, our app must have at least one data column of image type in any collection data source. Flip template: As the name implies, the tile flips between front and back. Both front and back can have various images as well as data. Iconic template: It is used for showing an icon on the tile. It also allows additional data. This tile only supports icons; adding images may result in a completely white tile. Completing our App: So we have added a data source to our application. When we added the data source, we also added the section. For RSS data sources a detail page is also added automatically in addition to a section page. Now we need to bind the RSS data source to our section. To do this: Click on the newly added section page. Let us say "News". Select the layout of your choice. We have 9 choices. Each choice has its own use. The last two layouts are useful for an image gallery and the rest depend on your taste. I'm choosing the second layout for this app. In the header section enter your header title "News". To bind an item title with the RSS data, click on the right box, then select "Data" and then "Title". Repeat the preceding step for the item subtitle and item image as well. Now click on "Save changes". Click on "Info page". Enter a title and choose the layout for this page. This is our detail page. It will be opened when the user clicks on any news item on the main page. Bind the header with Context.Title. Repeat the preceding step for Content and Image as well. Click on "Save changes". Now we are done with the internals. The next task is to set the look and feel. Click on "Configure App Style". For this app keep the settings at the defaults, or you can change them as you wish. Click on the "Tiles" tab and select "Flip template". Click "Edit". Fill in the required info, like the tile title, tile image and tile content. Click on "Save" (check mark). Now click on the "Splash & lock" tab.
Upload the splash screen background and lock screen background images and click "Next". (If you don't change the images then your app will be rejected during store certification.) That's it! We are done. Click on "Generate" and then download your app. From this article we saw how easy it is to create apps for Windows Phone. Now you are an app developer, so start showing your creativity and use this tool to convert your ideas into reality. Earn some rewards by submitting your apps here. Register here and add your developer account. Submit some apps and get your favourite reward.
Data leakage with keras train/test split. I'm trying to build a very basic CNN to do multiclass image classification and getting a little stuck on one of the first steps: splitting the data! Following a YouTube tutorial I initially created a dataset using tf.keras.utils.image_dataset_from_directory() then split it into train/valid/test using .skip() and .take(). While the model worked great, I noticed that the test set changed each time I used it (even if I only had 1 batch). My understanding is that with this method, every time you iterate over all the data, it reshuffles and redraws all of the samples. So, 1. Is this then a source of data leakage? In that as you train the CNN, at each epoch it redraws the samples and hence the model has already seen the test set? As a result I decided to try creating a separate directory for test data that I don't touch and just doing the train/validation split. Now, from reading online I realised I could just do this using the validation_split keyword... However, that brings up the second question: if I do the split using validation_split (Method 1 below) I only get validation accuracy up to about 0.5 during training, whereas if I do the skip(), take() method (Method 2) I can get up to 0.95... I'm clearly doing something different with these two methods but can't see it. Could anyone explain what it is? And which method is better?
## METHOD 1 ##
validation_split = 0.2
train1 = tf.keras.utils.image_dataset_from_directory(
    train_dir,
    validation_split=validation_split,
    subset="training",
    seed=RANDOM_STATE)
val1 = tf.keras.utils.image_dataset_from_directory(
    train_dir,
    validation_split=validation_split,
    subset="validation",
    seed=RANDOM_STATE)
train1 = train1.map(lambda x, y: (x/255., y))
val1 = val1.map(lambda x, y: (x/255., y))

## METHOD 2 ##
data = tf.keras.utils.image_dataset_from_directory(train_dir)
# Scale the pixel data to between 0 and 1
data = data.map(lambda x, y: (x/255., y))
# Split into train, validation and test samples
n_batchs = len(data)
train_size = int(n_batchs*0.8)
val_size = int(n_batchs*0.2)
# if rounding causes sizes to be less than the amount of data, add the spare data to the training set
total_size = train_size + val_size
if total_size < n_batchs:
    train_size += n_batchs - total_size
train2 = data.take(train_size)
val2 = data.skip(train_size).take(val_size)

Thank you so much for any help! To avoid splitting the data differently each time you execute the code, you set the seed to a static value. Thereby, the random step becomes deterministic, and the data split will stay the same. In Method 2, you load the complete dataset with image_dataset_from_directory without setting shuffle. The default for shuffle is True. So each time you load the dataset, the images get randomly drawn, and then you split it into train/val. In this case you have dataset leakage, as sometimes images are in train and sometimes they are in val. This is bad and explains the high validation accuracy in Method 2, as the model had already seen these images in training. You can avoid this in Method 2 if you either set a fixed seed, or you set shuffle to False and shuffle train and val separately. Personally, I like Method 1 better, as you have 2 different datasets from the start, where you can apply different transformations if you want, or shuffle only the training set but not the val set, for example.
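The reshuffle-on-each-pass behaviour behind this leakage is easy to demonstrate with a toy tf.data pipeline (a sketch with integer "samples", independent of the image data):

```python
import tensorflow as tf

# Toy stand-in for an image dataset: 10 samples that reshuffle on every pass,
# mimicking image_dataset_from_directory's default shuffle=True behaviour.
data = tf.data.Dataset.range(10).shuffle(10, reshuffle_each_iteration=True)
train = data.take(8)
val = data.skip(8)

# Each pass over `val` re-draws the shuffle, so the "validation" samples can
# differ every epoch and overlap with samples already seen in `train`: leakage.
val_epoch1 = sorted(int(x) for x in val)
val_epoch2 = sorted(int(x) for x in val)
print(val_epoch1, val_epoch2)  # often two different pairs of samples

# One fix: shuffle once (fixed seed, no per-iteration reshuffle) so that
# take()/skip() always carve out the same train/val elements.
fixed = tf.data.Dataset.range(10).shuffle(10, seed=42,
                                          reshuffle_each_iteration=False)
val_fixed = fixed.skip(8)
```

The same idea applies to the real image dataset: freeze the shuffle before splitting, or split at the directory level as in Method 1.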
You could also load this in only 1 call:

train1, val1 = tf.keras.utils.image_dataset_from_directory(
    train_dir,
    validation_split=validation_split,
    subset="both",
    seed=RANDOM_STATE)

This way, you can be 100% sure to use the same dir, seed and val_split value for both train and val datasets. ahhhh of course! It's the same reason I was having trouble with the test set originally. Thank you very much for the explanation. And thank you for the tip about doing it in one call, didn't know that!
Problem using ObjectID in MongoDB Compass. I am learning to use MongoDB; I have created a cluster in the cloud at cloud.mongodb.com, and I connect to it with MongoDB Compass v1.22.1. I am trying to learn some basic commands, and I am trying to select items from my collection using the find() command to filter by id. I have tried what I have seen being referenced everywhere, like:

db.recipes.find({_id: ObjectID("5e877cba20a4f574c0aa56da")});

or

db.recipes.find({'_id': ObjectID("5e877cba20a4f574c0aa56da")});

And I always get the output:

ReferenceError: ObjectID is not defined
    at evalmachine.<anonymous>:5:10
    at evalmachine.<anonymous>:7:3
    at Script.runInContext (vm.js:134:20)
    at Object.runInContext (vm.js:297:6)
    at ElectronInterpreterEnvironment.sloppyEval (C:\Users\lfili\AppData\Local\MongoDBCompass\app-1.22.1\resources\app.asar\node_modules\@mongodb-js\compass-shell\lib\index.js:140827:28)
    at Interpreter.<anonymous> (C:\Users\lfili\AppData\Local\MongoDBCompass\app-1.22.1\resources\app.asar\node_modules\@mongodb-js\compass-shell\lib\index.js:210735:41)
    at step (C:\Users\lfili\AppData\Local\MongoDBCompass\app-1.22.1\resources\app.asar\node_modules\@mongodb-js\compass-shell\lib\index.js:210685:19)
    at Object.next (C:\Users\lfili\AppData\Local\MongoDBCompass\app-1.22.1\resources\app.asar\node_modules\@mongodb-js\compass-shell\lib\index.js:210615:14)
    at C:\Users\lfili\AppData\Local\MongoDBCompass\app-1.22.1\resources\app.asar\node_modules\@mongodb-js\compass-shell\lib\index.js:210587:67
    at new Promise (<anonymous>)

If I don't use ObjectID, like:

db.recipes.find({'_id': "5e877cba20a4f574c0aa56da"});

I get no error, but there is no output, because I guess the _id is not "5e877cba20a4f574c0aa56da" but ObjectID("5e877cba20a4f574c0aa56da"). I don't know why I can't use ObjectID in the Compass MongoSH; any help would be welcome. Thank you. It is ObjectId, not ObjectID. Thank you.
I have tried and it worked, it is strange because the output in >_MongoSH is in the format _id: ObjectID("5e877cba20a4f574c0aa56da"), so I just copied it from there. For newer versions you have to use db.recipes.find({_id: ObjectId("5e877cba20a4f574c0aa56da")}); If you're using an older version before 1.10.x you can use: db.recipes.find({"_id":{"$oid":"5e877cba20a4f574c0aa56da"}}); Thank you! I was using the format for the newer version, the problem as prasad_ pointed out was that I was using ObjectID instead of ObjectId, because I have copied it from the output in the console and it outputs as ObjectID instead of ObjectId.
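Beyond the ObjectId/ObjectID spelling, note that the plain-string query matched nothing silently. A cheap client-side sanity check can catch malformed id strings before a query runs; the helper below is my own illustrative sketch (not part of any MongoDB driver), based on the fact that a serialised ObjectId is exactly 24 hexadecimal characters:

```python
import re

# Hypothetical helper: a serialised MongoDB ObjectId is 24 hex characters,
# so a quick format check can flag obviously invalid ids up front.
OBJECT_ID_RE = re.compile(r"[0-9a-fA-F]{24}\Z")

def looks_like_object_id(s):
    return bool(OBJECT_ID_RE.match(s))

print(looks_like_object_id("5e877cba20a4f574c0aa56da"))  # True
print(looks_like_object_id("not-an-object-id"))          # False
```

This only validates the format, of course; the shell still needs the value wrapped in ObjectId(...) so that it is compared as an ObjectId rather than as a string.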
STACK_EXCHANGE
Limitations of using sequential IDs in Cloud Firestore

I read in a Stack Overflow post (link here) that:

By using predictable (e.g. sequential) IDs for documents, you increase the chance you'll hit hotspots in the backend infrastructure. This decreases the scalability of the write operations.

I would like it if anyone could explain better the limitations that can occur when using sequential or user-provided IDs.

Did you read the answer there and also click through to the discussion on Google's google-cloud-firestore-discuss mailing list? I don't think it's going to get any more detailed than that.

ohh unintentionally skipped that!, thanks.

Cloud Firestore scales horizontally by allocating key ranges to machines. As load increases beyond a certain threshold on a single machine, it will split the range being served by it and assign it to 2 machines. Let's say you just started writing to Cloud Firestore, which means a single server is currently handling the entire range. When you are writing new documents with random IDs and we split the range into 2, each machine will end up with roughly the same load. As load increases, we continue to split into more machines, with each one getting roughly the same load. This scales well.

When you are writing new documents with sequential IDs and you exceed the write rate a single machine can handle, the system will try to split the range into 2. Unfortunately, one half will get no load, and the other half the full load! This doesn't scale well, as you can never get more than a single machine to handle your write load. In the case where a single machine is running more load than it can optimally handle, we call this "hot spotting". Sequential IDs mean we cannot scale to handle more load. Incidentally, this same concept applies to index entries too, which is why we warn sequential index values such as timestamps of now as well.

So, how much is too much load?
We generally say 500 writes/second is what a single machine will handle, although this will naturally vary depending on a lot of factors, such as how big a document you are writing, the number of transactions, etc. With this in mind, you can see that smaller, more consistent workloads aren't a problem, but if you want something that scales based on traffic, sequential document IDs or index values will naturally limit you to what a single machine in the database can keep up with.

This should really be in the docs. I just spent quite some time refactoring my implementation to use sequential IDs (app instance ID + "_" + local SQLite rowId) instead of generated ones, only to stumble upon this question now.

Sorry to hear that @Actine. We mention it in best practices: https://cloud.google.com/firestore/docs/best-practices - had you stumbled upon that in the docs and it wasn't clear, or did you not find that section at all? — I never examined the cloud.google.com docs, only the Firebase docs.

@Actine - it's also there: https://firebase.google.com/docs/firestore/best-practices. Dropping a note to our tech writing team, as it is a bit hidden in a collapsible menu.

Oh yeah, sorry, somehow missed that. Anyway, it wasn't a problem to revert. I don't think I'll ever run into those scaling problems though. But this also makes me wonder what the performance implications could be if I'm structuring my data into per-user subcollections as /users//tasks/, where each user can basically only access their own tasks. Will Firestore split data between users first, or split each user's subcollection when it comes to that?

@DanMcGrath could you please elaborate on that part: "Incidentally, this same concept applies to index entries too, which is why we warn sequential index values such as timestamps of now as well". Especially in connection with this article (https://buildkite.com/blog/goodbye-integers-hello-uuids) that says: "Non-time-ordered UUIDs [..] are not sequential. [..]
values will not be clustered to each other in a DB index, [..] inserts will be performed at random locations. This random insertion can negatively affect the performance on common index data structures such as B-tree and its variants"
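The splitting behaviour described above can be illustrated with a toy simulation (a hypothetical sketch, not Firestore code): split a key range at its median key, then see where newly written documents land.

```python
import random
import string

def random_id(n=8):
    """A stand-in for Firestore's auto-generated document IDs."""
    return "".join(random.choice(string.ascii_lowercase) for _ in range(n))

def split_load(ids, pivot):
    """How many writes land on each half after splitting a range at `pivot`."""
    left = sum(1 for i in ids if i < pivot)
    return left, len(ids) - left

# Existing documents determine where the range gets split (at the median key).
random_ids = sorted(random_id() for _ in range(10_000))
seq_ids = sorted(f"doc{i:08d}" for i in range(10_000))
pivot_rand = random_ids[5_000]
pivot_seq = seq_ids[5_000]

# New writes arriving after the split:
new_rand = [random_id() for _ in range(1_000)]
new_seq = [f"doc{i:08d}" for i in range(10_000, 11_000)]

print(split_load(new_rand, pivot_rand))  # roughly (500, 500): both machines share the load
print(split_load(new_seq, pivot_seq))    # (0, 1000): one machine takes every write
```

With random IDs, new writes fall on both sides of any split point; with sequential IDs, every new write lands past the highest existing key, so one half of the split stays idle while the other stays hot.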
STACK_EXCHANGE
I have lost, corrupted, deleted or otherwise damaged Lotus Notes files. Will this software restore my data? SecureRecovery for Lotus Notes is a safe, effective and secure way to restore Lotus Notes files. While success rates are high overall, the program’s results can vary from one case to the next depending on various factors, including the size of the affected files and the location of file corruption. The best way to determine whether this utility will work for your Lotus Notes files is to download the demo version. The demo is completely free and will identify recoverable files, evaluate corruption, and show you the strings that it can restore in each form. How is the demo version different from the licensed version of SecureRecovery for Lotus Notes? The demo version will not fully recover your Lotus Notes files. Instead, it will recover about 20 strings (dependent on the number of strings in the original form) and replace the other strings with “demo” placeholder text. The placeholder text is used to identify unrecoverable strings -- if a form or another element is missing in the demo output, it cannot be recovered by the full-featured version of the software. All licensed versions of SecureRecovery for Lotus Notes are free from output limitations. I want to purchase a license. Which licensing option is right for me? You will need to review our Licensing page to choose the right option. Does this application support command line functionality, and can I create batch files to automate the data recovery process? Yes, SecureRecovery for Lotus Notes supports command line usage, but you will need either the Service or the Enterprise license in order to access this feature. Use the following call: Angle brackets are not needed. When you access the tool through the command line, you can use standard patterns; ‘*’ replaces groups of symbols, while ‘?’ replaces individual symbols. 
Remember to create a directory for the recovered Lotus Notes files before running the tool from the command line (this also applies to any batch files you create to automate SecureRecovery for Lotus Notes). I successfully ran SecureRecovery for Lotus Notes. The program created a folder with a batch file and one or more DXL scripts. How do I access these in Lotus Notes? You will need to process each DXL script. While you can perform this procedure manually, the batch file can perform all of the necessary processes. SecureRecovery for Lotus Notes uses a Database Creation Utility, which provides an effective alternative to manual re-insertion of all of the recovered data. Run this wizard and follow all of the steps for an easy import. I receive error messages when loading my recovered files, or I cannot rebuild my database. How can I fix this problem? First, make sure to run the batch file. It is located in the same folder as the DXL scripts and is called runme.cmd. This should create the database automatically, starting from scratch. Make sure to submit a valid path to the notes.exe file. Do not put spaces into the path. If you see error messages while processing the batch file and you are unable to create a new file, try reinstalling Lotus Notes. My Lotus Notes files appear smaller in size after the recovery process. Is this a serious problem? Not typically. This is a normal result, and it occurs because of the way that SecureRecovery for Lotus Notes treats corrupt or unrecoverable data. Instead of recreating unusable information, it replaces the damaged areas with blank space. Additionally, some recoveries will omit certain features of your Lotus Notes files if these features are unsupported or if data corruption is severe. It is also crucial to note that the demo version of SecureRecovery for Lotus Notes will only recover a controlled number of strings. 
It replaces additional forms with placeholder text, so the files created by the demo are usually much smaller than your original files.
OPCFW_CODE
ccSimpleUploader is a very basic plugin for the TinyMCE 3.x platform. In its current form, it is not a file manager or manipulator of any kind. It simply allows a user to browse their computer and upload a file using a PHP script. The script can be invoked directly from the TinyMCE editor, or from the context of the AdvImg and AdvLink plugins.

- Easy to configure and integrate with TinyMCE
- Uploads files to the server using PHP file upload. Note: There are currently no restrictions on the types of files that can be uploaded, which poses a security threat! This is how I wanted to have it; you can change that by modifying the uploader.php script if you like.
- Integrates with the AdvImg and AdvLink plugins

- Download and unzip the ccSimpleUploader plugin into your 'tiny_mce/plugins' directory
- Modify your tinyMCE init function (this can be in its own js file, or in the header of the HTML file that hosts the editor):
1. Add the plugin to the plugins list:
2. If you want upload functionality available directly from the TinyMCE editor, add the plugin to the desired button bar:
3. Create a directory on your site where you would like to have the files uploaded (e.g. /my_uploads)
4. Add configurations for the plugin as follows:

relative_urls : false,
file_browser_callback: "ccSimpleUploader",
plugin_ccSimpleUploader_upload_path: '../../../../uploads',
plugin_ccSimpleUploader_upload_substitute_path: '/tinymce/uploads/',

Change the 'plugin_ccSimpleUploader_upload_path' variable to represent the relative path from the ccSimpleUploader plugin directory to your upload directory. Change 'plugin_ccSimpleUploader_upload_substitute_path' to represent the absolute path to your upload directory from the root of your site (e.g. if you create a directory 'uploads' in your 'public_html' directory, the absolute path would simply be '/uploads').

Cross your fingers, load/re-load the page hosting your tinyMCE editor, and hopefully enjoy the added functionality.
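Putting steps 1 through 4 together, a complete init call might look roughly like this (a hedged sketch: the mode, theme, and button values are placeholders for your own setup, not requirements of the plugin):

```javascript
tinyMCE.init({
    mode : "textareas",
    theme : "advanced",
    // step 1: add ccSimpleUploader to the plugins list
    plugins : "ccSimpleUploader,advimage,advlink",
    // step 2: optionally expose it as a toolbar button
    theme_advanced_buttons1 : "bold,italic,ccSimpleUploader",
    // step 4: plugin configuration
    relative_urls : false,
    file_browser_callback : "ccSimpleUploader",
    plugin_ccSimpleUploader_upload_path : '../../../../uploads',
    plugin_ccSimpleUploader_upload_substitute_path : '/tinymce/uploads/'
});
```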
v0.1 – Initial Revision

I got a couple of questions regarding Drupal integration, so here is the skinny. I don't think I have the "proper" way of doing this, and I shall revisit eventually (unless someone wants to chime in). Anyway, if you have the WYSIWYG plugin in your Drupal installation and you have tinyMCE as your editor, these are the steps:

1. Modify the tinymce.inc file in the WYSIWYG module (in sites/all/modules/wysiwyg/editors):

$plugins['ccSimpleUploader'] = array(
  'path' => $editor['library path'] . '/plugins/ccSimpleUploader',
  'extensions' => array('ccSimpleUploader' => t('Simple File Uploader')),
  'url' => 'http://www.creativecodedesign.com',
  'internal' => TRUE,
  'load' => TRUE,
);

Then, at the bottom of wysiwyg_tinymce_settings($editor, $config, $theme), right before 'return $settings;', add:

$settings["file_browser_callback"] = "ccSimpleUploader";
$settings["theme_advanced_buttons1"] .= ",ccSimpleUploader";
$settings["plugin_ccSimpleUploader_upload_path"] = "../../../../../../../../uploads";
$settings["plugin_ccSimpleUploader_upload_substitute_path"] = "/uploads/";

Obviously, you need to change the upload paths to suit your directory layout. Remember that the relative path is relative to the uploader.php script itself.

Finally, go to the WYSIWYG module configuration, and under "Buttons and Plugins" check off "Simple File Uploader". That should do it.
OPCFW_CODE
This blog has been going for nearly a year now - our first post was on January 29th. Airsource has been around for a little longer than that - I quit my job at QUALCOMM to work full-time at Airsource in June 2006. We're approaching the end of the year, so what better time for a bit of a review. I'm going to stick to the technical side of things - I'm sure Nick will have something to say, and maybe one of our new employees will want to give their view of things too. There are several key things I've learned in my time at Airsource. The most important one is that when it's your own company, it's not enough to be good, or even just better than the next guy. You need to be great. Every time I write code, at the back of my head is the feeling that some day this code may be run by a customer. And it had better work, because if it doesn't, one way or another it will be my problem. No matter how principled you are, that ethos simply doesn't apply when you're working for a large company. You write the code, you test it, the QA department passes it, and you walk away. You don't even have to do any sales! The corollary of this is that it's not just enough to write good software. It has to be the right software. By that, I mean that you need to make absolutely sure that you know what the customer is asking for - and that what the customer is asking for is really what they want. And then, you need to deliver that, deliver it well, and ideally deliver just a little bit more. You don't want to give the farm away, when consultancy is what keeps a roof over your head, but you do want to give the customer a warm fuzzy feeling. If you are writing a product, you don't develop the thing in a clean room and then unleash it on an unsuspecting public. Or if you do, it will probably flop. You do some market research first. You do usability testing. You go out there and find out what people want. 
In the same way, when working on a client project, it's your responsibility to make sure you are doing what the customer wants. And remember, even if what the customer is asking for sounds stupid, there's a reason behind it. At least when you're a small software company, your customers are almost guaranteed to be making a lot more money than you are. Which means they're doing something right. When Airsource finishes a customer project, we send someone along who didn't write any of the code, and had as little involvement as possible with the project. They sit down with the customer, and do a post mortem. The customer gets the chance to voice any and all complaints that they have. They are surprisingly frank. I've had some feedback about me that people would never give to my face. And then, when we've got the feedback, we sit down together at the office, and figure out how to make the next project an even better experience for the customer.
OPCFW_CODE
fix: not compatible with the new version of Flutter Description Looks like very_good_cli is not compatible with the latest Flutter version. When creating an app project for the first time I get this error: ✓ Generated 240 file(s) (0.2s) ✓ Running "flutter packages get" in ./my_app (1.1s) Unhandled exception: ProcessException: Standard out Resolving dependencies... Standard error Because my_app depends on flutter_localizations from sdk which depends on intl 0.18.0, intl 0.18.0 is required. So, because my_app depends on intl ^0.17.0, version solving failed. Command: flutter packages get #0 _Cmd._throwIfProcessFailed (package:very_good_cli/src/cli/cli.dart:145:7) #1 _Cmd.run (package:very_good_cli/src/cli/cli.dart:97:7) <asynchronous suspension> #2 Flutter.packagesGet.<anonymous closure> (package:very_good_cli/src/cli/flutter_cli.dart:101:11) <asynchronous suspension> #3 _runCommand (package:very_good_cli/src/cli/flutter_cli.dart:304:13) <asynchronous suspension> #4 Flutter.packagesGet (package:very_good_cli/src/cli/flutter_cli.dart:87:5) <asynchronous suspension> #5 installFlutterPackages (package:very_good_cli/src/commands/create/templates/post_generate_actions.dart:28:5) <asynchronous suspension> #6 VeryGoodCoreTemplate.onGenerateComplete (package:very_good_cli/src/commands/create/templates/very_good_core/very_good_core_template.dart:20:5) <asynchronous suspension> #7 CreateSubCommand.runCreate (package:very_good_cli/src/commands/create/commands/create_subcommand.dart:226:5) <asynchronous suspension> #8 CreateSubCommand.run (package:very_good_cli/src/commands/create/commands/create_subcommand.dart:200:20) <asynchronous suspension> #9 CommandRunner.runCommand (package:args/command_runner.dart:212:13) <asynchronous suspension> #10 VeryGoodCommandRunner.runCommand (package:very_good_cli/src/command_runner.dart:211:18) <asynchronous suspension> #11 VeryGoodCommandRunner.run (package:very_good_cli/src/command_runner.dart:148:14) <asynchronous suspension> #12 main 
(file:///Users/arturograu/.pub-cache/hosted/pub.dev/very_good_cli-0.14.0/bin/very_good.dart:5:24)

Steps To Reproduce
Make sure your Flutter version is 3.10.0
Run very_good create flutter_app my_app --desc "My App"

Expected Behavior
The command should execute successfully, creating a new Flutter project.

Hi guys, there's also another bug related to the missing RunnerTest config folder for the iOS platform.

Yeah, we are upgrading everything to Dart 3 soon.

🐻 (bear) with me, there's lots of code to update

Yeah, just released 0.15.0. This should be fixed.

I had the same issue running very_good create flutter_app test

If you did this, just don't use test as a name for your app.
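Until you can upgrade to the fixed CLI release, one common stopgap (a hypothetical workaround, not something suggested in this thread) is to override the intl constraint in the generated my_app/pubspec.yaml:

```yaml
# Force resolution to the intl version flutter_localizations (Flutter 3.10) ships with.
dependency_overrides:
  intl: ^0.18.0
```

After adding the override, re-run flutter packages get in the project directory.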
GITHUB_ARCHIVE
You can read at any time, but if you are concerned about the speed and difficulty of development, please use these decision-making methods to help you choose a language. The choice of your first language will depend on the type of project you want to work on, who you want to work for, or how easy you want it to be. Our skilled web developers have compiled a list of six coding programs considered the best programming languages, so that as you begin your journey as a software developer you can understand which programming language is right for you, your interests, and your career goals. To make it easier for you to navigate modern web development, we want to emphasize that Linux is the lingua franca of web development servers. Because any language or framework can be used to write APIs, some may be better and more efficient to use than others. Python's Flask and Node.js's Express have become the leading frameworks and languages for building RESTful APIs for any web application. Python APIs are highly scalable and consistent in speed, and Flask is one of the best frameworks you can use to build Python APIs. Python is a fast, easy-to-use, and widely used programming language for developing web applications; it provides excellent library support, has a large community of developers, and has many applications that make it a flexible and powerful choice for your app. Django, a Python web development framework used by some well-known names like Google, YouTube, and Instagram, is designed to make API creation faster and easier. Libraries and frameworks like these help speed up the development process by handling many of the duplicate tasks involved in building an API.
One of Python’s greatest assets is its large collection of tools and libraries that give you access to a lot of pre-written code and shorten your application development time. Java provides APIs for various functions such as website connection, networking, XML parsing, resources, and more. This is one of the main reasons it has become such a popular backend language for building different types of applications. Whenever a technical application is created, the IT department assesses the needs and decides which programming language to use internally. If you choose the right language, you can build the backend of your software faster. If you already know a language, it will be easier for you to develop with it, understand the concepts involved, and master it more quickly. Java has many related tools that improve its performance and make it easier to scale your application, but the learning curve can be a steep climb, especially if you are new to Java. Python has a simple, easy-to-use syntax, making it an ideal language for anyone who wants to learn programming for the first time. Python has a simple, English-like syntax, making Python code understandable even for hobbyists. Rather than jumping into strict syntax rules, Python reads like English and is easy for newcomers to understand, allowing you to gain a basic understanding of coding techniques without getting caught up in the small details that are often important in other languages. C# is not only the flagship of Microsoft's development programs but also the language mobile developers use to build cross-platform applications with Xamarin. C# can be used for operating system development, new programming languages, graphics and design, game development, application development, web browsers, programming-language tooling, medical applications, maths and engineering, business tools, and forums.
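To make the RESTful-API discussion concrete, here is a minimal JSON endpoint using only Python's standard library (a sketch with a made-up route and data; Flask or Express would express the same thing in fewer lines):

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class RecipeHandler(BaseHTTPRequestHandler):
    """Serves one JSON resource, /api/recipes."""

    def do_GET(self):
        if self.path == "/api/recipes":
            body = json.dumps([{"id": 1, "name": "pancakes"}]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the demo quiet

# To serve: HTTPServer(("127.0.0.1", 8000), RecipeHandler).serve_forever()
```

A framework like Flask wraps this boilerplate (routing, headers, JSON encoding) behind decorators, which is why it dominates for building Python APIs.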
OPCFW_CODE
Where is private key encryption used?

In contrast, private-key encryption is primarily used to protect – and provide access to – large data stores, such as disk drives, confidential information, and the like. Not long ago, encryption was only used to protect government information, such as articles of faith, passports, and so forth.

What is the private key in network security?

A private key, also known as a secret key, is a variable used in cryptology to encrypt and decrypt data. Only the secret key's creator and those who must decrypt the data should have access to it.

How do I find my public key and private key?

In fact, none of the methods known to date can calculate an RSA private key from a public key, ciphertext, and plaintext, regardless of whether padding is used or e's value is equal to 3. A working private key can be found by factoring the public modulus, which is the most commonly used method.

What is public key and private key in network security?

A public key is used to encrypt the plain text so that a cipher text is created, and a private key is used for decrypting the cipher text so that the message can be read.
In private-key cryptography, a single secret key is used. In public-key cryptography, an additional key is kept secret.

What is an example of private key encryption?

In PreVeil, we employ elliptic-curve cryptography with Curve-25519 and NIST P-256 as some examples of public/private key encryption. Others are RSA and DSS (the Digital Signature Standard).

How are private keys encrypted?

With symmetric encryption, all secrets are kept in one place: the same key encrypts and decrypts. With a key pair, if you produce ciphertext with the private key, the public key is used to decrypt it. Such ciphertexts can be used as parts of digital signatures and can be used to authenticate those signatures.

When would you use private key encryption?

Proof of authenticity is achieved through encryption using the private key. If person 1 encrypts a message with their private key, person 2 can decrypt it with person 1's public key, proving that person 1 is the one who sent it, because only they hold that private key.

How does private key work?

Private keys enable the owner to encrypt and decrypt data, while the public key provides encryption to anyone, but only the private key can decrypt the data. Data can thus be securely sent to any owner of a private key.

What are private keys used for?

Data can be encrypted as well as decrypted using the private key. Each party exchanging encrypted sensitive information will have a key to communicate with.

What does private key contain?

You generate the private key on your server or with any other tool you use. Once the CSR has been created from it, the SSL certificate, or public key, is issued. Encryption and decryption of information can only be done using these keys.

How do I find public and private key pairs?

To check whether a certificate and key match, run the following: openssl x509 -noout -modulus -in public.crt | openssl md5 > /tmp/crt.pub. Note: Replace public.crt with your certificate file. Then, for the key, run: openssl rsa -noout -modulus -in private.key | openssl md5 > /tmp/key.pub.
Note: Replace private.key with your key file. Compare /tmp/crt.pub and /tmp/key.pub; if the two hashes match, the key and certificate belong together.

What do you do with a public and private key?

With a private (symmetric) key, encryption and decryption of data are both performed with the same key, used by sender and receiver. With a key pair, data is encrypted with the public key, and if it needs to be decrypted, the private key is used. The private-key mechanism is faster; the public-key mechanism is slower.

What is public key and private key with example?

Private key: remains in the confidential use of two individuals; there is the possibility of the key getting lost, which will render the system void. Public key: available to everyone through a publicly accessible directory; since the key is publicly held, there is no possibility of loss.

Can a public key decrypt a private key?

This is what public-key encryption means: the private key can only be used to decrypt public-key-encrypted data, and the public key can only be used to decrypt private-key-encrypted data. Also known as asymmetric encryption, public-key encryption uses a public key to encrypt data.

What is public key in network security?

A public key is an integral part of cryptography that encodes data with a large numeric value. Keys may be generated by computer programs, but in most cases they are provided by a trusted authority and made available to everyone through a publicly accessible repository.

What is the difference between private and public SSH keys?

A user's private key should be encrypted and kept safe, as it is secret and should not be shared with others. A user can freely share his or her public key with any SSH server he or she wishes.

How does public key and private key work?

This type of cryptography is also known as asymmetric cryptography, since it uses public and private keys. Messages are encrypted with one component of the pair and decrypted with the other. A person can only decode a message encoded with a public key if their private key matches.
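The encrypt-with-one-key, decrypt-with-the-other relationship described above can be demonstrated with textbook RSA in a few lines of Python (deliberately tiny, insecure numbers: a sketch of the math, not production cryptography):

```python
# Textbook RSA with tiny primes: the key pair is (e, n) public, (d, n) private.
p, q = 61, 53
n = p * q                    # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)    # anyone can encrypt with the public key
recovered = pow(ciphertext, d, n)  # only the private key holder can decrypt
assert recovered == message

# The same relationship run in reverse gives signatures:
signature = pow(message, d, n)          # "sign" with the private key
assert pow(signature, e, n) == message  # anyone can verify with the public key
```

Real systems use moduli of 2048 bits or more plus padding; the toy numbers here only illustrate why the two keys are interchangeable for encryption versus signing.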
OPCFW_CODE
:sparkles: Multi-dimensional operational domain computation Description This PR mainly introduces multi-dimensional operational domain computation. That is, it provides a framework for performing physical simulations over arbitrary sweep dimensions. Currently, the dimensions epsilon_r, lambda_tf, and mu_minus are supported, but new ones can be added easily. This PR also incorporates various performance improvements, particularly concerning multithreading in operational domain computation. To this end, it abandons the previously utilized C++ execution policies in favor of manual thread management. This not only makes the code less platform-dependent and offers increased performance in the Python bindings, but it also enables us to tweak load-balancing, particularly for grid search, which is now really fast given sufficient CPU cores. Grid search and random sampling support true multi-dimensional operational domain sweeps. Flood fill supports two- and three-dimensional sweeps. Contour tracing is limited to two dimensions by design. Checklist: [x] The pull request only contains commits that are related to it. [ ] I have added appropriate tests and documentation. [ ] I have added a changelog entry. [ ] I have created/adjusted the Python bindings for any new or updated functionality. [ ] I have made sure that all CI jobs on GitHub pass. [ ] The pull request introduces no new warnings and follows the project's style guidelines. @Drewniok this PR currently conflicts heavily with main. Also, I have not yet adjusted all of the Python APIs regarding the new operational domain interface because I assumed this would make matters worse. I opened this PR already because you said you would be interested in having a look at the updated code and maybe even assisting with resolving conflicts. Thank you so much for your feedback in advance! @marcelwa codecov seems to have some issues https://github.com/codecov/codecov-action/issues/1547. 
@marcelwa codecov seems to have some issues https://github.com/codecov/codecov-action/issues/1547. Yes. For now let's assume it's a problem on Codecov's side. If the error persists, let's investigate. @marcelwa codecov seems to have some issues codecov/codecov-action#1547. Yes. For now let's assume it's a problem on Codecov's side. If the error persists, let's investigate. @marcelwa It doesn't always seem to fail :) @marcelwa many thanks! I assume the CLI issue is fixed, right? @marcelwa many thanks! I assume the CLI issue is fixed, right? Ah right, I forgot to mention that. Unfortunately, I just couldn't find the source of the problem there 😕 the error occurs when you pass parameter ranges to the CLI where the min value is larger than the max value. On first try, the correct error handling is triggered. When you pass the exact same command a second time, no error handling happens (and I think OpDom is executed with default parameters). Next time, the error handling is triggered again, and so forth. @marcelwa many thanks! I assume the CLI issue is fixed, right? Ah right, I forgot to mention that. Unfortunately, I just couldn't find the source of the problem there 😕 the error occurs when you pass parameter ranges to the CLI where the min value is larger than the max value. On first try, the correct error handling is triggered. When you pass the exact same command a second time, no error handling happens (and I think OpDom is executed with default parameters). Next time, the error handling is triggered again, and so forth. okay, that's a pity. Should we then leave a command in cli/op_domain for future reference? @marcelwa many thanks! I assume the CLI issue is fixed, right? Ah right, I forgot to mention that. Unfortunately, I just couldn't find the source of the problem there 😕 the error occurs when you pass parameter ranges to the CLI where the min value is larger than the max value. On first try, the correct error handling is triggered. 
When you pass the exact same command a second time, no error handling happens (and I think OpDom is executed with default parameters). Next time, the error handling is triggered again, and so forth. okay, that's a pity. Should we then leave a comment in cli/op_domain for future reference? That's a good idea. Thanks!
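The grid-search idea (sweep every combination of parameter values and farm the simulations out to a thread pool) can be sketched in a few lines. This is illustrative Python with made-up parameter values and a stand-in predicate, not fiction's actual C++ API:

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import product

def is_operational(params):
    """Stand-in for a physical simulation of one parameter point."""
    epsilon_r, lambda_tf, mu_minus = params
    return epsilon_r + lambda_tf > -mu_minus  # toy operational criterion

# Three sweep dimensions, mirroring the PR's epsilon_r / lambda_tf / mu_minus
# (the concrete values below are invented for the example).
sweep = {
    "epsilon_r": [5.0, 5.5, 6.0],
    "lambda_tf": [4.5, 5.0],
    "mu_minus": [-0.32, -0.28],
}

grid = list(product(*sweep.values()))  # every combination: 3 * 2 * 2 = 12 points
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(zip(grid, pool.map(is_operational, grid)))

operational = [p for p, ok in results.items() if ok]
print(f"{len(operational)}/{len(grid)} parameter points are operational")
```

Because every grid point is independent, load-balancing reduces to chunking the flattened product across workers, which is why grid search parallelizes so well given enough cores.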
GITHUB_ARCHIVE
Moon phases have been pivotal in the development of human culture, history, science, and art - providing a reliable way of timing the seasons, as well as providing inspiration for folklore and myth in every culture on earth. With Pyphoon, you can easily track, predict, and visualise phases of the moon via delightful ASCII art, without ever leaving your Linux terminal.

What's so great about the moon? Everything on the planet is affected by the moon. From tidal stresses deep in the earth's crust, to the waves that overtop the sea wall during a storm. Our early ancestors hunted by the light of the moon, and later, with the development of agriculture, timed the turning of the seasons, and the planting of crops. Even today, the lunar calendar is used by several major religions and cultures to schedule festivals, celebrations, and other significant events. Countless column inches are given over to horoscopes every day, so it's no surprise that people of a mystical mindset - including the Linux Impact AI psychic cat - are interested in its current phase. And you can't time your next werewolf transformation for minimum embarrassment or maximum mayhem without knowing when the next full moon is.

Pyphoon gives you an ASCII moon in your terminal

What's the point? you may ask. After all, it's trivial to draw back the curtains or step outside to check the moon's current phase. But sometimes the moon isn't visible. You won't be able to see the moon if it's at its zenith during your lunch break, nor if it's on the other side of the planet. Sometimes it's cloudy, and while you may be able to discern its glow through the clouds, you can't tell its phase. Sure, you could query your favourite search engine, but that would involve leaving your beloved Linux terminal. Pyphoon is a Python app to show you the phase of the moon as an ASCII art representation, and is the latest incarnation of a 1979 program written in Pascal, and later in C.
Did you know that the orientation of the moon is reversed depending on which hemisphere you're in? We didn't, but it makes sense, and Pyphoon will change your terminal moon's appearance accordingly. Pyphoon will also show the moon phase for any date you care to give it, so if you need a visual representation of the moon phase for your next date night or anniversary, Pyphoon is exactly the tool you need. Install Pyphoon on Linux to see the moon in your terminal If your distro comes with support for snap packages (and you haven't disabled it), the easiest way to install Pyphoon is to pop open a terminal and enter: sudo snap install pyphoon --edge Otherwise, make sure you have git, python, and pip installed, then enter: git clone https://github.com/chubin/pyphoon && cd pyphoon pip install -r requirements.txt python setup.py install That's it. You can start Pyphoon by entering pyphoon in any terminal window. Additional options for Pyphoon on Linux The previous command gets you the current phase of the moon in the northern hemisphere, along with associated information including the time since the last full moon and last quarter. That may not be exactly what you need. To view the moon phase for a given date - either in the future or the past - use the desired date as an argument. For instance, to find the moon phase on the very first Christmas day, enter that date in YYYY-MM-DD form. Unfortunately Pyphoon doesn't accept dates pre-common era. So if you were planning on using it to look at the moon as dinosaurs, Moses, Hammurabi, or Qin Shi Huang would have seen it, you're out of luck. Supposedly, the -l or --language switch followed by your preferred language changes the language, but we were unable to make this work. Adding -x will show you the moon phase for your given date without additional information, while the -s switch allows you to specify whether you're in the northern or the southern hemisphere (it's north by default). 
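If you're curious how a program can know the phase for an arbitrary date at all, the back-of-the-envelope version is simple calendar arithmetic: count days since a known new moon and reduce modulo the synodic month. This is only a rough illustration of the idea - it is not Pyphoon's actual algorithm, and the epoch and phase buckets below are my own choices:

```python
from datetime import date

SYNODIC_MONTH = 29.530588853      # mean length of a lunation, in days
EPOCH = date(2000, 1, 6)          # a known new moon (6 January 2000)

def moon_age(d: date) -> float:
    """Approximate days since the last new moon for date d."""
    return (d - EPOCH).days % SYNODIC_MONTH

def phase_name(age: float) -> str:
    """Map the moon's age in days to one of eight coarse phase names."""
    names = ["new", "waxing crescent", "first quarter", "waxing gibbous",
             "full", "waning gibbous", "last quarter", "waning crescent"]
    return names[int((age / SYNODIC_MONTH) * 8) % 8]

# The day Cook landed at Silver Beach, as used later in the article:
print(phase_name(moon_age(date(1770, 4, 29))))
```

Real tools refine this with the moon's elliptical orbit and the exact time of day, which is why a dedicated program beats mental arithmetic.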
If you wanted to see the moon as it appeared when James Cook first set foot on the Australian landmass at Silver Beach, you could enter: pyphoon 1770-04-29 -x -s south For some extra awesomeness, try using cool-retro-term to view the moon phase as it was on July 20th, 1969, and pretend you're behind a desk in mission control. Pyphoon isn't the only cool terminal tool out there Contrary to popular opinion, the Linux terminal is a tremendously fun place to hang out, and you can live a fulfilled and happy life without ever leaving the command line. If you're ever curious about your most used shell commands, you can use MUC to find out!
OPCFW_CODE
Using separate tool frameworks to design, deploy and monitor environments creates integration, collaboration, and reporting complexity, leading to duplicated functionality, integration complexity, inefficient processes and substantial operational costs. SCAIL converges many different IT tool frameworks into a unified platform. Integrating identity, inventory, configuration, workflow, time-series, and logging into a single source of truth not only provides integrated access to various types of data, but it also surfaces additional insights at the intersection of that data. The time, cost and complexity of integrating a wide variety of hardware and software endpoints is significant. Being able to perform the same set of operations and having a common user experience (regardless of the endpoints being managed) are imperative to managing solutions at scale. SCAIL uses language-specific Endpoint Adapters to integrate and abstract the specifics of each type of hardware and software endpoint under management. The result is a consistent API and web user interface experience for common lifecycle operations. From server hardware, firmware, operating systems, hypervisors, container runtimes, applications, and back-office systems, to industry-specific IoT devices, SCAIL supports very diverse endpoints in a single platform. SCAIL is a cloud-native framework of web-scale micro-services, deployed in a distributed architecture. Whether it's automating private cloud servers in a data center, deploying edge compute at locations such as retail offices and production facilities, or operating multi-vendor CPE devices across the country, SCAIL supports millions of geographically distributed hardware and software endpoints. With intimate visibility from low-level firmware through application software, SCAIL can identify outdated versions associated with known Common Vulnerabilities and Exposures (CVEs) and automatically remediate them. 
SCAIL significantly reduces the risk of security vulnerabilities in customer deployments by simplifying the process of keeping firmware and software up to date. Over time, deployments need maintenance. Whether it's capacity changes, migrations, upgrades, or end-of-life events, planning these changes can be even more complex than the original deployment. Because OasisWorks SCAIL retains a persistent record of the configuration changes and workflows it used to deploy environments, day-n operational changes such as environment duplication, configuration adjustments and re-deployments become much simpler when the scope of change is limited to deltas from the original data. This leads to increased planning velocity, better organizational agility and increased productivity, ultimately reducing operational cost. Unlike most infrastructure-as-code tools, which automate using source files and fire-and-forget scripts that must be integrated with a continuous integration tool to retain any history, SCAIL uses persistently stored transactional workflows and retains as-built records to provide organizational consistency, repeatability, and traceability. SCAIL offers state-of-the-art multi-tenancy, organized in flat or hierarchical relationships, and very sophisticated Role-Based Access Control (RBAC) capabilities. SCAIL reduces the deployment time of highly complex cluster deployments from months to hours, significantly reducing time-to-revenue and cost. SCAIL's patented web user interface makes managing complex clouds extremely intuitive, allowing non-DevOps staff to perform highly complex tasks. Real-time operational data collection and persistent storage provide immediate visibility into endpoint health. Event and KPI collection and alarm notifications simplify triage and remediation, reducing impacts and increasing overall uptime.
OPCFW_CODE
M: Show HN: Chorus - It tells you what your customers think and feel - Trindaz http://www.getchorus.com/video/ R: noahth Curious - what is your bounce rate like? I'm in time-wasting mode so I watched a bit of the video to get an idea of what the product was but I would have much preferred to see some bullet points - or anything really - other than the video to give me an idea of what Chorus was about. R: rottencupcakes I agree completely. That was a terrible page to link to. I couldn't even be fussed. Why would you link to your video page instead of your front page (<http://www.getchorus.com/>), which I assume is at least somewhat optimized for conversion/understanding? R: dpcan I went to the homepage, and I cannot figure out what this company does. At all. I went to the Learn More page. Still don't know. Checked Case Studies, erm? So, I "think" that they may already have data, and allow people to analyze it? Or does it harvest info? I'm not sure. Does it look at searches. Seriously, I don't understand. R: jaredsohn Linking to your video page here was confusing. I wanted to quickly understand what your startup does and I can't do that from the page you linked to (not set up to watch a video). I assumed that you linked to your home page so I had to dig around awhile before I could figure out what it was. Just including the "Customer Radar - High engagement, Increased Customer Satisfaction, All in Real-time" text that is found on most of the other pages on that page would have made the experience a lot better; specifically pointing out a link to the home page would be helpful, too. (Tip for others: Click on Home after clicking on the link for a list of bullet points.) R: goodside Just curious: Are you aware that Greenplum, now a division of EMC, has had a data analytics product called Chorus since April of 2010? <http://www.greenplum.com/products/chorus> R: dools This looks like a thoroughly useful product. 
I love the fact that it basically tells you what the most important thing you should work on to improve customer happiness is! If you have lots of customers making lots of noise about lots of different issues it's going to be very hard to prioritise which of the various and apparently equally important squeaky wheels should get the grease (ie. your limited time) and this looks to me as though it would be an invaluable asset in helping make those types of decisions. R: Trindaz We're quoting you on that! R: typicalrunt Minor correction: On your front page (<http://www.getchorus.com/>) you have a carousel of screenshots from your product. One of the screenshots (Trending Topics) is of spreadsheet/tabular data where the first item says "Frustrated (Repitition)" [sic]. You should remove the spelling mistake from the image as it is very apparent, being the first cell in the spreadsheet. R: seanmccann Did you guys get rejected during the interview or application stage? R: tbull007 At the interview stage (I'm not involved in the app, but I know Dave). R: emeltzer Tell me exactly what this does, please? R: Trindaz It takes in anything your customers say in real time (tweets, blogs, surveys, but mainly email) Then it analyzes all the data, figuring out who's happy, who's unhappy. You can then search the results. Main example: type in a product that your company sells, Chorus will tell you how your customers feel about it, how the feelings are changing over time, and what you need to address now to make your customers more happy. R: baltcode your olark widget makes firefox crash (I think if you have firebug installed). R: Trindaz Couldn't get it to crash with Firefox/Firebug. What OS are you using? R: apsurd I did an eye roll after I saw the title. Mentioning that you were rejected comes off as an intention to "prove PG wrong". You should build a great app to do yourself and your customers justice, not to one-up somebody. 
The pitch needs to be simpler, I gave your video a chance but you showed me how to enter fields on a sign up form. Didn't finish the video so can't comment on the product. R: Trindaz Good point. We edited the title to something less likely to be construed as proving PG wrong. R: apsurd Thank you for taking my criticism as constructive. I don't mind the downvotes as it's great to see feedback being handled positively! All the best to you and fwiw as someone else mentioned, I specifically clicked on this post because I was in "I need a distraction" mode. So I did give your video a shot, but it's hard to listen to 5 minutes of talk. Just from my perspective I'd like the core of the message to hit me in the face in the first 30 seconds. Then you can drill down into the specifics as time goes on. If you research how newspaper articles are written they use the same concept. They give you the core of the story in the first paragraph. Then as you read more the details and nuances of the article are better articulated. HTH
HACKER_NEWS
Battery powered cameras from Reolink offer remote viewing and management even when the cameras are behind mobile network data connections. Based on this research by Nozomi Networks Labs and the camera_proxy project, I’ve been able to capture and inspect the communication protocol used specifically by the Argus 3 camera. I used version 8.2.6 of the Reolink client for MacOS (which is an Electron app) and Wireshark to capture and decode the traffic between the software and the internet. The decoding of their proprietary packets was possible thanks to the Baichuan/Reolink proprietary IP camera protocol dissector for Wireshark bundled with the camera_proxy project. In the description below we’ll use the following addresses and names to describe the network participants:

10.0.0.1 is the local IP of the app connecting to the camera remotely.
126.96.36.199 is the public IP of the app on the internet.
188.8.131.52 is the local IP of the camera.
184.108.40.206 is the public IP of the camera on the internet.
p2p.reolink.com is the hostname of the Reolink P2P server.
220.127.116.11 is the IP address of the Reolink P2P server.
123456789 is the Reolink UID (unique identifier) of the camera.

The Reolink client app uses the publicly accessible Reolink relay to establish a direct connection to the camera for remote management and video/audio communication. This article by Bryan Ford is the best explanation of network hole punching for peer-to-peer (P2P) access. The communication uses UDP packets with a special payload that contains the information needed to establish a direct connection between the client and the camera. The payload is encoded to prevent network address translation (NAT) layers from changing the IP addresses included in the payload. The encoding uses a shared secret key and some additional transformations. 
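The hole-punching idea referenced above boils down to one behaviour: a peer replies to whatever source address and port a UDP datagram actually arrived from, so the reply traverses the same NAT mapping the first packet opened. The loopback sketch below demonstrates only that principle with two plain UDP sockets - it is not Reolink's protocol, and all names are mine:

```python
import socket

# Two UDP endpoints standing in for "client" and "camera".
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
camera = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.bind(("127.0.0.1", 0))   # OS picks an ephemeral port
camera.bind(("127.0.0.1", 0))
client.settimeout(2.0)
camera.settimeout(2.0)

# The client sends first; on a real network this outbound packet is
# what opens (punches) the NAT mapping for return traffic.
client.sendto(b"hello", camera.getsockname())

# The camera replies to the *observed* source address and port rather
# than to any address carried inside the payload - the essence of
# hole punching, and why NATs rewriting payload IPs is a problem.
data, observed = camera.recvfrom(1500)
camera.sendto(b"hello back", observed)

reply, _ = client.recvfrom(1500)
print(reply)  # b'hello back'
```

On the open internet a rendezvous server (here, Reolink's P2P server) is needed so each side can learn the other's public address and port before this exchange can happen.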
The actual magic happens through the dynamic port numbers allocated by the routers between both communicating parties and the ability to route packets back to the originating device by re-using those port numbers. It is likely that the camera sends regular UDP pings to the public P2P server to keep an active connection with the server to enable incoming connections.

1. App Connects to the P2P Server

The client app sends a UDP packet from local port 16577 to a UDP port on p2p.reolink.com with the following XML body (encoded using the algorithm mentioned above). It also sends the same packet to the broadcast address 255.255.255.255 of the local network, which allows local cameras to respond, too:

<P2P>
  <C2M_Q>
    <uid>123456789</uid>
    <p>MAC</p>
  </C2M_Q>
</P2P>

123456789 is the camera UID and MAC is the platform <p> identifier of the connecting client. In cases where the client is connecting to a Reolink DVR (digital video recorder) or NVR (network video recorder) instead of a camera, the wrapping XML element is <D2M_Q> instead of <C2M_Q>. The app appears to send these packets to all IP addresses listed in the A records for the p2p.reolink.com hostname. One of the P2P servers responds with the following payload:

<P2P>
  <M2C_Q_R>
    <reg>
      <ip>18.104.22.168</ip>
      <port>58200</port>
    </reg>
    <relay>
      <ip>22.214.171.124</ip>
      <port>58100</port>
    </relay>
    <log>
      <ip>126.96.36.199</ip>
      <port>57850</port>
    </log>
    <t>
      <ip>188.8.131.52</ip>
      <port>9996</port>
    </t>
    <timer/>
    <retry/>
    <mtu>1350</mtu>
    <debug>251658240</debug>
    <ac>-1700607721</ac>
    <rsp>0</rsp>
  </M2C_Q_R>
</P2P>

while the other IP addresses of the P2P server respond with:

<P2P>
  <M2C_Q_R>
    <devinfo>
      <type/>
      <mac/>
      <bat>0</bat>
      <qr>0</qr>
    </devinfo>
    <rsp>-3</rsp>
  </M2C_Q_R>
</P2P>

Notice how the XML element name in the response, <M2C_Q_R>, is the inverse of the request element name plus an _R suffix, which I imagine stands for response or reply. The response contains the IP addresses and port numbers of the following services: <t> for telemetry (?) 
and some other information which I’m not sure how is used.

2. Client App Registers with the P2P Server

The client app appears to first make a TCP connection to IP 184.108.40.206 and port 9996 from the <t> element in the XML response, possibly to obtain a shared secret key to use for the rest of the connection. I haven’t been able to decode this TLS communication because the client appears to ignore the system proxy settings that would allow capturing and intercepting the TLS traffic. Then it sends an encoded UDP packet to 220.127.116.11:58200 with the following payload:

<P2P>
  <C2R_C>
    <uid>12345678</uid>
    <cli>
      <ip>10.0.0.1</ip>
      <port>10693</port>
    </cli>
    <relay>
      <ip>18.104.22.168</ip>
      <port>58100</port>
    </relay>
    <cid>693000</cid>
    <debug>251658240</debug>
    <family>4</family>
    <p>MAC</p>
    <r>3</r>
  </C2R_C>
</P2P>

12345678 is the UID of the camera we’re connecting to, while <cli> carries the app's local IP 10.0.0.1 and local port number 10693. TODO: Confirm whether the value of <cid> was retrieved during the TCP exchange or generated randomly by the client. The P2P registration endpoint 22.214.171.124:58200 responds with the following UDP payload:

<P2P>
  <R2C_T>
    <dev>
      <ip>126.96.36.199</ip>
      <port>18371</port>
    </dev>
    <dmap>
      <ip>188.8.131.52</ip>
      <port>18371</port>
    </dmap>
    <sid>99933377</sid>
    <cid>555777</cid>
    <rsp>0</rsp>
  </R2C_T>
</P2P>

184.108.40.206:18371 is the IP and port number of the UDP keep-alive connection of the camera on its local network and 220.127.116.11:18371 is the public IP and port, while <sid> and <cid> are the two keys used to establish the connection.

3. App Connects to the Camera

Now the app sends a UDP packet directly to 18.104.22.168:18371, which is the public IP and port number of the camera from the registration response:

<P2P>
  <C2D_T>
    <sid>99933377</sid>
    <conn>map</conn>
    <cid>555777</cid>
    <mtu>1350</mtu>
  </C2D_T>
</P2P>

and the camera responds with:

<P2P>
  <D2C_T>
    <sid>99933377</sid>
    <conn>map</conn>
    <cid>555777</cid>
    <did>576</did>
  </D2C_T>
</P2P>

<did> is the only unique value received from the camera. To be continued…
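As a sanity check on the structure of these payloads, the (decoded) XML can be generated and parsed with nothing but Python's standard library. The helper names below are my own, and the UID/session/connection values are the placeholder values from the captures above - this builds the plaintext bodies only, not Reolink's over-the-wire encoding:

```python
import xml.etree.ElementTree as ET

def build_discovery(uid: str, platform: str = "MAC") -> bytes:
    """Build the plaintext C2M_Q discovery body (before Reolink's encoding)."""
    p2p = ET.Element("P2P")
    q = ET.SubElement(p2p, "C2M_Q")
    ET.SubElement(q, "uid").text = uid
    ET.SubElement(q, "p").text = platform
    return ET.tostring(p2p)

def parse_registration(payload: bytes) -> dict:
    """Pull the address and the two session keys out of an R2C_T reply."""
    t = ET.fromstring(payload).find("R2C_T")
    dev = t.find("dev")
    return {
        "dev": (dev.findtext("ip"), int(dev.findtext("port"))),
        "sid": t.findtext("sid"),
        "cid": t.findtext("cid"),
    }

reply = b"""<P2P><R2C_T>
  <dev><ip>126.96.36.199</ip><port>18371</port></dev>
  <sid>99933377</sid><cid>555777</cid><rsp>0</rsp>
</R2C_T></P2P>"""
print(parse_registration(reply))
```

With the <sid>/<cid> pair in hand, the C2D_T packet to the camera can be built the same way.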
OPCFW_CODE
Reincarnation Of The Strongest Sword God – Chapter 2534 – Frightened Demons "Hahaha! He's doomed now! Once the Vice Guild Leader returns, the city's Great Demon NPCs will have long since taken care of him!" laughed the black-clad Elder standing beside Furious Heart. However, while Demon's Heart's members were feeling elated over Shi Feng's irrational behavior, a sigh came from the Black Dragon. Although Shi Feng had spoken in a calm tone, when his voice was relayed by the Black Dragon, it was loudly amplified. Consequently, everyone within Demon City heard him very clearly. Fortunately, the Black Dragon's attacks were only making the barrier tremble. In other words, the Black Dragon's attacks were still within the barrier's range of tolerance. "What?! He's really going to siege the city!?" "This is Demon City we're talking about! Is he insane!?" When Demon City was first built, the Ten Saints Kingdom had once sent an NPC army to attack it. Back then, however, even when multiple Tier 4 NPCs bombarded the city's defensive magic array with everything they had, the magic array didn't vibrate at all. The NPC army broke through the city's defensive magic array only with the help of a Tier 5 Divine Set. With over a dozen Tier 4 Great Demons taking action, even a Tier 4 Hero should not have dreamed of escaping unscathed, let alone a Tier 3 player like Shi Feng. In their opinion, it would've been best if they could somehow sneak into Demon City and leave immediately. 
They never thought that Shi Feng would actually take direct action against Demon City! These Great Demon NPCs were stronger than human NPCs. The Demon Monarch that ruled the city was a frightening existence comparable to the Heroes of humankind; current players would be helpless against him. However, before anyone could deduce anything from Shi Feng's words, the Black Dragon suddenly disappeared, leaving Shi Feng himself in its place. At this moment, though, Shi Feng held a thick, ancient tome in his hands, and this tome woke an instinctual panic in every Demon player in the city. "He still hasn't gotten serious until now?" Originally, their side had been helpless against Shi Feng. The only thing they could do was hide within Demon City's protection and wait for their Vice Guild Leader's team's return. "What does he mean?" In response to this situation, Demon's Heart's members started smiling. "Sure enough, it is indeed impossible to destroy this magic barrier using only the strength of a Tier 3 Dragon. It seems I have no choice but to get serious." NPCs rarely took action against players. Hence, many players were curious to see the strength of NPCs, particularly Demon NPCs. 
"What power!" Regretful Breeze, who had just exited the bar he frequented, was shocked as he looked at the trembling magic barrier above him. "Demon City's defensive magic array doesn't react so violently even when a Tier 4 NPC goes all-out! Just how powerful is that Black Dragon's attack?" "Dragon! This is the Demon race's city! You don't belong here!" Morpheus proclaimed in a cold tone as he looked at the Black Dragon. 
"Black Flame really is courageous. However, I wonder how long he'll be able to hold up against Morpheus?" Regretful Breeze, however, did not think that Shi Feng was boasting. At the same time, though, he also felt the Swordsman couldn't have a trump card stronger than the Black Dragon.
OPCFW_CODE
How to transfer the concepts of real-time control from "C" environment to "IEC 61131 (CODESYS)"? We are trying to perform a real-time measurement/calculation on sampled data. Our previous experience is based on C programming. I wonder if anyone can help me transfer the following real-time C programming structure into IEC 61131 Structured Text. For a real-time control loop (with a constant loop cycle), we need a start timer, an end timer, and a wait function that works as follows:

while (1)
{
    t_start = timems();              /* get the current processor time in ms */
    /* ... here the function performs the calculation ... */
    t_stop = timems();               /* get the current processor time in ms */
    deltaT = t_stop - t_start;       /* time spent between the start and the end of the loop body */
    waitms(loop_constant - deltaT);  /* wait for the remainder of the constant loop time before the next iteration */
}

Specifically, I'm wondering how we can build these timing structures inside IEC 61131. We can do the delay using TON, I think. However, any advice on how to get the time from the processor is highly appreciated. (This post discusses that it's possible to write the code in C and transfer it to IEC 61131. However, for educational purposes, writing the code inside IEC 61131 is preferred.) p.s. 1: I am working on a SEL-3350 device which is equipped with CODESYS firmware for writing IEC 61131 programs. p.s. 2: After a couple of days of searching, I understood the difference between real-time control based on C programming and real-time control with IEC 61131 (using CODESYS). Basically, when you code on PLC devices, you have the option inside the task manager to set up the properties of the controller's processing cycle. Therefore, unlike "C", there is no need to write an infinite loop (while (1)); the runtime takes care of it. 
For example, in the CODESYS environment, you choose the type of program as "cyclic" and the interval time as your "loop constant", and it will be similar to the C code mentioned above. What is "C" code? And we are not a translation service. @Olaf I believe the OP means the C programming language @TriskalJM: I really love it when people state the obvious because they missed the point. :-) I should have checked your profile, huh, @Olaf? :P Thanks for the comments. I reworded the question. What PLC are you using? Many times PLCs will have a task where you can select how often it scans (e.g. 1ms, 10ms, 100ms, etc.) @Olaf I really like this question. It shows the problem of transforming classic blocking code into cyclically executed code. This has nothing to do with a "translation service" @FelixKeil: That's not how Stack Overflow works. We are neither a translation nor a tutoring service. OP should provide a possible solution and ask about the specific problem he has. Please take the [tour] and read [ask] to get more information. Such questions should not be answered, but voted to close. CODESYS provides the library CmpIecTask, which gives you detailed information regarding the current task. Let your code run in a dedicated task that is triggered cyclically and control everything with the information programmatically read from the task information. All timings, jitters etc. are accessible there.
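For readers without a PLC at hand, the blocking pattern from the question's C loop - measure, compute, sleep away the remainder of the cycle - can be simulated in ordinary Python to make the contrast with a cyclic task concrete. This is only an illustration of the concept (a CODESYS cyclic task does this scheduling for you implicitly), not IEC 61131 code:

```python
import time

def run_fixed_cycle(work, period_s: float, cycles: int) -> list:
    """Run `work` every `period_s` seconds, sleeping away the slack -
    the manual equivalent of a PLC cyclic task."""
    results = []
    for _ in range(cycles):
        t_start = time.monotonic()
        results.append(work())      # the "calculation" part of the loop
        delta = time.monotonic() - t_start
        slack = period_s - delta    # remainder of the constant loop time
        if slack > 0:
            time.sleep(slack)       # in C: waitms(loop_constant - deltaT)
        # if slack <= 0 the cycle overran; on a PLC this would typically
        # trip the task watchdog rather than silently drift
    return results

samples = run_fixed_cycle(lambda: 42, period_s=0.01, cycles=5)
print(samples)  # [42, 42, 42, 42, 42], collected roughly 10 ms apart
```

The point of the comparison: in CODESYS you configure `period_s` as the task's cycle interval and write only the body (`work`); the runtime owns the loop, the timing, and the overrun detection.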
STACK_EXCHANGE
Trial Marriage Husband: Need to Work Hard (百香蜜) – Chapter 1052: No One Would Be Able To Escape Han Yu's words placed a heavy load on Father Han's shoulders. After all, he was claiming that he was bullying a pregnant woman. "Constable Han, I've heard a lot about you. May I ask what you are here for?" "Old Han, you've been a man of military merit, so I won't go around in circles with you. I am here because of this pregnant woman," Han Yu pointed to Lin Qian. "I've already investigated the issue and a lot of people at the air force base have confirmed that Lin Qian's husband, Li Jin, is being held at your home." "Let's go to the Han Family Home and demand for them to return him," Constable Han said as he adjusted his hat. "But, I need you to put on an act." Hence, Lin Qian began to cry hysterically and called the police with the Li Family. Behind her, Father Han was furious, "His wife has almost turned the hospital upside down, yet you're still keeping him in our home. Are you trying to embarrass me?" "What's the point of spreading this news? You will just be called a couple of adulterers. Besides, Li Jin has been unconscious the entire time. Even if others believe the lie you've created, do you think I'd believe you?" "Don't assume that Lin Qian is helpless; she simply hasn't used her connections yet. Don't forget she was born to the Gu Family. Even if we put her family background aside, she still has Tangning backing her up. 
How much longer do you think you'd be able to keep him hidden here?" Father Han expected that Lin Qian would arrive soon, but he had no idea how she would show up. So, Han Yu had ended up coming to the Han Family Home with a group of police officers. After speaking, Father Han turned around and headed for Han Xiao's bedroom. As he stepped inside, he grabbed his daughter by the hair, "What a fine daughter I have. Thanks to you, the police are here at my house. You've completely humiliated me!" "My surname is also Han, but a different Han (written differently in Chinese)." Lin Qian never expected the officer to joke around about something like this. "Father, I had no other choice. I'm not handing him over," Han Xiao held onto Li Jin's hand like she was possessed. "You can wait for Lin Qian to knock on our door." "I haven't been oblivious to the entertainment industry. From what I've seen, Superstar Media has created quite a stir and made a name for themselves. I admire Tangning; she's strong and determined." "Is there anything worse than bullying a pregnant woman? 
Besides, you can't keep lying to yourself forever. Since the police are already involved, it's best you cooperate." After careful consideration, Grandfather Fan still decided that he should find an opportunity to invite Tangning to the Fan Family Home for a chat. Han Xiao watched in a daze as Li Jin lay peacefully in her bed. Deep down, she knew she wasn't dealing with an average person. Father Han amused the police with small talk, but he disliked what his daughter was doing. Because of her, he had to force himself to smile at everyone. The group stood outside the house with the frail Lin Qian and hospital records in hand. "Fine, in that case, please wait a moment." Soon, the constable that Tang Yichen was familiar with arrived at the hospital. He was a muscular man in his early 40s who immediately appeared upright. Most notably, his beautiful dark brown eyes radiated righteousness. Lin Qian nodded her head in understanding, "Constable Han, what should we do now?"
OPCFW_CODE
Make a Strong, Easy-to-Remember Password Using Classical Cryptography?

Passwords can be tough to remember. For example:

H7535637353959595*9608J614625C1313^398583I0397897j^

So Bob wants to make and use a good password for GPG that he never has to remember. He will rarely use this password (asymmetric encryption for off-line storage). When he needs it, he will generate his password with pencil and paper from some key information that is stored in one place: his head. He hopes to employ classical cryptography to turn what he has been unwilling or unable to remember into something that is available. How could Bob make a password strong enough for GPG by using classical methods? Importantly, he wants to avoid "security" through obfuscation.

Some of the characteristics and principles behind the VIC cipher came to mind:

A 5-digit number (truly random) 67106, stretched to 10 digits by chain addition: 67106 + (6+7=13, keep 3) + (7+1=8), etc.<PHONE_NUMBER>
Memorized short phrase: kantscriticalphilosophy (using the first 20 letters). Resulting in<PHONE_NUMBER> and<PHONE_NUMBER>

To make a long story short, following Bob's process, mostly like that of the VIC cipher (chain addition, creating permutations of 1 to 0, digit addition without carries), we arrive here:

<PHONE_NUMBER> NO ADIEUS 3 BCFGHJKLMP 9 QRTVWXYZ

Bob uses the straddling checkerboard on his memorized long phrase: ITISRAININGINAMSTERDAMBUTNOTINPYONGYANG and puts the result in a columnar transposition whose length is three (not a broken transposition). He adds a pepper, if you will, at the end of the transposition rows: *^^ and 11=A, 22=B, etc., 111=a, 222=b, etc., 1111=!, 2222=@, etc.

Result: H7535637353959595*9608J614625C1313^398583I0397897j^

With a little practice, it is not difficult to remember a process like the one the VIC cipher uses.

Questions:
Can a method like this create a password strong enough to use in, let's say, GPG?
What would a strong method using classical cryptography for password generation look like?
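The two mechanical steps Bob relies on, chain addition and the straddling checkerboard, are simple enough to sketch in a few lines of Python. Note that the column digits of the checkerboard below are an assumption (the original digit rows were redacted above): the top row holds N O A D I E U S with digits 3 and 9 left blank as escape digits, so the exact encoding is illustrative only.

```python
def chain_add(seed, length=10):
    # VIC chain addition: append (d[i] + d[i+1]) mod 10 to extend the sequence,
    # e.g. 67106 -> 6710638169 (6+7=13 keep 3, 7+1=8, ...).
    d = [int(c) for c in seed]
    i = 0
    while len(d) < length:
        d.append((d[i] + d[i + 1]) % 10)
        i += 1
    return "".join(map(str, d))

# Straddling checkerboard with the rows from the question. The column digit
# assignment is hypothetical: single digits 0-8 (skipping 3 and 9) for the
# top row, and 3x / 9x two-digit codes for the remaining letters.
TOP = {"N": "0", "O": "1", "A": "2", "D": "4", "I": "5", "E": "6", "U": "7", "S": "8"}
ROW3 = "BCFGHJKLMP"  # two-digit codes 30..39
ROW9 = "QRTVWXYZ"    # two-digit codes 90..97

def checkerboard_encode(text):
    # Map each letter to its one- or two-digit code; skip anything else.
    out = []
    for ch in text.upper():
        if ch in TOP:
            out.append(TOP[ch])
        elif ch in ROW3:
            out.append("3" + str(ROW3.index(ch)))
        elif ch in ROW9:
            out.append("9" + str(ROW9.index(ch)))
    return "".join(out)

print(chain_add("67106"))           # 6710638169
print(checkerboard_encode("SAND"))  # 8204
```

The point of the straddling checkerboard is that frequent letters compress to one digit while rare ones take two, which keeps the digit stream short and removes obvious letter boundaries.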
When an attacker knows how you generate your password (and you should always assume they do), an algorithm applied to the key becomes useless in terms of security. Expanding the keys does not add any security at all. You can only make the key more memorable, which allows for longer initial random keys (which is the only thing you should care about).

@jjj Well, a computationally expensive key-derivation function does add security, because it makes brute-forcing take more time. But of course, computing such a function even once by hand would then have to take years, if not millions of years.

"... a [password] that he never has to remember .... when he needs it, he is going to generate his password [out of] some key information that is stored in one place: his head". So you want Bob to not have to remember anything... by remembering something? This seems really pointless. Just choose a sensible password instead of 51 random characters (as in your first example), which is unnecessarily long for any feasible attack method anyway.

This reminds me of a past question of mine: Strong PHP hashing without salt. TL;DR: the purpose is to use personal information to generate passwords that I can reproducibly generate again.

I fail to see why one would want to use classical or pencil-and-paper tools for derivation. For anyone attacking your technique it will make no difference. An attacker with a modern computer will only brute-force the part you memorized. Any key stretching done on pencil and paper will be a minor nuisance at best; anything done on paper will add no time at all to a brute-force attack. Memorizing something from a high-entropy source using various memory techniques is actually useful. Picking words at random or using acronyms can be helpful to memorize something with sufficient entropy. If this is then used with a proper modern key derivation function, it can be very hard to attack.
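For contrast with hand stretching, this is what machine key stretching looks like. A minimal sketch using Python's standard library; the iteration count and salt here are illustrative choices, not recommendations (in practice the salt must be random and stored alongside the ciphertext):

```python
import hashlib

def derive_key(passphrase: str, salt: bytes, iterations: int = 600_000) -> bytes:
    # PBKDF2-HMAC-SHA256: every guess an attacker makes must repeat all
    # the iterations, which is where the brute-force slowdown comes from.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

key = derive_key("kantscriticalphilosophy", b"example-salt")
print(key.hex())
```

The same stretching done by hand would take the legitimate user just as long as it slows the attacker, which is why pencil-and-paper stretching buys essentially nothing.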
Memory tricks - good. Pen and paper stretching - pointless.

That is indeed the best practice, and the question explicitly states a desire to not rely on security by obfuscation.

If there is no upper bound on the length of the password, the most common suggestion I know of for creating a strong, easily-memorable (for some definition of "easy") password is diceware. The basic idea is that each word is chosen via a roll of 5 d6's (i.e. each word has $6^5 = 7776 \approx 2^{12.92} \approx 2^{13}$ options). The entire password is then some combination of $k$ independent words, giving a password with $\approx 13k$ bits of entropy. You can then choose $k = 6$ (or whatever you want) to get a password with $\approx 80$ bits of entropy. I just generated the password: YearlingExquisiteWorstUnsortedDenoteSkipper. Can I memorize it immediately? No. Could I make up a story in $\approx 10$ minutes to vastly aid memorization? Probably. It also has the (massive) benefit that even relatively technologically unsophisticated users could feasibly remember the generated password. And it has the (again, massive) benefit that it is "just" encoding a standard $\approx 80$-bit password using a (public) word list to aid human memorization. Mathematically, there is nothing non-trivial going on that could be attacked.

No need to use software. Just pick a song you remember all the words to as your word string. The universe of songs is big enough. Then on top of that pick an easily-remembered algorithm and keep it private ("the second and third letters of every word that starts with a consonant") and you'll have no problem keeping appropriately difficult-to-guess strings hundreds of characters in length entirely in your head. You can even write down which song encodes which passwords; without the derivation scheme, even that's not enough to realistically brute-force.
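The dice-rolling step is easy to mechanize. A minimal sketch, assuming you have a word list; the 7,776-entry Diceware file is not bundled here, so the generated list below is a stand-in of the right size:

```python
import math
import secrets

def diceware(wordlist, k=6):
    # secrets.choice draws uniformly with a CSPRNG, which is equivalent to
    # five fair d6 rolls per word when the list has exactly 6**5 = 7776 entries.
    return [secrets.choice(wordlist) for _ in range(k)]

# Stand-in list; substitute the real Diceware word list in practice.
words = [f"word{i:04d}" for i in range(6 ** 5)]

passphrase = "".join(w.capitalize() for w in diceware(words))
print(passphrase)
print(f"~{math.log2(len(words)) * 6:.1f} bits of entropy")
```

With the full list this gives just under 78 bits for six words, which is where the rounded "$\approx 80$ bits" figure comes from.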
@JohnSmith If you are going to store information about your passwords, it should not be in an ad-hoc way --- just use a password manager. If you want your master key to come from some ad-hoc scheme, fine, but I see no reason to do this, as it makes it difficult to argue the quantitative strength of your password. Moreover, training yourself to type some words "wrong" in a way that correlates with your passwords seems fairly suspect.

Judging by the use-case and solution, it's really a pointless thing to do. GPG is used on a computer, so why would you want to create steps to generate the password manually if in the end you'd still need to type it on a keyboard? If anything, it's a possible attack vector if the instructions get leaked, e.g. via a co-located person, or by forgetting them somewhere cameras or people can read them. If you want a strong password that's not often used and is properly created, then just use a password manager to hold both the keypair and a random garbage password for it. A proper password manager can manage multiple password databases, so you wouldn't have a single point of failure. What you would have is an encrypted blob you need one password for. At the same time you can have two password DBs, one for keys, one for passwords, so 2 passwords total. Separate them into two places for yet another layer of safety. So TL;DR: you'd have only N passwords and locations to remember, and everything else would be random, so not brute-forceable in a reasonable time and secured against dictionary attacks as well. Plus, if you don't disclose the locations, then even if your passwords are leaked (social engineering or a simple copy-paste-into-chat mistake), the attacker would still need to get the "treasure chest". A key without a keyhole is just a worthless item. And it's the best thing for memorization because, well, you don't need to remember the GPG password, nor the algorithm that generated it.
I understand your first point, but that is why the use-case is offline storage. I see what you are saying in your latter discussion.

Normally you would use a key derivation function, but since this question is about classical cryptography, I'll stick to the basics. An example of what I'll be doing below can be found on Wikipedia. I'll assume the user can remember multiple words of different lengths, for example ["london", "istanbul", "sheffield"]. You can use a Vigenere cipher with multiple keys. When you do this, the effective key becomes as long as the least common multiple of the key word lengths. Using the example words, we get 72 characters. We then start with a string of 72 "A" characters, and encrypt it using the Vigenere cipher with each key word in turn.

Keywords: ['LONDON', 'ISTANBUL', 'SHEFFIELD']
Least common multiple: 72
Initial key: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
Encrypted with LONDON: LONDONLONDONLONDONLONDONLONDONLONDONLONDONLONDONLONDONLONDONLONDONLONDON
Encrypted with ISTANBUL: TGGDBOFZVVHNYPHOWFEOAEIYTGGDBOFZVVHNYPHOWFEOAEIYTGGDBOFZVVHNYPHOWFEOAEIY
Encrypted with SHEFFIELD: LNKIGWJKYNORDUPSHIWVEJNGXRJVISKEDZSQQWLTBNIZDWPCYLOHMRXGZAMVCAKGDJJTIITB
Final result: LNKIGWJKYNORDUPSHIWVEJNGXRJVISKEDZSQQWLTBNIZDWPCYLOHMRXGZAMVCAKGDJJTIITB

Here's some Python code that does this.
#!/usr/bin/env python3
import math  # math.lcm requires Python 3.9+
import sys

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Build the tableau: vigenere["B"] is the alphabet rotated left by 1, etc.
vigenere = {}
for i, letter in enumerate(ALPHABET):
    a = list(ALPHABET)
    for _ in range(i):
        a.append(a.pop(0))
    vigenere[letter] = a

keywords = [x.upper() for x in sys.argv[1:]]
print("Keywords:", keywords)

# The combined key repeats with period lcm of the keyword lengths.
key_len = math.lcm(*[len(x) for x in keywords])
print("Least common multiple:", key_len)

key = ["A"] * key_len
print("Initial key:", "".join(key))

# Encrypt the running key with each keyword in turn.
for keyword in keywords:
    for i in range(key_len):
        key_letter = keyword[i % len(keyword)]
        index = ALPHABET.index(key[i])
        key[i] = vigenere[key_letter][index]
    print(f"Encrypted with {keyword}:", "".join(key))

print("Final result:", "".join(key))
STACK_EXCHANGE
How to rotate a video by 180 degrees using ffmpeg?

I want to cut a short part of a video and rotate it by 180 degrees. I tried

ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "transpose=2,transpose=2" -c:a copy passing_opponent_alarm_084305.mp4
ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "transpose=2,transpose=2,vflip" -c:a copy passing_opponent_alarm_084305.mp4
ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "transpose=1,transpose=1" -c:a copy passing_opponent_alarm_084305.mp4
ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "transpose=1,transpose=1,hflip" -c:a copy passing_opponent_alarm_084305.mp4
ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "transpose=1,transpose=1,vflip" -c:a copy passing_opponent_alarm_084305.mp4
ffmpeg -ss 70 -display_rotation 180 -i GH030258.MP4 -t 10 -c:a copy passing_opponent_alarm_084305.mp4

and some more random combinations. They do not seem to work. Any ideas?

Try setting the metadata: ffmpeg -ss 70 -i GH030258.MP4 -c copy -metadata:s:v:0 rotate=0 -t 10 passing_opponent_alarm_084305.mp4, or maybe rotate=180. Modifying the metadata (with -c copy) may be the better solution since it doesn't re-encode the video.

Finally I found a very counterintuitive solution:

ffmpeg -ss 70 -i GH030258.MP4 -t 10 -vf "rotate=0" -c:a copy passing_opponent_alarm_084305.mp4

Seems I have to rotate the video by 0 degrees to rotate it by 180 degrees! (But if I just omit this rotate=0, the video is NOT rotated and the output comes out upside down...) One plausible explanation: the source file (a GoPro recording) carries a 180° rotation tag in its metadata; inserting any video filter forces ffmpeg to re-encode and apply the tagged rotation to the actual pixels, so the do-nothing rotate=0 filter ends up producing rotated frames. The exact behavior may vary between ffmpeg versions.
STACK_EXCHANGE
How to get post-receive server hook to run

I’ve just installed a fresh version of GitLab on my own server: {“version”:“15.3.3-ee”,“revision”:“1615d086ad8”}

System information:
System: Debian 11
Proxy: no
Current User: git
Using RVM: no
Ruby Version: 2.7.5p203
Gem Version: 3.1.6
Bundler Version: 2.3.15
Rake Version: 13.0.6
Redis Version: 6.2.7
Sidekiq Version: 6.4.0

I tried to follow this guide to set up a post-receive hook: https://docs.gitlab.com/ee/administration/server_hooks.html

I confirmed the location of the Git repo via console and made the custom_hooks folder:

root@gitlab:/var/opt/gitlab/git-data/repositories/@hashed/4a/44# ls -l
total 12
drwx--S--- 5 git git 4096 Sep 7 14:40 4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5.git
drwx--S--- 4 git git 4096 Sep 7 14:09 4a44dc15364204a80fe80e9039455cc1608281820fe2b24f1e5233ade6af1dd5.wiki.git
drwxr-sr-x 2 git git 4096 Sep 7 14:38 custom_hooks

Created the file with the proper owner. I tried multiple permissions settings.

root@gitlab:/var/opt/gitlab/git-data/repositories/@hashed/4a/44# cd custom_hooks/
root@gitlab:/var/opt/gitlab/git-data/repositories/@hashed/4a/44/custom_hooks# ls -l
total 4
-rwxrwxrwx 1 git git 43 Sep 7 14:34 post-receive

The file is a simple test script

#!/bin/bash
echo "test custom" > /tmp/hook

which runs properly when run manually as the git user:

git@gitlab:~/git-data/repositories/@hashed/4a/44/custom_hooks$ ./post-receive
git@gitlab:~/git-data/repositories/@hashed/4a/44/custom_hooks$ cd /tmp/
git@gitlab:/tmp$ ls -l
total 44
-rw-r--r-- 1 git git 12 Sep 7 14:46 hook

However I can’t get it to run on a push from the local to the remote (GitLab) repo. The repo files do update properly.

Turns out that was not the correct location for the custom_hooks folder. It has to go inside the *.git folder.
STACK_EXCHANGE
Posted: 23 Aug 2017 16:13 EDT Last activity: 25 Nov 2017 7:45 EST

Error when adding Controls to Windows Form in Pega Robotics Studio

Hi, I just started working with Robotics Studio and doing the course. When I add controls to a Windows Form I get the error 'Could not get reference to OpenSpan.Dynamicmembers.Extentions.IExtentionTypeService'. I added a snip of the error. I cannot find any information on this error or how to get rid of it.

I've never seen this error before. I suggest that you open an SR with support, as it appears you need someone who can look at your machine. If you do open an SR, please post the SR number here for tracking purposes. The following article will help you in creating an SR:

Info | 09:52:58.986 AM | 1 | STA | User Data | | | Application path is not writable: C:\Program Files (x86)\OpenSpan\OpenSpan Studio for Microsoft Visual Studio 2015
Info | 09:52:59.002 AM | 1 | STA | User Data | | | PublicAssemblies path: C:\Users\<user>\AppData\Roaming\OpenSpan\PublicAssemblies
Info | 09:52:59.331 AM | 1 | STA | Component Inspector | | | Deserialized 'D:\Documents\Pega Robotics Studio\Projects\CustomersSearching4Stores\CustomersSearching4Stores\Windows Form1.os' in 0.02 seconds
Info | 09:52:59.409 AM | 1 | STA | Project | | | Project 'Main-UI' opened
Info | 09:52:59.508 AM | 1 | STA | Office | | | Attempting to copy office files from C:\Program Files (x86)\OpenSpan\OpenSpan Studio for Microsoft Visual Studio 2015\Office2016
Info | 09:52:59.523 AM | 1 | STA | Office | | | Result from office assembly copy from C:\Program Files (x86)\OpenSpan\OpenSpan Studio for Microsoft Visual Studio 2015\Office2016: SameOfficeVersion
Info | 09:52:59.523 AM | 1 | STA | Shell | | | State changing from Default to Designing
Info | 09:53:00.677 AM | 1 | STA | Project | | | Project 'Main-UI' error check started
Info | 09:53:00.693 AM | 1 | STA | Project | | | Project 'Main-UI' error check finished
Info | 09:53:00.936 AM | 1 | STA | Designer | | | Loaded designer extension.
Info | 09:53:01.132 AM | 1 | STA | Designer | | | Deserialized 'D:\Documents\Pega Robotics Studio\Projects\CustomersSearching4Stores\CustomersSearching4Stores\Windows Form1.os' in 0.23 seconds
Info | 09:53:01.231 AM | 1 | STA | Shell | Windows Form1 | | Designer created and initialized for 'D:\Documents\Pega Robotics Studio\Projects\CustomersSearching4Stores\CustomersSearching4Stores\Windows Form1.os'
Error | 09:53:05.199 AM | 1 | STA | Exception | | | Could not get reference to OpenSpan.DynamicMembers.Extensions.IExtensionTypeService., Verbose Message: General Information

I am seeing this problem when I create a new, empty project and add a Windows Form as my first item in the project. Then I add a control to the form.

Does this behavior happen in every scenario? Or is there something specific that you are doing to recreate it?

I was able to get rid of the error by adding an Automation Item to my project.

I see the issue with Version 14.0.25420.1 (8.0.1053.0). But I tried two different downloads. One is the product that was shipped through the software request and the other one came from the Robotics Architect Training currently available. Both installations behave the same even on different environments and different Windows versions. Could a Microsoft hotfix cause this issue?
OPCFW_CODE
<?php
/**
 * Created by PhpStorm.
 * User: bruno
 * Date: 12/02/2019
 * Time: 11:15
 */

namespace ThalassaWeb\BarcodeHelper\tests\units\ean;

use atoum;

/**
 * Class Calculator
 * Check-digit calculation for EAN
 * @package ThalassaWeb\BarcodeHelper\calculateur
 */
class Calculator extends atoum
{
    /**
     * EAN 13 check-digit calculation
     */
    public function testCleControleEan13()
    {
        $this->given($this->newTestedInstance)
            ->then
                ->string($this->testedInstance->getCleControle("761234567890"))
                    ->isEqualTo("0")
        ;
    }

    /**
     * EAN 8 check-digit calculation
     */
    public function testCleControleEan8()
    {
        $this->given($this->newTestedInstance(8))
            ->then
                ->string($this->testedInstance->getCleControle("7612345"))
                    ->isEqualTo("0")
        ;
    }

    /**
     * UPC A check-digit calculation
     */
    public function testCleControleUpcA()
    {
        $this->given($this->newTestedInstance(12))
            ->then
                ->string($this->testedInstance->getCleControle("04210000526"))
                    ->isEqualTo("4")
        ;
    }

    /**
     * UPC E check-digit calculation
     */
    public function testCleControleUpcE()
    {
        $this->given($this->newTestedInstance(6))
            ->then
                ->string($this->testedInstance->getCleControle("42526"))
                    ->isEqualTo("1")
        ;
    }

    /**
     * EAN 14 check-digit calculation
     */
    public function testCleControleEan14()
    {
        $this->given($this->newTestedInstance(14))
            ->then
                ->string($this->testedInstance->getCleControle("2419730963892"))
                    ->isEqualTo("5")
        ;
    }
}
STACK_EDU
Edit/Delete not working for some Cloud objects Steps to reproduce: Delete: Go to controller of the object. Select one or more. Configuration => Delete selected Edit: Go controller of the object. Select one. Configuration => Edit selected . Change anything. Click Save button. Cloud Volume changes cannot be edited => Error caught: [MiqException::MiqVolumeUpdateError] undefined method 'attributes' for nil:NilClass /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/openstack/cloud_manager/cloud_volume.rb:36:in 'block in raw_update_volume' /Users/zita/Desktop/ManageIQ/manageiq/app/models/mixins/provider_object_mixin.rb:15:in 'block in with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/ext_management_system.rb:365:in 'with_provider_connection' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/storage_manager/cinder_manager.rb:16:in 'with_provider_connection' /Users/zita/Desktop/ManageIQ/manageiq/app/models/mixins/provider_object_mixin.rb:12:in 'with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/openstack/cloud_manager/cloud_volume.rb:129:in 'with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/openstack/cloud_manager/cloud_volume.rb:35:in 'raw_update_volume' /Users/zita/Desktop/ManageIQ/manageiq/app/models/cloud_volume.rb:61:in 'update_volume' /Users/zita/Desktop/ManageIQ/manageiq/app/controllers/cloud_volume_controller.rb:349:in 'update' Cloud Volume cannot be deleted => Error caught: [NoMethodError] undefined method 'status' for nil:NilClass /Users/zita/Desktop/ManageIQ/manageiq/app/models/mixins/provider_object_mixin.rb:15:in 'block in with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/ext_management_system.rb:365:in 'with_provider_connection' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/storage_manager/cinder_manager.rb:16:in 'with_provider_connection' 
/Users/zita/Desktop/ManageIQ/manageiq/app/models/mixins/provider_object_mixin.rb:12:in 'with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/openstack/cloud_manager/cloud_volume.rb:129:in 'with_provider_object' /Users/zita/Desktop/ManageIQ/manageiq/app/models/manageiq/providers/openstack/cloud_manager/cloud_volume.rb:47:in 'validate_delete_volume' /Users/zita/Desktop/ManageIQ/manageiq/app/controllers/cloud_volume_controller.rb:410:in 'block in delete_volumes' /Users/zita/Desktop/ManageIQ/manageiq/app/controllers/cloud_volume_controller.rb:400:in 'each' /Users/zita/Desktop/ManageIQ/manageiq/app/controllers/cloud_volume_controller.rb:400:in 'delete_volumes' /Users/zita/Desktop/ManageIQ/manageiq/app/controllers/cloud_volume_controller.rb:41:in 'button' Cloud Network cannot be deleted: [----] I, [2016-11-24T13:40:09.061901 #26858:3fcf59c817e4] INFO -- : Completed 500 Internal Server Error in 92ms (ActiveRecord: 4.7ms) NotImplementedError (raw_delete_network must be implemented in a subclass): [----] F, [2016-11-24T13:40:09.064570 #26858:3fcf59c817e4] FATAL -- : app/models/cloud_network.rb:91:in 'raw_delete_network' [----] F, [2016-11-24T13:40:09.064758 #26858:3fcf59c817e4] FATAL -- : app/models/cloud_network.rb:72:in 'delete_network' Cloud Network cannot be edited: nothing happens (no error message) Cloud Subnets cannot be deleted: Expected([200, 204]) <=> Actual(401 Unauthorized) excon.error.response :body => "{\"error\": {\"message\": \"The request you have made requires authentication.\", \"code\": 401, \"title\": \"Unauthorized\"}}" :cookies => [ ] :headers => { "Content-Length" => "114" "Content-Type" => "application/json" "Date" => "Thu, 24 Nov 2016 12:46:01 GMT" "Server" => "Apache/2.4.6 (Red Hat Enterprise Linux)" "Vary" => "X-Auth-Token" "WWW-Authenticate" => "Keystone uri=\"http://<IP_ADDRESS>:5000\"" "x-openstack-request-id" => "req-58e3606a-0c6c-4ec4-9855-c3ba3f8fd479" } :host => "<IP_ADDRESS>" :local_address => 
"<IP_ADDRESS>" :local_port => 49961 :path => "/v2.0/tokens" :port => 5000 :reason_phrase => "Unauthorized" :remote_ip => "<IP_ADDRESS>" :status => 401 :status_line => "HTTP/1.1 401 Unauthorized\r\n" [cloud_subnet/button] (This may be right. Needs confirmation.) Cloud Subnet cannot be edited: Error caught: [MiqException::MiqCloudSubnetUpdateError] Expected([200, 204]) <=> Actual(401 Unauthorized) excon.error.response :body => "{\"error\": {\"message\": \"The request you have made requires authentication.\", \"code\": 401, \"title\": \"Unauthorized\"}}" :cookies => [ ] :headers => { "Content-Length" => "114" "Content-Type" => "application/json" "Date" => "Thu, 24 Nov 2016 12:50:16 GMT" "Server" => "Apache/2.4.6 (Red Hat Enterprise Linux)" "Vary" => "X-Auth-Token" "WWW-Authenticate" => "Keystone uri=\"http://<IP_ADDRESS>:5000\"" "x-openstack-request-id" => "req-fb4bc236-8894-4c2e-a16b-c421362ff929" } :host => "<IP_ADDRESS>" :local_address => "<IP_ADDRESS>" :local_port => 50150 :path => "/v2.0/tokens" :port => 5000 :reason_phrase => "Unauthorized" :remote_ip => "<IP_ADDRESS>" :status => 401 :status_line => "HTTP/1.1 401 Unauthorized\r\n" (This may be right. Needs confirmation.) Network Router cannot be edited: Nothing happens. No error message. Network Router cannot be deleted: Delete initialized for Network Router. Nothing happens. This blocked a blocker BZ https://bugzilla.redhat.com/show_bug.cgi?id=1383203 . @tzumainn @aufi @Ladas Any input would be appreciated :) Volume related errors look to be interesting - first one uses controllers in Cloud provider, second one uses code in Storage provider. I am not sure if it is correct (I guess a link to Volumes was forgotten in Cloud provider), but Storage provider was recently extracted by @roliveri team so better ask there what is the expected behaviour. Router delete BZ https://bugzilla.redhat.com/show_bug.cgi?id=1397454 @gildub Any input on delete for Cloud Networks? 
@ZitaNemeckova, Not sure which branch is invoked here. For master: Cloud Network Create/update/delete works Cloud Subnet: Create/Delete works Update has an issue which is addressed with https://github.com/ManageIQ/manageiq/pull/12045 (replaces backend with tasks queuing) Network Router Create/edit/delete works @ZitaNemeckova, For euwe (5.7 -> Tested using commit #11037d88e32e681d8cb0311df5a24adc33584fdf) Cloud Network Create/edit/delete works Cloud Subnet Create/edit/delete works Network Router Create/edit/delete works Above tests do not include cases where network items have dependencies, which might be the case for your. @gildub thanks a lot :) @ZitaNemeckova, No worries, please don't hesitate if you have any questions. Good luck with tests. ;)
GITHUB_ARCHIVE
This post will cover how blockchain technology works, the advantages it presents, as well as its disadvantages. It is easy for newcomers to mix up cryptocurrencies and blockchain. Although the first blockchain (Bitcoin) carries a cryptocurrency, that does not mean all blockchains are (or will be) used as payment networks. Blockchain technology has unique properties without which we would not be able to guarantee such a high level of transparency, decentralization and immutability. If you do not yet understand some of the concepts, do not worry; we will explain them to you step by step. The following concepts constitute the foundation of blockchain technology, and we will go over them without getting too technical on the implementation details:
- Distributed ledgers
- Consensus mechanisms
- Public and private blockchains
- Blockchain platforms

What is a ledger?

Today, vast amounts of information are controlled and managed by institutions that we trust to act honestly. Blockchain technology enables a shift from today’s centralized repositories of information to more decentralized, robust, fault-tolerant networks. Using blockchain technology, it is possible to imagine a future where we do not rely on centralized organizations to manage our data but we, the users, have greater control of our digital lives. So how do blockchains enable this? It starts with simple accounting. Ledgers are data sources that track accounts and balances of assets; they are fundamental to accounting and tracking value. Today most ledgers are maintained in databases run by central authorities such as banks, credit card companies or governments. Record-keeping by a central authority is beneficial for several reasons:
- A central authority can maintain data integrity by restricting access to the ledger to authorised users.
- The data location is known and accessible to the data curators, which allows for fast retrieval and regulated access to the data.
Updates to a database are known as “transactions”. The following are key properties for a database transaction:
- Transactions need to be atomic (where all updates are applied to the ledger or none of them are).
- Transactions need to be durable, meaning they persist in the system and there is no chance of the change being reverted.
- Transactions need to be consistent. Ledger data must be modified in a reliable, permitted way.
- They also need to be isolated. Transactions must be isolated from one another.

Recently, distributed ledger technology has gained popularity. Distributed ledgers do not rely on a central authority to maintain data. Maintaining agreement among the stakeholders of the ledger is a difficult problem. Data integrity can be maintained using public/private key cryptography, which can verify who initiates transactions and that they are authorized to do so. Transactions in a distributed ledger must have the same properties as in a centralized ledger: they must be atomic, durable, consistent and isolated. Accessing the latest data often takes longer in a distributed ledger system than a centralized one, because it takes time for the participants to agree on the ledger’s state. Therefore the latest transactions are not immediately available to every node (participant in the network). A ledger is distributed when it has been securely replicated across geographic locations. Some features of DLT include:
- Consensus formation
- Peer-to-peer protocols
- Cryptographic infrastructure

Blockchain technology is a version of distributed ledger technology. It implements these features through a specific data structure called a “blockchain” and consensus mechanisms such as proof of work, proof of stake, delegated proof of stake, proof of authority, etc. To learn why ClimateTrade uses blockchain technology to offset carbon emissions, click here.
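To make the "blockchain data structure" concrete: the core trick is that every block commits to the hash of its predecessor, so editing any historical record invalidates everything after it. A minimal sketch in Python of a toy hash chain (not a real consensus network; block contents here are invented for illustration):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Canonical JSON so the same content always hashes the same way.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    # Each new block commits to the hash of the previous one.
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list) -> bool:
    # Recompute every link; any tampering breaks the chain downstream.
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))  # True
chain[0]["transactions"][0]["amount"] = 500  # tamper with history
print(verify(chain))  # False
```

Real blockchains add consensus mechanisms on top of this structure so that independent, mutually distrusting nodes agree on which chain of blocks is authoritative.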
OPCFW_CODE
I’m a Computing and Mathematical Sciences Ph.D. student in the Computational Vision Group at Caltech, advised by Pietro Perona. My interests tend to cluster around machine learning, computer vision, and signal/image processing. I’m supported by an NSF Graduate Research Fellowship. I’ll be spending summer 2019 at Microsoft Research in Redmond, WA. I graduated from Duke University in May 2017 with a B.S.E. in Electrical and Computer Engineering and Mathematics. I’ve also spent time at the Air Force Research Lab, the Duke University Marine Lab, and the Woods Hole Oceanographic Institution. Previously, I’ve worked on OCT imaging and image processing with Sina Farsiu, datacenter-level computational sprinting with Benjamin C. Lee, bioacoustic signal detection with Douglas Nowacek, environmental sensing with Martin Brooke, and oceanographic data analysis with Cindy Van Dover. Presence-Only Geographical Priors for Fine-Grained Image Classification O. Mac Aodha, E. Cole, P. Perona [webpage] [code] [demo] Statistical Models of Signal and Noise and Fundamental Limits of Segmentation Accuracy in Retinal Optical Coherence Tomography T. DuBose, D. Cunefare, E. Cole, P. Milanfar, J. Izatt, S. Farsiu IEEE Transactions on Medical Imaging, 2018 Enhanced visualization of peripheral retinal vasculature with wavefront sensorless adaptive optics optical coherence tomography angiography in diabetic patients J. Polans, D. Cunefare, E. Cole, B. Keller, P. Mettu, S. Cousins, M. Allingham, J. Izatt, S. Farsiu Optics Letters, 2017 Wide Field-of-View Wavefront Sensorless Adaptive Optics Optical Coherence Tomography for Enhanced Imaging of the Peripheral Retina J. Polans, B. Keller, O. Carrasco-Zevallos, F. LaRocca, E. Cole, H. Whitson, E. Lad, S. Farsiu, J. Izatt Biomedical Optics Express, 2017 Computational Sprinting: Architecture, Dynamics, and Strategies S. Zahedi, S. Fan, M. Faw, E. Cole, B. 
Lee ACM Transactions on Computer Systems, 2017 Seismic survey noise disrupted fish use of a temperate reef A. Paxton, J. Taylor, D. Nowacek, J. Dale, E. Cole, C. Voss, C. Peterson Marine Policy, 2017 Press: Vox, The News & Observer, Public Radio East, UNC-TV. This work was also cited in a letter from 103 members of the U.S. Congress to Secretary of the Interior Ryan Zinke. SyPRID Sampler: A Large-Volume, High-Resolution, Autonomous, Deep-Ocean Precision Plankton Sampling System A. Billings, C. Kaiser, C. Young, L. Hiebert, E. Cole, J. Wagner, C. Van Dover Deep-Sea Research Part II: Topical Studies in Oceanography, 2016 Press: Scientific American, WHOI Press Release. An ocean sensor for measuring the seawater electrochemical response of 8 metals referenced to zinc, for determining ocean pH M. Brooke, E. Cole, J. Dale, A. Prasad, H. Quach, B. Bau, E. Bhatt, D. Nowacek IEEE International Conference on Sensing Technology, 2015
OPCFW_CODE
Where can I find/ how can I make methy_data.bgz Hi Shian, Thank you very much for developing NanoMethViz. I am using it for my nanopolish output (.tsv format) and I am following the tutorial you wrote. I learn from the issues in NanoMethViz and the tutorial that I need to transfer the .tsv format by using create_tabix_file. But it is not clear for me how to use it. In Importing data part in the tutorial, methy_tabix <- file.path(tempdir(), "methy_data.bgz") samples <- c("sample1", "sample2") # you should see messages when running this yourself create_tabix_file(methy_calls, methy_tabix, samples) # don't do this with actual data # we have to use gzfile to tell R that we have a gzip compressed file methy_data <- read.table( gzfile(methy_tabix), col.names = methy_col_names(), nrows = 6) Do you know where can I find my methy_data.bgz or how can I prepare the file? Besides, I am kind lost when following steps of importing, exporting data and doing differential analysis. I am very appreciated if you can share more details about the relations of the example files among the steps. Thank you very much. Yingzi Hi Shian, I read more in R help and learned that "methy_data.bgz" is the output tabix file. I have two other questions: When I ran exon_tibble <- get_exons_homo_sapiens(), the progress showed: Loading required package: Homo.sapiens Loading required package: OrganismDbi Loading required package: TxDb.Hsapiens.UCSC.hg19.knownGene The reference genome I used in the upstream steps is hg38. Do you know what should I do to use hg38 in NanoMethViz? I created my tabix file by create_tabix_file( c(input_files), methy_tabix, c(samples) ) and bsseq<-methy_to_bsseq(methy_tabix,out_folder = tempdir(), verbose = TRUE) It reported an error as [2023-04-18 18:34:41] creating intermediate files... [2023-04-18 18:34:41] parsing chr11... 
[2023-04-18 18:34:43] samples found:
Error in data.frame(sample = samples, file_path = path(out_folder, paste0(samples, :
  arguments imply differing number of rows: 0, 1

Would you suggest how I can fix it? The numbers of input_files and samples are the same. Looking forward to your reply! Thank you.

Yingzi

Hi Yingzi,

I do need to make more explicit functions for different genome versions. You will need to construct your own hg38 annotation based on the style of the hg19 one provided, otherwise the genomic coordinates will not line up. If you cannot do this yourself I may get around to it some time next week.

I'm not entirely sure what is causing the error in the conversion to a bsseq object; it looks like no sample names were detected in the bgzip-tabix file. Could you run gunzip -c methy_data.bgz | head in the terminal to check whether the contents look correct?

Hi Shians,

Thank you very much for answering! I figured out the hg38 problem and the bsseq object. Thank you very much for the help. I will close this issue and raise another two issues about differential analysis and plotting. Thank you very much!

Yingzi
Bitcoin Forks Explained

Since its invention by the mysterious Satoshi Nakamoto in 2008, the Bitcoin (BTC) network has churned out over 600,000 blocks while adhering to the protocol outlined in the whitepaper, and without ever being compromised on security. There are currently 18.5 million Bitcoins in circulation, with another 2.5 million to be mined over the next 120 years. The last Bitcoin is expected to be mined around 2140, based on the current rate of block production. Since the third halving on May 11, 2020, 900 Bitcoins are minted per day. The next halving is currently projected to fall on February 29, 2024.

Bitcoins are issued as a reward to miners for completing the computationally expensive process of mining. Mining involves verifying that every Bitcoin transaction is legitimate, i.e. that the funds being sent are actually owned by the sender. To see a real-time count of the number of Bitcoins left to be mined and the total number in existence, check out our tracker here.

All of these features were coded into Bitcoin at its creation. They give the network its strength. They are publicly known facts, which is an integral part of the decentralization of the network. However, as millions of people have adopted the technology, Bitcoin has encountered some growing pains. In particular, the limited block size means that Bitcoin can only process a certain number of transactions per block, i.e. every 10 minutes. This leads to network congestion and high fees.

Forking Bitcoin is one way to address the issues caused by widespread adoption. A fork is essentially the addition of a new chain to the existing blockchain. It doesn't replace previous blocks, but the creators of the fork hope that their version is widely adopted and becomes the standard. Let's take a look at the origins of the forking debates, then go through the different Bitcoin forks that exist today.
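The figures above (900 BTC per day, ~21 million coins in total) follow from two protocol constants: a starting subsidy of 50 BTC that halves every 210,000 blocks, and a ~10-minute block target (~144 blocks per day). A quick sketch of the arithmetic; note this uses floats for brevity, while the real consensus code works in integer satoshis with floor division:

```python
HALVING_INTERVAL = 210_000   # blocks between subsidy halvings
BLOCKS_PER_DAY = 144         # one block per ~10 minutes

def block_subsidy(halvings: int) -> float:
    """Block reward in BTC after a given number of halvings."""
    return 50.0 / 2 ** halvings

# After the third halving (May 2020) the subsidy is 6.25 BTC,
# so daily issuance is 144 * 6.25 = 900 BTC, the figure quoted above.
daily_issuance = BLOCKS_PER_DAY * block_subsidy(3)

# Summing the geometric series over all halvings gives the ~21M cap.
total_supply = sum(HALVING_INTERVAL * block_subsidy(h) for h in range(64))
```

The cap is never reached exactly: the geometric series converges to just under 21 million.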
The original Bitcoin protocol hard-coded a block size limit of 1MB, with each block being processed every 10 minutes. Using the median transaction size, this works out to between 4 and 7 transactions per second (tps). This rate wasn't a problem during the early years of Bitcoin, when the number of transactions processed on the network was still low. As that number grew, however, it started to take much longer to get a transaction processed. Users who wanted to get their transaction into the next block were paying fees of up to $50 in December 2017. This is called the scaling problem.

Taking into account the structure of Bitcoin, there are two viable solutions to the scaling problem: increasing the block size, or reducing the size of each transaction. Both are means of allowing more transactions to fit in each block.

The Bitcoin block size and its associated scaling problems have been the cause of forks in the protocol. Bitcoin Cash and Bitcoin SV (itself a fork of Bitcoin Cash) are the two best-known forks. But did you know that Bitcoin itself has undergone a fork? Let's first take a look at the different types of fork before we get into the main features of each split.

The main difference between a soft fork and a hard fork is the degree to which an update is respected by miners. If all miners agree to a rule change and then proceed to only validate blocks that respect the new rule, then there is no need for a new chain to split off. However, if there is no consensus around the rule change, then some miners will continue to validate blocks according to the old rules, while others will validate according to the new rules. Blocks mined by each group will be incompatible with the other. This causes a hard fork, i.e. a new chain that splits off. For an in-depth guide to hard forks, soft forks, and chain splits, see this writeup.

SegWit (which stands for Segregated Witness) is a soft fork of Bitcoin.
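The 4-7 tps figure above can be reproduced directly from the 1MB limit and the 10-minute block time. The median transaction sizes used below are illustrative assumptions (real medians drift over time), chosen to bracket the quoted range:

```python
BLOCK_SIZE = 1_000_000   # bytes: the original 1MB limit
BLOCK_TIME = 600         # seconds: one block per ~10 minutes

def tps(median_tx_bytes: int) -> float:
    """Throughput in transactions per second for a given median tx size."""
    txs_per_block = BLOCK_SIZE // median_tx_bytes
    return txs_per_block / BLOCK_TIME

# Assumed median sizes of ~250-400 bytes reproduce the 4-7 tps range.
tps(250)  # ~6.7 tps
tps(400)  # ~4.2 tps
```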
The SegWit protocol upgrade reduces effective transaction size by moving transaction signatures out of the base transaction data and into a separate witness structure. As signatures make up a large proportion of the size of a transaction, this discount means that more transactions can be processed per block. The result is lower transaction fees and shorter confirmation times.

SegWit is a soft-fork protocol upgrade designed to fix all forms of transaction malleability and increase block capacity. SegWit transactions use different signatures and redeem scripts that are moved to a new structure, which doesn't count towards the 1MB block size limit. Depending on the parameters, SegWit transactions are at least 25% smaller than comparable legacy transactions. The blocks are therefore still the same size, but they can fit more SegWit transactions. Since these transactions are smaller and the fee is determined by size, SegWit transactions naturally cost less; a smaller fee can achieve the same confirmation speed as a legacy transaction. For a deep dive on the SegWit upgrade, check out our guide here.

Bitcoin Cash is a hard fork of the Bitcoin blockchain. It split off from the main chain at block number 478558 on August 1, 2017. Everyone who owned Bitcoin received Bitcoin Cash on that date. BCH initially traded at around $240, while BTC was at $2,700. Bitcoin Cash allowed for 8MB blocks while keeping the block time at 10 minutes, meaning it could process up to 8 times as many transactions per block as Bitcoin.

There has been controversy around Bitcoin Cash since its inception. Some people object to its use of the Bitcoin name, taking the view that there is only one Bitcoin: the original (BTC). Bitcoin Cash aims for usability by decreasing transaction times and fees. Yet in doing so, it compromises on the features that have given Bitcoin its strength since day one: its widespread adoption, decentralized nature, and resistance to attack. The counterargument: if Bitcoin is more expensive or slower than traditional financial systems, people aren't going to use it.
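To make the SegWit discount described above concrete: BIP 141 defines a block *weight* in which witness bytes count a quarter as much as base bytes, and fees are charged on the resulting virtual size (vsize). A sketch of the calculation; the transaction sizes below are hypothetical, roughly the shape of a 1-input/2-output payment, and not measured values:

```python
import math

def vsize(base_size: int, total_size: int) -> int:
    """Virtual size per BIP 141: weight = 3*base + total, vsize = ceil(weight/4)."""
    weight = 3 * base_size + total_size
    return math.ceil(weight / 4)

# A legacy transaction has no witness data, so base == total and vsize == size.
legacy_vsize = vsize(226, 226)   # 226 vbytes

# Hypothetical SegWit equivalent: 113 base bytes plus 107 witness bytes.
segwit_vsize = vsize(113, 220)   # ceil(559 / 4) = 140 vbytes
```

With fees charged per vbyte, the SegWit version here pays roughly 38% less, consistent with the "at least 25% smaller" claim in the text.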
Hard forks fragment the community. This weakens the network and makes it easier for a single bad actor to gain control via a 51% attack (see Bitcoin Gold below for successful examples of this exploit). It's not difficult to imagine a future where everyone who thinks they have a solution to one of Bitcoin's scalability problems forks the blockchain, creating Bitcoin version 50.0, with each fork splitting the community further and weakening all of the resulting blockchains.

Bitcoin is designed to resist any central authority. Naturally, this makes the efficiency of unilateral decision-making unattainable, not to mention undesirable. Decentralization is a core tenet of Bitcoin itself. The SegWit upgrade was an example of stakeholders coming together to improve a common good, reaching consensus in a decentralized manner: miners "voted" for SegWit by signaling support in the blocks they mined.

Bitcoin forks are often most vocally supported by people who stand to benefit personally more than the technology or the community does. Don't be fooled by the name: Satoshi's vision is laid out in the whitepaper. Bitcoin SV is a fork of Bitcoin Cash, itself a fork of Bitcoin. On November 15, 2018, at block 556766, everyone holding BCH received the same amount of BSV. Bitcoin Cash was trading around $420, while BSV closed its first daily candle at $101.

Bitcoin SV's loudest proponent is Craig Steven Wright, an Australian businessman whose claims to be Satoshi Nakamoto have been repeatedly and convincingly discounted. BSV's main claim to fame is its increased block size. In May 2020 a 309MB block was mined on the network, the largest block ever mined on a blockchain. This race for the largest block ignores many factors of the original Bitcoin that lend it its strength.
Such huge blocks require massive resources to propagate and validate, which concentrates control of the network in the hands of those with the most means at their disposal. This is a far cry from the trustless, decentralized system that Satoshi described in the whitepaper. In fact, if you remove the decentralization aspect of Bitcoin, you end up with something closer to the Federal Reserve.

Cryptocurrencies that are truly focused on the benefits of decentralization have introduced algorithms that are resistant to ASIC (application-specific integrated circuit) miners. Monero, for example, can be effectively mined on consumer-grade hardware, which helps keep power distributed evenly over the network. Another cryptocurrency that aims to resist ASIC miners is Bitcoin Gold, another hard fork of Bitcoin. As we will now see, it hasn't worked out so well for BTG.

Bitcoin Gold forked from the main Bitcoin chain on October 17, 2017, at block 491407. Just afterward, the Bitcoin Gold website suffered a distributed denial-of-service (DDoS) attack, and such attacks have plagued BTG since its creation. It has twice been the subject of 51% attacks, where one actor gains an absolute majority of the hashpower and is able to approve whichever blocks they like. In May 2018 an unidentified party used such an attack to steal 388,000 BTG (worth approximately US$18 million) from a number of exchanges. Bitcoin Gold was subsequently delisted from Bittrex, and it suffered another 51% attack in January 2020, in which roughly $72k was double-spent.

Despite multiple forks with more or less success (often less), the original Bitcoin is still the most decentralized, most secure, and most valuable cryptocurrency out there. Its first-mover advantage continues to play a powerful role in adoption. Satoshi's vision, as explained in the whitepaper, was for a trustless currency that relied on no central authority. This has meant that improvements to the network are often slow.
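The gambler's-ruin calculation in section 11 of the Bitcoin whitepaper makes the 51% point quantitative: an attacker with a minority of the hashpower has rapidly vanishing odds of catching up to the honest chain, while a majority attacker succeeds with certainty. This is a direct Python translation of the C snippet in the whitepaper:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hashpower share q catches up from z blocks
    behind (Bitcoin whitepaper, section 11)."""
    p = 1.0 - q
    if q >= p:
        return 1.0  # a majority attacker always catches up eventually
    lam = z * (q / p)
    s = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        s -= poisson * (1 - (q / p) ** (z - k))
    return s

# With 10% of the hashpower, waiting 5 confirmations pushes the attacker's
# success probability below 0.1% (matching the whitepaper's table); with 51%,
# the attack succeeds no matter how many confirmations you wait.
attacker_success(0.1, 5)
attacker_success(0.51, 100)
```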
While forks of Bitcoin have largely relied on the name recognition of their crypto-celebrity advocates, changes to Bitcoin are achieved through community consensus, where no one is compelled to support anything they don't want to. Developers are working on solutions to Bitcoin's scaling issues. Any future changes will not be forced upon users, but will instead be brought about through discussion, collaboration, and consensus. This is the future of decentralized decision-making, and Bitcoin is at the fore of it all.

The scaling problem refers to the bottlenecks encountered at times of high congestion on the Bitcoin network. Bitcoin needs to scale to meet the transaction demands of its users. This has been addressed with the introduction of BIP141 (Bitcoin Improvement Proposal 141: Segregated Witness - Consensus Layer), better known as SegWit. The Lightning Network will also improve Bitcoin's scaling. It aims to support billions of transactions per second, with each occurring in just milliseconds.

SegWit is a Bitcoin Improvement Proposal that reduces transaction sizes, meaning more can fit into a single block. This lowers fees for SegWit transactions. SegWit also fixes a potentially very destructive flaw in the Bitcoin protocol: transaction malleability. For a discussion of this, see here.

Bitcoin Cash is a hard fork of the Bitcoin network. It uses 8MB blocks, in comparison to Bitcoin, where the average block size hovers between 0.8 and 1.3MB.

BSV stands for Bitcoin Satoshi's Vision.
Nice picture… but it has no precious things inside.

Rackspace has completed its Crawley data centre in West Sussex, and claims that it is among the most power-efficient in the UK. The new facility is 130,000 sq ft in area, and the site covers 15 acres in all. It is designed for up to 50,000 servers. The amount of power available is initially 6MW across two suites, with plans …

"...3 kW for all the fluorescent tubes lighting up the empty space"

The lights need to be on to take the picture. Large data centres of this style have used motion-detecting switches for lighting for years. I've been through Equinix centres a few times, and they are all completely dark, except for the segments that happen to have people in them.

"Companies prefer to keep their data local," Texas-based Rackspace tells us. That may be so, but data locality and privacy are causing a massive headache for Microsoft and other cloud giants right now.

That raises an interesting question (which the subheader alludes to): what would happen if Rackspace got served with a warrant in the US for access to a UK customer's data? As far as I can tell, they would have to comply as a US company (if the events surrounding MS & Ireland are any indication). Anyone with a legal background willing to have a go at that one?

Funny you should mention Microsoft. You should ask Caspar Bowden. He's the former head of privacy there, and seems to be strongly of the opinion that US company = US law, regardless of location.

If only there were some way to obfuscate your data so that when your cloud provider sends it to the authorities it's unreadable... oh yes, encryption. If a UK-based company holds the encryption keys on their own premises, then the US authorities can't force them to hand the keys over, because the UK company isn't subject to American laws. Yes, the data will be handed over, but it won't be readable, so who cares?
If you were just using cloud storage, such that the data was encrypted as it left your site and decrypted as it entered your site, this might work. Unfortunately, if you actually process any data in a cloud service, the service needs to be able to decrypt and encrypt the data as it is used, requiring the encryption keys to be on the cloud servers themselves, and thus just as vulnerable to being snaffled as the data itself! So, unfortunately, encryption is not the answer to all the issues.

Maybe Rackspace 'UK' should simply pay Rackspace <insert_haven_of_choice_here> a squillion or two a year for the use of the name Rackspace, and deny all knowledge of any actual business done in this country.

That's not going to work. Either you have a clear, legally acceptable defensive model or you are not running a sustainable business, and you shouldn't pretend you can protect customers. Although, that strikes me as the Silicon Valley model anyway.

OK, forget what I said :).

"Anyone with a legal background willing to have a go at that one?" IANAL, but as ever the devil is in the detail. A quick look at Webcheck shows an E&W company, Rackspace Ltd. Who owns this? Are all the officers of the company UK citizens? What is the legal relationship with the US company? Are the agreements which create that relationship with the US company under English law? Do the agreements forbid handing over customers' data to anyone except the customers unless ordered to do so by an English court? These are the sort of questions that any customer's legal department should be asking of any hosting company with whom they are thinking of doing business.

Biting the hand that feeds IT © 1998–2019
feat: experiment for reusing allocated buffers for Dyn multivariate

Takes the approach of making a few fields OnceCell, allowing precomputed values to be initialized while also allowing them to be taken out and modified without reallocating. A similar way to update the location via set or set_with would also be implemented. Nearly identical changes would also be implemented in StudentsT.

@FreezyLemon an alternative to #273, what are your thoughts?

Noting as a tangential discussion: since #180 was referring to MCMC, perhaps there would also be a need for many more variables than nalgebra is well suited to. We've mentioned ideas for supporting different backends before, but I figure that if we can specify some of the API for nalgebra, then we can emulate it where possible for a different matrix crate.

> @FreezyLemon an alternative to https://github.com/statrs-dev/statrs/pull/273, what are your thoughts?

This seems like it will result in a nicer API than 273 for sure. Hmm... without OnceLocks or some other thread synchronization, this will make the struct non-Sync and non-Send though. And I can see complexity becoming a problem, with the internal state needing to be valid in any circumstance.

I realized this morning that all of this works because of take, so Option would be viable. Also, you mentioned how a distribution should be immutable, and I agree, so semantically it would make more sense to have an into_new_from_nalgebra that takes self and relies on take and clone_into.

Redid this with Option. Regarding async, I think this would be fine for an RwLock style of code, since the only times the Options are None are when there's a mutable reference to them. I considered a type that represents this kind of "field that owns data": one that always logically stores a value but is just a wrapper around an Option.

Have you benched this? I've tried implementing a builder pattern for some time, and benches have always shown it's not really any faster than just reallocating.
Unless the number of variables (and thereby the allocation size) is much larger than I've tested, I doubt this makes much difference at all performance-wise. Some napkin math (please double-check) reveals that for n variables, we need to allocate 3 n x n matrices and one n x 1 matrix/vector. For 10 variables, that's 310 f64s (8 bytes each), so 2480 bytes, i.e. less than 2.5KiB. Such small allocations will usually be incredibly fast unless on memory-constrained hardware. Even with 1000 variables, we're talking about ~24MiB, which really isn't that much; that can fit into the CPU cache on a fair amount of modern hardware.

I did a rudimentary profiling run for 1000 variables, just running MultivariateNormal::new 40 times (~10 secs runtime): https://share.firefox.dev/4enR0hX

The code is basically:

```rust
fn main() {
    let (mean, cov) = create_mean_and_cov();
    for _ in 0..40 {
        MultivariateNormal::new_from_nalgebra(mean.clone(), cov.clone()).unwrap();
    }
}
```

If you look for allocating functions (mostly clone()), you'll find that apart from some calls very deep inside nalgebra that are probably not avoidable by statrs, the allocations that are done don't seem to be very heavy at all. Even the manual clones inside the loop in fn main are less than 0.5% of the total runtime.

Oh, this is a smarter way to do preliminary benchmarks; I was going to check if this in-place mutation was faster 🙃. Thanks for doing this! What did you use to profile and view with the Firefox Profiler?

Changing from nalgebra right now does seem a bit like wasted effort, so I'm averse to it until we get some more feedback. I'll close this and share thoughts about something benefitting MCMC on the initial issue.

> What did you use to profile and view with Firefox profiler?

I used samply; it's really easy to use after a bit of setup and does the Firefox Profiler stuff for you. perf record also works, but is a bit more barebones (flamegraph can convert the artifacts from that into nice graphs).
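The napkin math above is easy to double-check mechanically. A quick sketch (the "3 n x n matrices plus one n x 1 vector" breakdown is taken straight from the comment above; note that for n = 1000 the byte total works out to roughly 24 MB, not 3 MB, since each of the ~3 million f64s is 8 bytes):

```python
BYTES_PER_F64 = 8

def mvn_alloc_bytes(n: int) -> int:
    """Bytes allocated for n variables: 3 n x n matrices plus one n x 1 vector."""
    elements = 3 * n * n + n
    return elements * BYTES_PER_F64

mvn_alloc_bytes(10)    # 310 f64s -> 2480 bytes, under 2.5 KiB
mvn_alloc_bytes(1000)  # 3,001,000 f64s -> 24,008,000 bytes, ~24 MB
```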
ReMapEnrich is an R package to identify significantly enriched regions from ReMap catalogues or user-defined catalogues. ReMapEnrich provides functions to import any in-house catalogue and to automate and plot the enrichment analysis for genomic regions.

Bioinformatics tools to compute the statistical enrichment of genomic regions within the ReMap catalogue or any other catalogue of peaks.

Current next-generation sequencing studies generate a large variety of genomic regions, ranging from regulatory regions with transcription factor or histone mark ChIP-seq to variant calls or coding/non-coding transcripts. The number of complex catalogues from large-scale integrative efforts and large sequencing projects is also increasing. To facilitate the interpretation of functional genomics, epigenomics, and genomics data, we have developed the R package ReMapEnrich to identify significantly enriched regions from user-defined catalogues. ReMapEnrich provides functions to import any in-house catalogue and to automate and plot the enrichment analysis for genomic regions.

These instructions will get you a copy of the project up and running on your local machine for development and testing purposes. See deployment for notes on how to deploy the project on a live system.

Here are the dependencies used in our package; the build will prompt for installation: R.utils, data.table, RMySQL, GenomicRanges.

To install an R package, start by installing the devtools package from CRAN. Install the ReMapEnrich R package from GitHub using the following code, where you need to remember to list both the author and the name of the package.

Although RStudio does have various tools for installing packages, the most straightforward approach is to download the zipped file, open the ReMapEnrich.Rproject file, and in RStudio choose Build -> Install and Restart. Or just follow the steps described in the previous section, entering the code into the Console in RStudio.
This example is based on a small dataset (input and catalogue) released with the ReMapEnrich package. It goes through various steps: loading data, computing enrichments, and visualizing results. Please read the Basic use documentation and the plot examples.

Here we discover more advanced functions and possibilities of the ReMapEnrich package. You may want to read about the basic functions first in order to understand the principles of enrichment analysis. Please read the Advanced use documentation and the universe usage. Please read Basic use and Advanced use for full documentation of ReMapEnrich functionality.

```r
query <- bedToGranges(system.file("extdata",
                                  "ReMap_nrPeaks_public_chr22_SOX2.bed",
                                  package = "ReMapEnrich"))

# Create a local directory
demo.dir <- "~/ReMapEnrich_demo"
dir.create(demo.dir, showWarnings = FALSE, recursive = TRUE)

# Use the function downloadRemapCatalog
remapCatalog2018hg38 <- downloadRemapCatalog(demo.dir)

# Load the ReMap catalogue and convert it to Genomic Ranges
remapCatalog <- bedToGranges(remapCatalog2018hg38)
```

The basic way to compute an enrichment is to run with the default parameters: no universe, a single core, default shuffling, and default overlaps. Please read the Basic use vignette for more documentation.

```r
# The option byChrom is set to TRUE as we are only working
# on one chromosome for this analysis.
enrichment.df <- enrichment(query, remapCatalog, byChrom = TRUE)
```

Here we display a dot plot. Please read Basic use and Advanced use for more documentation.

This project is licensed under the MIT License - see the LICENSE.md file for details.

We are grateful to Aurélien Griffon and Quentin Barbier for their very early contributions to this project.